doctor_eval · 3 years ago
> Thin Events add coupling

That’s not my experience. In fact I’d say fat events add coupling, because they create a dependency on the event body that is invisible to the emitter and becomes ossified.

So I’d say the opposite: thin events reduce coupling. Sure, the receiver might call an API and that creates coupling with the API. But receivers are also free to call or not call any other API they want. What if they don’t care about the body of the object?

So I’m on team thin. Every time I’ve been tempted by the other team, I’ve regretted it. It’s also in my experience a lot more difficult to version events than it is to version APIs, so reducing their surface area also solves other problems.
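
To make the distinction concrete, here’s roughly what the two shapes look like. This is purely illustrative - the field names are made up, not from any particular system:

    # Illustrative shapes only, not a real schema.
    thin_event = {
        "type": "order.updated",
        "order_id": "ord-7831",   # receivers fetch whatever else they need via APIs
    }

    fat_event = {
        "type": "order.updated",
        "order_id": "ord-7831",
        "body": {                 # a snapshot every receiver may silently depend on
            "status": "shipped",
            "items": [{"sku": "X1", "qty": 2}],
            "total": 59.90,
        },
    }

With the fat shape, the emitter can never safely remove a field from "body", because it can’t see which receivers read it.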

SideburnsOfDoom · 3 years ago
> thin events reduce coupling. Sure, the receiver might call an API and that creates coupling with the API.

You make a statement in the first sentence, and in the next sentence produce evidence ... that the statement is wrong. And, YMMV.

It is my experience that thin events add coupling. If Service B receives an event and wants to process it ASAP (i.e. near real time), and so calls back over HTTP to Service A for the details, then

a) There is additional latency for an HTTP call, and time variance: even if the average latency of an HTTP round-trip is fine, the P99 might be bad.

b) You're asking for occasional "eventual consistency" trouble when A's state lags behind or has moved on ahead of the event

c) Worst of all: when Service A is down or unreachable, Service B is unable to do work, so Service B's uptime must be <= Service A's uptime. You have coupled their reliability, and if Service B is identified as mission-critical, then you have the choice of either making Service A equally critical, or decoupling them, e.g. with "fat events".

I don't believe that it's accurate to say "receivers are also free to call or not call...". It's not choosing a flavor of ice cream; you make the calls that the work at hand _needs_.

If you find that you never need to call back to Service A, then yes, "thin events" would suit your case better. That has not been my experience.

It's fair that event data format versioning is a lot of work with fat events - nothing is without downside. But in your case, do you have a "dependency on the event body"? All of it? If a thin event is all that you need, then you depend on a couple of IDs in the event body, and not the rest. JSON reading is very forgiving of added/removed fields; you can ignore the parts of a fat event that you don't care about.

doctor_eval · 3 years ago
> You make a statement in the first sentence, and in the next sentence produce evidence ... that the statement is wrong.

My first sentence was quoting the article, and the next sentence refutes it. Sorry if that wasn’t clear.

Re your point a), yes I agree in this case you’d send the contents in the body, but then I’d tend to call it stream processing rather than event processing - I admit this might seem like splitting hairs, but I do feel that there’s a difference between events and data distribution. And I personally find the data distribution pattern tends to be a lot more specialised.

Re b), it’s just an assumption that the receiver needs the version of data in the message, rather than the latest version. So I don’t think this is a strong argument for fat events.

Re c), again, it’s an assumption that the receiver needs the exact data provided in the event body; but I’ve found that, except in very simple cases, it’s very difficult to efficiently create event bodies that contain everything that all receivers are going to need. Maybe the receiver needs to collate a bunch more data, in which case the problem persists regardless of fat or thin, or maybe it just clears a local cache, in which case the problem is deferred until the data is needed and you probably have other things to worry about then anyway.

> I don't believe that it's accurate to say "receivers are also free to call or not call...". It's not choosing a flavor of ice cream; you make the calls that the work at hand _needs_.

Sure, and the calls you make depend on the context, and on whether there is enough data in the event body to avoid making any calls at all. And I’m saying that in my experience that’s not generally the case. What I’ve seen is that the sender composes some event body and sends it, and the receivers end up needing to call APIs anyway.

In which case, the sender may as well have not gone to the trouble, hence my preference for thin events.

> But in your case, do you have a "dependency on the event body"? All of it?

From a maintenance perspective, the sender doesn’t know what the receivers depend on, so even if all your receivers only depend on the IDs, there is no way to find out. Because of this, it’s really easy to add fields to an event message, but really dangerous to remove them, because you can’t easily tell what receivers depend on the thing you’re removing. This is why I said that fat events create more coupling than thin events.

Of course as with most things there are always exceptions. Maybe I should have said, “I’m on team thin by default. But of course some use cases require fat messages, in which case proceed with great care”.

delusional · 3 years ago
> b) You're asking for occasional "eventual consistency" trouble when A's state lags behind or has moved on ahead of the event

If you allow A's state to lag behind its own events, then how are you ever going to create a sane system? Surely A has to be either ahead of or at the state that caused the event to be emitted, or events are pointless.

FeepingCreature · 3 years ago
> b) You're asking for occasional "eventual consistency" trouble when A's state lags behind or has moved on ahead of the event

Note that this is the default case when B is recovering after an outage.

Personally, I consider events to be insane. "We create an immutable database so that the state of the system is always recoverable." Okay, cool, very functional programming of you. "But then to actually work with the event from the immutable database, you have to query a stateful service." ??? What? And even fat events only go so far to get you out of that. So with a stream of n events, you don't have n states that the application can be in, but n times the product of all possible states of every other service that you query. How does this help?!

paphillips · 3 years ago
I also disagree with the article - thin events don't always result in more coupling, and I'll add that thin events can remove temporal or state coupling, as illustrated below. The caveat: as with many things, choosing one team or the other has nuance and depends on the specific scenario.

An example: I'm using thin events in a master data application integration scenario to send a 'sync this record' type of command message into a queue. The message body does not have the record details, only the basic information to uniquely identify the record. It also doesn't identify the type of change, except for a flag marking deletes. The 'sync' message is generalized to work for all entities and systems, so routing, logging, and other functions preceding the mapping and target operation have no coupling to any system or entity and can expect a fixed message format that will probably never change. Thus versioning isn't a concern.

Choosing team 'thin event' does result in an extra read of the target system, but that is a feature for this scenario and what I want to enforce. I can't assume a target system is in any particular state, and the operation to be performed will be determined from the target system at whatever point in time a message is processed, which could be more than once. If the message ended up in a dead letter queue, it can be reprocessed later without issue. If one production system's data is cloned down to a lower environment, the integrations continue to work even if the source and target environment data is mismatched. No state is stored or depended upon from either system and the design is idempotent (ignoring a target system's business rules that may constrain valid operations over time).
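
As a sketch, the generalized message and an idempotent handler might look something like this (the names are my own illustration, not our actual schema):

    # Illustrative sketch of a generalized 'sync this record' message.
    sync_message = {
        "entity": "customer",               # which master-data entity
        "key": {"customer_id": "C-1042"},   # just enough to identify the record
        "is_delete": False,                 # the only change-type info carried
    }

    def handle_sync(msg, source, target):
        # Idempotent: reads current state at processing time, so the message
        # can be retried or replayed from a dead-letter queue without issue.
        if msg["is_delete"]:
            target.delete(msg["entity"], msg["key"])
            return
        record = source.fetch(msg["entity"], msg["key"])  # state now, not at emit time
        target.upsert(msg["entity"], record)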

In contrast, other scenarios may benefit from or require a fat event. I've never used event sourcing, but as others mention, if current state can be built from all previous events 'rolled forward' or 'replayed', then each event must be a stand-alone immutable record with all information - thin events cannot be used. Or, if a scenario requires high performance we might need to use a fat event to eliminate the extra read, and then compensate for the other consequences that arise.

avereveard · 3 years ago
Assume the data format changes; it would change in the called API as well. As long as the fat event sends data in the same format that the API would return, you'd have the same level of coupling.

I think fat vs thin is more about how many other services the event has to travel through, because thin events would multiply reads by a fair factor, with the tradeoff being the performance hit for the queue system to store and ship large events.

doctor_eval · 3 years ago
With an API you can publish a new endpoint (/v1, /v2 etc). It’s normally reasonably easy to maintain an old API even while you add features to the new API, and the runtime penalty is minimal because clients would be expected to call just one version of the API for any given event. (You can also see who’s calling the old API and ask them to change)

But this is not true for events. If you change the body such that you now need to maintain two versions of an event, then you have to publish both events simultaneously, which means double the server-side effort, storage, etc. for each event version. It’s pretty inefficient, and painful. You can work out who subscribes to the old event, but there is still a big efficiency hit.

You might be right about many reads per event in a simplistic way; if you have a lot of clients then it could be expensive if you don’t have a server-side cache. But there would typically be a lot of temporal locality in such a system, so it seems like an easy problem to solve for most use cases; you don’t have to cache for long, though caches are of course tricky if your use case is not very simple. That said, if there is already an HTTP connection open, then the additional latency and bandwidth hit caused by these events is going to be minimal in most cases, and probably drowned out entirely if you need to push multiple versions.

As I said in another thread, I should have said that thin is my default. There are cases when fat makes more sense, but normally I’d start with thin and see if I need to flesh it out. Whenever I’ve started fat I’ve ended up reverting.

drewcoo · 3 years ago
Thank you. Came here to say that.

When I've seen this fat event pattern it's been because different services' responsibilities were not fully separated. And that's tight coupling. Fat events imply tight coupling.

The "thin" pattern described in the article goes like this:

1) service FOO gets an event

2) FOO then has to query BAR (and maybe BAZ and QUUX) to determine the overall state of everything to determine what to do next

And #2 means that this kind of "thin" is tightly coupled, too.

I've also personally seen thin events that are not the article's thin strawman.

I sometimes wonder if people understand coupling or design.

majke · 3 years ago
I'll bite. Neither. Both. Depending on the system.

When the "state" is large, or changes often, obviously you can't send full state every time - that would be too much for end-nodes to process on every event. Both cpu - deserialization, and bandwidth. Delta is the answer.

Delta though is hard, since there always is an inherent race between getting the first full snapshot, and subscribing to updates.

On the other hand, because doing deltas is hard, fat events carrying the full state might be okay for simple, small, infrequently updated things.

There is a linear tradeoff on the "data delivery" component:

- worse latency saves CPU and bandwidth (think: batching updates)

- better latency burns more CPU and bandwidth

Finally, the receiver system always requires some domain-specific API. In some cases passing a delta to the application is fine; in some cases passing a full object is better. For example, sometimes you can save a redraw by just updating some value; in other cases the receiver will need to redraw everything, so passing the full object is totally fine.

I would like to see a pub/sub messaging system that solves these issues: one where you can "publish" an object, select a latency goal, and "subscribe" to the event on the receiver, letting the system choose the correct delivery method. For example, the system might choose pull vs push, or an appropriate delta algorithm. As a programmer, I really just want access to the "synchronized" object on multiple systems.

echelon · 3 years ago
There's a third type of event:

- Entire Object.

You send the entire state of the entire object that changed. Irrelevant fields and all.

This makes business logic and migrations easier in dependent services. You can easily roll back to earlier points in time without diffing objects to determine what state changed. You don't have to replay an entire history of events to repopulate caches and databases. You can even send "synthetic" events to reset the state of everything that is listening from a central point of control.

I've dealt with all three types of system, and this is by far the easiest one to work with.

SideburnsOfDoom · 3 years ago
How does this differ from a "fat event"?
knome · 3 years ago
> Delta though is hard, since there always is an inherent race between getting the first full snapshot, and subscribing to updates.

Since the deltas include a version identifier for what they should be applied on top of, you should always be able to safely start by requesting the deltas, then ask for the object. Buffer the deltas till your full copy is received, discard any deltas for versions older than yours, and apply the rest thereafter to keep it up to date.
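
A rough sketch of that ordering, assuming each delta carries the version it applies on top of (all the names here are illustrative):

    # Subscribe first, then snapshot, so nothing falls into the gap.
    def synchronize(subscribe, fetch_snapshot, apply_delta):
        deltas = subscribe()            # yields deltas in stream order; buffers for us
        snapshot = fetch_snapshot()     # full copy, stamped with its version
        state, version = snapshot.data, snapshot.version
        for delta in deltas:
            if delta.base_version < version:
                continue                # already reflected in the snapshot: discard
            state = apply_delta(state, delta)
            version = delta.base_version + 1
            # ... hand `state` to the application, and keep looping ...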

SideburnsOfDoom · 3 years ago
This omits the issues with "thin events" - it may be fine most of the time, but since it usually involves a "get more details" call over HTTP or of some other kind, it has more moving parts and is therefore more prone to failures and slowdowns due to the extra coupling. This can kick in when load goes up or some other issue affects reliability, and cause cascading failure.
ivan_gammel · 3 years ago
I’d pick neither and just let the system in possession of the data send with the event only the part of the data it owns (i.e. something in between fat and thin). This saves the API call back, the body doesn’t have to be fully deserialized (so no format coupling), and the rest can be fetched from other services on demand (a coherent state is not guaranteed, but that’s usually not critical with well-designed bounded contexts).
SideburnsOfDoom · 3 years ago
> just let the system in possession of the data send with the event only the part of the data it owns (i.e. something in between fat and thin).

Is that really different from a fat event?

ivan_gammel · 3 years ago
Yes. The owned part of the saga can be as small as an acknowledgment of something that happened. Basically, you do not create a fat-event pattern in your architecture and then stick to it; instead, you send fat, thin, and in-between events depending on context.
sithlord · 3 years ago
I'm a fan of fat events, letting the receiver decide whether to trust the event or to go ahead and make a call to the service to get the data.

For example:

If one receiver wants to know whether you have read a book, then there is no reason to make a call to the service.

But if a service wants to know the last book you read, and doesn't trust the events to be in order, then it would make sense to just call the service.

SideburnsOfDoom · 3 years ago
> if a service wants to know the last book you read, and doesn't trust the events to be in order, then it would make sense to just call the service.

It would make more sense to me if the events had an increasing sequence number, version number, or accurate timestamp, so that I can record "'sithlord' last read 'The Godfather' at event '123456'" and then ignore any event related to "sithlord last read" with event < 123456.

This is not a new problem, there are existing solutions to it.
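
A minimal sketch of that bookkeeping (the event shape is made up for illustration):

    # Track the highest event sequence processed per key; drop anything older.
    last_seen = {}

    def handle(event):
        key = (event["user"], "last_read")
        if event["seq"] <= last_seen.get(key, -1):
            return  # out-of-order or duplicate delivery: ignore it
        last_seen[key] = event["seq"]
        print(f"{event['user']} last read {event['book']} at event {event['seq']}")

    handle({"user": "sithlord", "book": "The Godfather", "seq": 123456})
    handle({"user": "sithlord", "book": "Dune", "seq": 123455})  # arrives late: ignored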

mabbo · 3 years ago
As always, it depends. Yay engineering and trade-offs.

Hey, just remember: both is always an option if your consumers disagree. A thin stream for the consumers who don't trust the fat data, a fat stream for the event log and other consumers that prefer it.

kgeist · 3 years ago
Fat events once made our message broker run out of memory under high load, and the broker's default behavior was to block all publishers until the queue was emptied (to release memory) - downtime as a result. Another issue was that under high load, if the event queue grew too large, handlers would end up processing very stale data, resulting in all kinds of broken behavior (from the point of view of the user).

Thin events resulted in DDoS of our service a few times because handlers would call our APIs too frequently to retrieve object state (which was partially mitigated by having separate machines serve incoming traffic and process events).

(A trick we used which worked for both fat and thin events was to add versioning to objects to avoid unnecessary processing).

We used delta events as well, but they had the same issues as thin events, because handlers usually have to retrieve the full object state anyway to do meaningful processing (not always - it depends on the business logic and the architecture).

There are so many ways to shoot yourself in the foot with all three approaches and I still hesitate a lot when choosing what kind of events to use for the next project.

ulam2 · 3 years ago
Why not have an option for both? Both have their use cases.
acjohnson55 · 3 years ago
For me, this depends on the semantics of the system. Is the sender commanding the receiver to carry out the rest of the process, or is the sender broadcasting information to a dynamic set of interested parties? In other words, are you building a pipeline or a pub-sub?

If the former, there is inherently tight coupling between sender and receiver, and the sender should send all necessary context to simplify the system design.

If the latter, then we're talking about a decoupled system, where the sender cannot make assumptions about what info the receiver does or doesn't need to take further action. A thin event is called for, to keep the contract simple.

One of my frustrations with the event-driven trend is that people don't always seem to think through what they're designing. It's easy to end up with a much more complex system than a transactional architecture.

Generally, I favor modeling as much of my system as possible as pipelines, and use pub-subs sparingly, as places where you fan out to parallel pipelines.

Raw events are like GOTOs. They are extremely powerful, but also very difficult to reason about.