quapster · a month ago
The funny thing about event sourcing is that most teams adopt it for the sexy parts (time travel, Kafka, sagas), but the thing that actually determines whether it survives contact with production is discipline around modeling and versioning.

You don’t pay the cost up front, you pay it 2 years in when the business logic has changed 5 times, half your events are “v2” or “DeprecatedFooHappened”, and you realize your “facts” about the past were actually leaky snapshots of whatever the code thought was true at the time. The hard part isn’t appending events, it’s deciding what not to encode into them so you can change your mind later without a migration horror show.
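The usual escape hatch for that "DeprecatedFooHappened" problem is an upcaster layer: old event versions are rewritten into the current shape at read time, so handlers only ever see the latest schema. A minimal sketch of the idea; all event names, versions, and fields here are hypothetical:

```python
# Minimal upcaster sketch: old event versions are rewritten into the
# current shape at read time, so handlers only see the latest schema.
# Event names and fields are hypothetical.

def upcast_v1_price_changed(event):
    # v1 stored a single "price"; v2 split it into amount + currency,
    # so a default currency has to be assumed for old facts.
    data = event["data"]
    return {
        "type": "PriceChanged",
        "version": 2,
        "data": {"amount": data["price"], "currency": "USD"},  # assumption baked in
    }

UPCASTERS = {
    ("PriceChanged", 1): upcast_v1_price_changed,
}

def load_event(raw):
    """Apply upcasters until the event reaches the current version."""
    key = (raw["type"], raw.get("version", 1))
    while key in UPCASTERS:
        raw = UPCASTERS[key](raw)
        key = (raw["type"], raw["version"])
    return raw

old = {"type": "PriceChanged", "version": 1, "data": {"price": 999}}
print(load_event(old)["data"])  # {'amount': 999, 'currency': 'USD'}
```

Note the hardcoded "USD": the upcaster is exactly the "leaky snapshot of whatever the code thought was true" problem, made explicit and kept in one place.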

There’s also a quiet tradeoff here: you’re swapping “schema complexity + migrations” for “event model complexity + replay semantics”. In a bank-like domain that genuinely needs an audit trail, that trade is usually worth it. In a CRUD-ish SaaS where the real requirement is “be able to see who edited this record”, a well-designed append-only table with explicit revisions gets you 80% of the value at 20% of the operational and cognitive overhead.
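That "80% of the value" alternative fits in a few lines. A sketch using sqlite as a stand-in for Postgres; the table and column names are made up for illustration:

```python
import sqlite3

# Sketch of an append-only revisions table: "who edited this record"
# without an event store. sqlite stands in for Postgres here, and the
# table/column names are invented.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE record_revisions (
        record_id  TEXT NOT NULL,
        revision   INTEGER NOT NULL,
        edited_by  TEXT NOT NULL,
        edited_at  TEXT NOT NULL DEFAULT (datetime('now')),
        payload    TEXT NOT NULL,          -- full snapshot, not a delta
        PRIMARY KEY (record_id, revision)  -- no UPDATEs, only INSERTs
    )
""")

def save(record_id, user, payload):
    db.execute(
        """INSERT INTO record_revisions (record_id, revision, edited_by, payload)
           VALUES (?, 1 + COALESCE((SELECT MAX(revision) FROM record_revisions
                                    WHERE record_id = ?), 0), ?, ?)""",
        (record_id, record_id, user, payload),
    )

save("doc-1", "alice", '{"title": "draft"}')
save("doc-1", "bob",   '{"title": "final"}')

# "Who edited this record?" is now a trivial query.
rows = db.execute("""SELECT revision, edited_by FROM record_revisions
                     WHERE record_id = ? ORDER BY revision""", ("doc-1",)).fetchall()
print(rows)  # [(1, 'alice'), (2, 'bob')]
```

No replay semantics, no projections, no event schema registry; the current state is just the max-revision row.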

Using Postgres as the event store is interesting because it pushes against the myth that you need a specialized log store from day one. But it also exposes the other myth: that event sourcing is primarily a technical choice. It isn’t. It’s a commitment to treat “how the state got here” as a first-class part of the domain, and that cultural/organizational shift is usually harder than wiring up SaveEvents and a Kafka projection.

simonw · a month ago
This comment just made it finally click for me why event sourcing sounds so good on paper but rarely seems to work out for real-world projects: it expects a level of correct-design-up-front which isn't realistic for most teams.
mrkeen · a month ago
> it expects a level of correct-design-up-front which isn't realistic for most teams.

The opposite is true.

A non-ES system is an ES system where you are so sure about being correct-up-front that you perform your reduce/fold step when any new input arrives, and throw away the input.

It's like not keeping your receipts around for tax time (because they might get crinkled or hard to read, or someone might want to change them).
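A toy version of that framing, with invented event names: the "current state" a CRUD row holds is just the left fold of the inputs, and a non-ES system discards the inputs after folding.

```python
from functools import reduce

# Toy version of the parent's point: state is a fold over inputs.
# A CRUD system computes `balance` the same way; it just throws the
# deposits/withdrawals away afterwards. Event names are invented.
events = [
    ("Deposited", 100),
    ("Withdrew", 30),
    ("Deposited", 50),
]

def apply(balance, event):
    kind, amount = event
    return balance + amount if kind == "Deposited" else balance - amount

# ES: keep `events`, derive state on demand (and re-derive it
# differently later if the business changes its mind).
balance = reduce(apply, events, 0)
print(balance)  # 120

# Non-ES: the fold runs on every write and only `balance` survives,
# so "was there ever a withdrawal?" becomes unanswerable.
```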

hermanradtke · a month ago
> it expects a level of correct-design-up-front which isn't realistic for most teams

It requires a business that is willing to pay the maintenance cost of event sourcing in order to get the capabilities it needs (like an audit trail or replayability).

pdhborges · a month ago
I would upvote this comment more if I could.

I've already refrained from introducing event sourcing to tackle weird dependencies multiple times, just by juxtaposing the amount of discipline the team has shown (which led to the current state) against the discipline required to keep an event-sourced solution going.

iw7tdb2kqo9 · a month ago
Would ClickHouse be more appropriate for event sourcing than PostgreSQL, given its append-only nature?
zknill · a month ago
Anyone who's built, run, evolved, and operated any reasonably sized event sourced system will know it's a total nightmare.

Immutable history sounds like a good idea, until you're writing code to support every event schema you ever published. And all the edge cases that inevitably creates.

CQRS sounds good, until you just want to read a value that you know has been written.

Event sourcing probably has some legitimate applications, but I'm convinced the hype around it is predominantly just excellent marketing of an inappropriate technology by folks and companies who host queueing technologies (like Kafka).

anthonylevine · a month ago
> CQRS sounds good, until you just want to read a value that you know has been written.

This is for you and the author, apparently: practicing CQRS does not mean you're splitting up databases. CQRS is simply using different models for reading and writing. That's it. Nothing about separate databases, projections, or event sourcing.

This quote from the article is just flat out false:

> CQRS introduces eventual consistency between write and read models:

No it doesn't. Eventual consistency is a design decision made independent of using CQRS. Just because CQRS might make it easier to split, it doesn't in any way have an opinion on whether you should or not.
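"Different models, same transaction" is easy to demonstrate. A sketch (schema names invented, sqlite standing in for a real database) where the normalized write model and a denormalized read model commit atomically, so there is no eventual consistency anywhere:

```python
import sqlite3

# Sketch of CQRS without eventual consistency: the write model
# (normalized) and the read model (denormalized) live in the same
# database and are updated in one transaction. Names are invented.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE orders (id TEXT PRIMARY KEY, customer TEXT, total_cents INTEGER);
    -- read model: shaped for the query side, not for writes
    CREATE TABLE customer_order_summary (
        customer TEXT PRIMARY KEY, order_count INTEGER, total_cents INTEGER);
""")

def place_order(order_id, customer, total_cents):
    with db:  # one transaction: both models commit, or neither does
        db.execute("INSERT INTO orders VALUES (?, ?, ?)",
                   (order_id, customer, total_cents))
        db.execute("""INSERT INTO customer_order_summary VALUES (?, 1, ?)
                      ON CONFLICT(customer) DO UPDATE SET
                        order_count = order_count + 1,
                        total_cents = total_cents + excluded.total_cents""",
                   (customer, total_cents))

place_order("o1", "alice", 1000)
place_order("o2", "alice", 500)
row = db.execute("""SELECT order_count, total_cents FROM customer_order_summary
                    WHERE customer = 'alice'""").fetchone()
print(row)  # (2, 1500)
```

The read side is immediately consistent with the write side; the segregation is in the models, not in the timing.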

> by folks and companies who host queueing technologies (like Kafka).

Well that's good because Kafka isn't an event-sourcing technology and shouldn't be used as one.

mrsmrtss · a month ago
Yes, I don't know where the misconception that CQRS or Event Sourcing automatically means eventual consistency comes from. We have built, run, evolved, and operated quite a few reasonably sized event sourced systems successfully, and these systems are running to this day without any major incidents. We added eventually consistent projections where performance justified it, fully aware of the implications, but kept most of the system synchronous.
zknill · a month ago
Please explain how you intend to use different models for reading and writing without some temporal separation between the two.

Almost all CQRS designs have some read view or projection built by consuming the write side.

If that's not the case, and you're just writing your "read models" in the write path, where is the 'S' in CQRS (S for segregation)? You wouldn't have a CQRS system here. You'd just be writing read-optimized data.

mrkeen · a month ago
> Just because CQRS might make it easier to split

Or segregate even.

fleahunter · a month ago
[flagged]
mrkeen · a month ago
> Event sourcing shines when the business actually cares about history (finance, compliance, complex workflows)

Flip it on its head.

Would those domains be better off with simple CRUD? Did the accountants make a wrong turn when they switched from simple balances to double-entry ledgers?

mexicocitinluez · a month ago
> or a third team starts depending on your event stream as an integration API.

> Events stop being an internal persistence detail and become a public contract.

You can't blame event sourcing for people not doing it correctly, though.

The events aren't a public contract and shouldn't be treated as such. Treating them that way will result in issues.

> Used as a generic CRUD replacement it’s a complexity bomb with a 12-18 month fuse.

This is true, but all you're really saying is "use the right tool for the right job".

simonw · a month ago
> You can't blame event sourcing for people not doing it correctly, though.

You really can. If there's a technology or approach which the majority of people apply incorrectly that's a problem with that technology or approach.

zknill · a month ago
> You can't blame event sourcing for people not doing it correctly, though.

Perhaps not, but you can criticise articles like this that suggest that CQRS will solve many problems for you, without touching on _any_ of its difficulties or downsides, or the mistakes that many people end up making when implementing these systems.

javier2 · a month ago
This. This is also a reason why it's so impressive that Google Docs/Sheets has managed to stay largely the same for so long.
liampulles · a month ago
If you are considering event sourcing, run an event/audit log for a while and see if that does not get you most of the way there.

You get similar levels of historical insight, with the disadvantage that to replay things you might need to put together a little CLI or script to infer commands from the audit log (and if you do that a lot, you can make a little library that makes building those one-off tools quite simple; I've done that). But you avoid all the many well-documented footguns that come from trying to run an event-sourced system in a typical evolving business.

mrkeen · a month ago
I've done this.

We have a customer whom we bill for feature X.

Does he actually have feature X or are we billing him for nothing?

With ES: We see his Subscriptions and Cancellations and know if he has feature X.

Without ES: We don't know if he subscribed or cancelled.

With audit log: We almost know whether he subscribed or cancelled.
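The billing check above is a one-line fold over the customer's stream. A sketch with invented event shapes:

```python
# Sketch of the parent's billing question as a fold over the customer's
# stream. Event shapes are invented; note that ordering matters, which
# is exactly the information a state-only CRUD row has discarded.
def has_feature(events, feature):
    active = False
    for e in events:
        if e == ("Subscribed", feature):
            active = True
        elif e == ("Cancelled", feature):
            active = False
    return active

stream = [("Subscribed", "X"), ("Cancelled", "X"), ("Subscribed", "X")]
print(has_feature(stream, "X"))  # True

# With only an audit log of row diffs, you'd be reverse-engineering
# this answer; with ES, the subscribe/cancel history IS the data model.
```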

Deleted Comment

xlii · a month ago
I'm going to have a word with my ISP. It seems the site's SSL certificate has expired. That's not a good thing, but my ISP decided I'm an idiot and gave me a condescending message about accepting the expired certificate, which is unacceptable in my book. VPN helped.

Too much dry code for my taste and not many remarks/explanations. That's not a dealbreaker, because for prose I'd recommend Martin Fowler's articles on event processing, but it _could be better_ ;-)

WRT the tech itself: personally I think Go is one of the best languages for Event Sourcing today (with Haskell maybe being second). I've been doing complexity analysis for ES in various languages, and the Go implementation was mostly free (due to Event being an interface and not a concrete struct).

azkalam · a month ago
> Go is one of the best languages for Event Sourcing today

Can you explain this? Go has a very limited type system.

mrsmrtss · a month ago
Have you also considered C# for Event Sourcing? We've built many successful ES projects with C# and the awesome Marten library (https://martendb.io/). It's a real productivity multiplier for us.
tamnd · a month ago
I don't think this design in the article works in practice.

A single `events` table falls apart as the system grows, and untyped JSONB data in the `event_data` column just moves the mess into code. Event payloads drift, handlers fill with branching logic, and replaying or migrating old events becomes slow and risky. The pattern promises clarity but eventually turns into a pile of conditionals trying to decode years of inconsistent data.

A simpler and more resilient approach is using the database features already built for this. Stored procedures can record both business data and audit records in a controlled way. CDC provides a clean stream for the tables that actually need downstream consumers. And even carefully designed triggers give you consistent invariants and auditability without maintaining a separate projection system that can lag or break.
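The trigger-based variant of that is easy to sketch. Here sqlite stands in for Postgres (Postgres would use a trigger function instead), and all table names are invented:

```python
import sqlite3

# Sketch of the "use the database's own tools" alternative: a trigger
# writes an audit row on every change, with no separate projection
# system to lag or break. sqlite stands in for Postgres; in Postgres
# this would be a trigger calling a PL/pgSQL function. Names invented.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER);
    CREATE TABLE accounts_audit (
        id TEXT, old_balance INTEGER, new_balance INTEGER,
        changed_at TEXT DEFAULT (datetime('now'))
    );
    CREATE TRIGGER audit_balance AFTER UPDATE ON accounts
    BEGIN
        INSERT INTO accounts_audit (id, old_balance, new_balance)
        VALUES (OLD.id, OLD.balance, NEW.balance);
    END;
""")

db.execute("INSERT INTO accounts VALUES ('a1', 100)")
db.execute("UPDATE accounts SET balance = 70 WHERE id = 'a1'")
db.execute("UPDATE accounts SET balance = 120 WHERE id = 'a1'")

history = db.execute("""SELECT old_balance, new_balance FROM accounts_audit
                        ORDER BY rowid""").fetchall()
print(history)  # [(100, 70), (70, 120)]
```

The current state stays a plain row, the history stays queryable, and application code can't forget to write the audit record.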

Event sourcing works when the domain truly centers on events, but for most systems these database driven tools stay cleaner, cheaper, and far more predictable over time.

tamnd · a month ago
The only place this kind of append-only event log consistently works well is clickstream-style workloads.

You rarely replay them to reconstruct business state; you just pump them into analytics or enrichment pipelines.

coredev_ · a month ago
There are a lot of voices against event sourcing in the comments. I just want to balance things a bit. For a mature domain (like when you rebuild an existing system), event sourcing can work really well and make a lot of sense. But yes, discipline is a must, as is thinking things through before you implement a new event.

Currently working on a DDD-style event-sourced system with CQRS and really enjoying it.

cedws · a month ago
Stuffing data into JSONB columns always makes me feel uncomfortable, and not necessarily for performance/efficiency reasons. You also lose the strong schema that SQL gives you, and you don't get to use constraints. You might as well use Mongo, no?

How can you be sure that the data stuffed into JSONB fits a particular schema, and that future changes are backwards compatible with rows added long ago?

LVB · a month ago
JSONB can have constraints. I think with an extension you can do full JSON Schema validation, too.
cedws · a month ago
Yes, I watched a video[0] about using CHECKs and pg_jsonschema to do this a while back. However, this only checks for conformance at insert/update time. As time goes on you'll inevitably need to evolve your structures, but you won't have any assurance that past data conforms to the new structure.

The way this article suggests using JSONB would also be problematic because you're stuffing potentially varying structures into one column. You could technically create one massive jsonschema that uses oneOf to validate that the event conforms to one of your structures, but I think it would be horrible for performance.

[0]: https://www.youtube.com/watch?v=F6X60ln2VNc
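A cheap application-side stand-in for that oneOf check is to validate each payload against the schema for its own event type before insert, rather than one giant oneOf over every type. A sketch; the event types and required fields are invented:

```python
# Application-side stand-in for the oneOf check discussed above:
# validate each event payload against the schema for its own type,
# instead of one giant oneOf. Event types and fields are invented.
REQUIRED_FIELDS = {
    "OrderPlaced":  {"order_id", "customer", "total_cents"},
    "OrderShipped": {"order_id", "carrier"},
}

def validate(event_type, payload):
    required = REQUIRED_FIELDS.get(event_type)
    if required is None:
        raise ValueError(f"unknown event type: {event_type}")
    missing = required - payload.keys()
    if missing:
        raise ValueError(f"{event_type} missing fields: {sorted(missing)}")
    return True

print(validate("OrderShipped", {"order_id": "o1", "carrier": "DHL"}))  # True
```

This doesn't solve the harder problem in the parent comment, though: old rows validated against old schemas stay old, so schema evolution still needs an explicit migration or upcasting story.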