victorNicollet · 8 years ago
The DDD/CQRS/ES topic appears on Hacker News every so often, and never fails to stir a strong debate. In fact, I was going to "Show HN" an Event Sourcing library I open sourced yesterday, but I guess I'll wait until everything dies down a bit :-)

I believe that the core principles of DDD/CQRS/ES are flawed and are the cause of a lot of pain. If your team plows forward and ignores the pain, you will have a bad experience working there. If your team decides to fix the problems with the architecture (usually in a way that's rather specific to their situation), and still insists that they're doing DDD/CQRS/ES, then you will have a great experience. The same can be said of many approaches to SQL database design.

When I joined Lokad, most of the code had been written by a strong CQRS advocate, and there were significant pain points caused by this approach that would have been nonexistent with a plain old SQL architecture. The author himself did a retrospective that covers a few issues: https://abdullin.com/lokad-cqrs-retrospective/

Instead of dropping everything and moving back to a more conventional SQL architecture, we dropped everything and moved to a solution which, if I were to describe it as DDD/CQRS/ES, would have me branded as a heretic by the DDD/CQRS/ES community: it does fit the definition, but it does not follow the best practices. There is a single data model shared by both commands and queries. Commands can sometimes return data. Each (micro?)service has access to exactly one aggregate. There is no uncertainty on event ordering, and no eventual consistency at the aggregate level. Materialized views for each aggregate are kept in-memory at all times. All of these intentional weakenings of the CQRS architecture were driven by pragmatic needs, and made with knowledge of the consequences.

UK-AL · 8 years ago
I'm a CQRS advocate. None of these are weakening CQRS. CQRS is in fact pretty damn loose.

"There is a single data model shared by both commands and queries." - Nothing wrong with that, especially in simple cases.

"Commands can sometimes return data" - Well commands can succeed or fail. Greg young himself admits commands are really synchronous, and you need to communicate that.

"and no eventual consistency at the aggregate level." - This is the recommended approach. Between aggregates and it becomes loser.

"Materialized views for each aggregate are kept in-memory at all times" - Thats some advanced CQRS usage there.

gregyoung1 · 8 years ago
I will speak for myself.

Commands are by definition synchronous and return a success/failure.

One of my favourite read models to use is an in-memory model, provided I can keep all of the data in memory (and it's reasonable for the query patterns).

victorNicollet · 8 years ago
Maybe I'm not as much of a heretic as I thought I would be :-)


bonesss · 8 years ago
> The author himself did a retrospective that covers a few issues: https://abdullin.com/lokad-cqrs-retrospective/

Having read the article: veeeeeery little of what is mentioned is related in any meaningful sense to CQRS...

Cart-before-the-horse framework design is always bad. Encapsulating untested theories in frameworks instead of lifting them from operational designs, always bad. Building frameworks as a learning exercise, bad. Abstracting away data decisions and then trying to provide opinionated answers to storage at the framework level, very bad. Conflating logical system architecture with supplier-provided products (i.e. cloud services) is framework _cancer_.

That is simply not a reasonable starting point to provide value and scalable systems, and the end result was the same as all other similar attempts: an A for effort with a result that was at best 'close, but not quite...'

To be clear: I do not intend this as criticism leveled at the author or the company. The article is great. I very much agree with his analysis though:

> Lokad.CQRS was created with a very short-sighted design approach in mind, a reusable LEGO constructor...These days I'd try to limit the damage I inflict upon the developers and avoid writing any widely reusable frameworks

This is where all the issues lie... Premature framework development forces up-front decisions untethered from practicality, based on conjecture and prognostication instead of experience, in a challenging domain insufficiently understood by the framework architect.

From a high-level perspective: neither CQRS, nor framework development, nor application development were sufficiently grokked before being baked into a long-term production commitment. Those are organizational issues, not CQRS issues. In the same vein, coming from such an environment and basing judgment of DDD or CQRS on anything that came from that environment won't hold water.

The architectural problems you're talking about, the core principles you think are flawed, work stunningly well in other places that have not made the same underlying mistakes. CQRS advocates != CQRS experts != framework experts.

sandGorgon · 8 years ago
Hi - could you explain the implementation of this "Materialized views for each aggregate are kept in-memory at all times"? I have been considering a heretical flavor of CQRS/ES as well, where the command returns data... and I synchronously build the latest state before I return (rather than building it by replaying the whole event stream).

When you say "materialized views" - do you really mean the current state of an Order, for example? And by "in memory", is it being persisted to a database?

victorNicollet · 8 years ago
What we would do is have a single aggregate representing the entire state of a micro-service. For instance, for the "Orders" micro-service, it would contain all orders in the system, with events such as OrderCreated, OrderPaymentReceived, OrderShipped, OrderCancelled, and so on.

Then, we would have a materialized view that would be a single C# immutable object (canonically called `State`) that contains a dictionary of every order, and maybe a few indices (such as "identifiers of orders currently waiting to be shipped out" or "list of orders by customer"). Everything fits in memory, so every query is simply a bit of C# code that traverses the in-memory graph of objects rooted at the `State`.

A command emits events, which are then applied by the system to create a new `State` from the previous one. Events emitted by other instances of the application are also fetched by the system every so often, and you can request a "force catch-up" to guarantee that the `State` you access takes into account all events appended so far (so that you're certain to see the results of your actions, even if the load balancer bounces you from one instance to another).

Here's what we use: https://github.com/Lokad/AzureEventStore
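In case it helps, here is a minimal sketch of that shape (hypothetical names, not our actual code; it assumes C# 9 records and System.Collections.Immutable):

    // A simplified, hypothetical sketch of the pattern described above.
    using System;
    using System.Collections.Generic;
    using System.Collections.Immutable;

    public abstract class OrderEvent { public Guid OrderId { get; set; } }
    public sealed class OrderCreated : OrderEvent { public Guid CustomerId { get; set; } }
    public sealed class OrderShipped : OrderEvent { }

    public sealed record Order(Guid Id, Guid CustomerId, bool Shipped = false);

    // The immutable materialized view, kept in memory at all times.
    public sealed class State
    {
        public ImmutableDictionary<Guid, Order> Orders { get; }
        public ImmutableHashSet<Guid> WaitingToShip { get; } // one of the indices

        public State(ImmutableDictionary<Guid, Order> orders,
                     ImmutableHashSet<Guid> waitingToShip)
        { Orders = orders; WaitingToShip = waitingToShip; }

        public static readonly State Empty = new State(
            ImmutableDictionary<Guid, Order>.Empty,
            ImmutableHashSet<Guid>.Empty);

        // Applying an event produces a new State; the previous one is untouched.
        public State Apply(OrderEvent e) => e switch
        {
            OrderCreated c => new State(
                Orders.Add(c.OrderId, new Order(c.OrderId, c.CustomerId)),
                WaitingToShip.Add(c.OrderId)),
            OrderShipped s => new State(
                Orders.SetItem(s.OrderId, Orders[s.OrderId] with { Shipped = true }),
                WaitingToShip.Remove(s.OrderId)),
            _ => this
        };
    }

    // A command validates against the current State and emits events;
    // the system appends them to the stream and folds them into a new State.
    public static class Commands
    {
        public static IEnumerable<OrderEvent> ShipOrder(State state, Guid orderId)
        {
            if (!state.WaitingToShip.Contains(orderId))
                throw new InvalidOperationException("Order is not waiting to be shipped.");
            yield return new OrderShipped { OrderId = orderId };
        }
    }

Queries are then just C# functions over the current State, e.g. reading state.WaitingToShip or traversing state.Orders with a bit of LINQ.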

he0001 · 8 years ago
While event sourcing is a quite alright tech for certain problems, this idea of "one solution fits all" is just ridiculous. Also, the idea of "eventual consistency" is highly situational, particularly in a distributed solution. Paradoxically, looking at a single event source log this looks like a great idea, but there are no guarantees that events propagate evenly and "in order". So even if events happen and there's "nothing you can do about it", systems will still be out of order when read in relation to each other, because a system won't know that any other event has actually happened; it will happily consume events while lacking the proper information, or act on the wrong information. No matter how you try to mitigate this, it will happen and you won't have the slightest idea that it happened. Worst of all, you can't recover from it because the information is gone.

It's just a fundamental aspect of asynchronous programming, conveniently forgotten when trying to sell solutions.

josephg · 8 years ago
This is a problem with how lots of people do event sourcing, but it's not a fundamental problem with event sourcing. If you enforce strict ordering of events throughout your stack and do operation catch-up properly, these sorts of bugs are completely avoidable.
he0001 · 8 years ago
Well, that's the entire problem: you are not the one making the requirements, the problem you're solving does... You can't eat dinner before you have made it.

Edit: I would argue that you can't write a system which is open (several domains), believe that you will have entirely independent events throughout the system, and still claim that your system will be "bug-free"/correct.

he0001 · 8 years ago
And by "one solution fits all" I mean CQRS.
perlgeek · 8 years ago
There is a disconnect in your messaging. Here on HN the title is "Learn what DDD, CQRS, and event-sourcing are all about", but the tagline in the PDF is "The semantic JavaScript backend for event-driven development".

So, what's this about? about DDD, CQRS and event-sourcing, or about a particular Javascript framework? Those two aren't the same at all (even though I suspect there is a little overlap).

goloroden · 8 years ago
Two thirds of the brochure is an introduction to DDD, CQRS, and event-sourcing in general, not related to any specific JavaScript framework.

The last third shows how to apply these concepts using wolkenkit, which is a specific framework.

When we created the brochure it seemed important to us to first explain the concepts in a neutral way, because we want to help people gain a better understanding of these concepts, no matter whether they are going to use wolkenkit or not.

perlgeek · 8 years ago
That's great. My point is that the title page does not properly represent that.

You might have an easier time getting people to read it if they don't get the impression that "it's just another JavaScript framework".

shock · 8 years ago
> In fact, not even credit institutes work with transactions, they too rely on eventually being consistent.

Is this really the case? I was always under the impression that everything regarding financial transactions needed to be transactional.

andreareina · 8 years ago
Consistency here is CAP consistency[1], not ACID consistency. In short, if datum d is visible to Alice at time t1, it MUST also be visible to Bob at time t2 > t1.

So financial transactions are transactional, but there's no guarantee that every node has a fully up-to-date state at all times.

[1]: https://en.wikipedia.org/wiki/CAP_theorem

jen20 · 8 years ago
This is also not how bank transfers work inter-bank. I as Bob (a recipient) may have zero knowledge of a transaction initiated by Alice regardless of time.
gregyoung1 · 8 years ago
Bank transactions are usually handled via long-running messaging state machines. When, for example, you transfer from an account in the US to one in Sweden, it does not just open a 2PC transaction between the two databases. There are also often intermediaries involved.
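A toy sketch of what such a long-running process might look like (purely illustrative; the states, messages, and compensation step are invented, not any real banking protocol):

    // Purely illustrative: a long-running transfer handled as a message-driven
    // state machine, with compensation instead of a distributed transaction.
    using System;

    public enum TransferStatus { Requested, DebitedAtSource, Completed, Compensated }

    public sealed record SourceAccountDebited(Guid TransferId);
    public sealed record DestinationAccountCredited(Guid TransferId);
    public sealed record CreditRejected(Guid TransferId, string Reason);

    public sealed class TransferSaga
    {
        public Guid TransferId { get; }
        public TransferStatus Status { get; private set; } = TransferStatus.Requested;

        public TransferSaga(Guid transferId) => TransferId = transferId;

        // Each handler reacts to a message from one bank (or an intermediary)
        // and moves the process forward; it may take hours or days, and no
        // 2PC transaction ever spans the two banks' databases.
        public void Handle(SourceAccountDebited msg)
        {
            Status = TransferStatus.DebitedAtSource;
            // next step: ask the destination bank / intermediary to credit
        }

        public void Handle(DestinationAccountCredited msg)
        {
            Status = TransferStatus.Completed;
        }

        public void Handle(CreditRejected msg)
        {
            // compensate by refunding the source account, rather than
            // "rolling back" a transaction that never existed
            Status = TransferStatus.Compensated;
        }
    }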
jen20 · 8 years ago
Wrong. Do you seriously think that a cross-bank transfer is operating inside a distributed database transaction? Furthermore, what did banks do before they had computer systems available?


gspetr · 8 years ago
I apologize for making a meta comment, but I had to modify the way the article is displayed because it's an all too common example of this: https://www.wired.com/2016/10/how-the-web-became-unreadable/

Which is very annoying because the article about CSS can't get its UI right.

They call Svelte "The magical disappearing UI framework" and indeed, their fonts look like they are written in invisible ink about to dry up.

aaronmu · 8 years ago
Can anyone explain why eventual consistency is so scary? The way I see it, eventual consistency is everywhere. Even in your non-distributed synchronous CRUD app. The minute you start two db transactions within one script you have eventual consistency. You have to deal with the fact that one transaction can fail and that your state can be inconsistent.
victorNicollet · 8 years ago
Dealing with eventual consistency adds a lot of mental complexity to almost every feature. It turns a simple rule like "Do not allow registering a username if that username has already been registered" into an entire rulebook on what should happen if a username is registered twice. It even forces you to deal with the obvious "Once a gizmo is created, it should appear in the list of gizmos". Every part of your code becomes a potential race condition that you have to detect and handle appropriately.

You can't get rid of eventual consistency, obviously, but you can set up your system so that your code expresses rules and features in an "immediately" consistent abstraction, and the system then translates those rules and features into the eventually consistent world, by applying tactics and techniques that you would have otherwise applied manually. It's a really natural approach for programmers: automate the translation from immediate to eventual, instead of doing it yourself every time.

Of course, most of these tactics come at a cost, either of performance (we won't acknowledge the creation of a gizmo until we've propagated that creation to every server capable of rendering the list of gizmos) or of availability (the registration of your username can't be acknowledged unless the persistent consensus reflects that you're the only one who registered it). But that's fine: most of these things aren't critical enough to make it worth dealing with eventual consistency manually, and when they do become critical, it's time to go down a level of abstraction and apply immediate-to-eventual translation tactics that rely on additional assumptions about what you're doing in order to improve performance or availability.
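For instance, here is a hypothetical sketch in the spirit of the in-memory State I described above: the code states the username rule against an immediately consistent view, and the surrounding system (single event stream per aggregate, catch-up before reads) is what makes it hold in the eventually consistent world.

    // Hypothetical sketch: the rule is written as if the world were
    // immediately consistent; the event store and catch-up machinery
    // are responsible for making that abstraction hold.
    using System;
    using System.Collections.Generic;
    using System.Collections.Immutable;

    public sealed record UserRegistered(string Username);

    public sealed class State
    {
        public ImmutableHashSet<string> Usernames { get; }
        public State(ImmutableHashSet<string> usernames) => Usernames = usernames;
        public State Apply(UserRegistered e) => new State(Usernames.Add(e.Username));
    }

    public static class Commands
    {
        // "Do not allow registering a username that is already taken",
        // expressed as one simple check instead of a rulebook of races.
        public static IEnumerable<UserRegistered> Register(State state, string username)
        {
            if (state.Usernames.Contains(username))
                throw new InvalidOperationException("Username already taken.");
            yield return new UserRegistered(username);
        }
    }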

he0001 · 8 years ago
> The minute you start two db transactions within one script you have eventual consistency.

Well, it's all about errors (and subsequently about control and correctness). Even if you have two transactions, you are able to undo one without anyone else (readers of the system) knowing that it was there, because something else happened just before yours. A transaction usually happens in an atomic fashion, which makes it possible to back out of something which is no longer correct; you have to fail it somehow to someone who has the ability to recover: the one who initiated the call.

Without being too philosophical: most things, if not everything, depend on synchronous state (for example, the events themselves); computing just gives the illusion that things happen immediately. Eventual consistency is cheating, and you can do it where the events are entirely independent. Otherwise you will have side effects when events are not ordered.

JavaScript has an "asynchronous" coding style, where (red/blue) methods are put on the event loop as "events". But the events need to execute in order (synchronously, by a single thread) for the program to behave correctly, so you give the methods attributes (red/blue) so that they execute in a linked order. So even the asynchronous behaviour in JS relies on synchronous features to be correct!

jorgeleo · 8 years ago
It really depends on the application. Eventual consistency means that eventually, the databases will be consistent.

Some bank transactions, for example, do not have that luxury, because it might allow two different answers to the same query if consistency has not yet been reached.

Some trading transactions don't have that luxury either, because in the milliseconds you waited for consistency a stock might have changed its price dramatically.

It really depends on the domain whether the time to wait for consistency is acceptable or not.

gregyoung1 · 8 years ago
Bank transactions are eventually consistent. How do you think you can get an overdraft fee? Ever use a gas pump and notice it took a $1 authorization on your card then later settled for what you pumped?

Eventual consistency is not all that scary; try putting on a business hat instead of a technical one. It's about risk management.

nightski · 8 years ago
Banks have daily lag (pending transactions). They also stop business in the evening and resume in the morning to process all these items. That isn't feasible in a lot of scenarios.

Rarely do you have an up to the microsecond view of the markets (unless you are located appropriately and pay significant amounts of money). There is always a delay in the information's accuracy and relevancy.

_pmf_ · 8 years ago
> The minute you start two db transactions within one script you have eventual consistency.

You may also have two independent items of information that have no consistency relationship or requirements.

That's why eventual consistency "works" so "well" with NoSQL: because there's an overlap between using NoSQL databases and not requiring actual consistency.

raarts · 8 years ago
Reading this.... It's just like react and redux, combined with redux-sagas. My head is spinning.
gregyoung1 · 8 years ago
Only made 10 years prior.
bauerd · 8 years ago
Off-topic, but was the PDF typeset with layout software or a markup converter such as Pandoc and the like? Looks really slick, kudos. Also, is there a way to tell from the PDF metadata?

Edit: Had a second look and it's layout software