> You can make enemies easily...
Short term, definitely. In the long tail? If you are right more than you are wrong, then that manifests as respect.

Given the quality and the structure, neither approach really matters much. The root problems are elsewhere.
Every action, every button click, basically every input is sent to the server, and the changed DOM is sent back to the client. And we're all just supposed to act like this isn't absolutely insane.
Main downside is that hot reload is not nearly as nice as in TS.
But the coding experience with a C# BE/stack is really nice for admin/internal tools.
Generally I say, "Message queues are for tasks, Kafka is for data." But in the latter case, if your data volume is not huge, a message queue for async ETL will do just fine and give better guarantees as far as FIFO ordering goes.
In essence, Kafka is a very specialized version of much more general-purpose message queues, which should be your default starting point. It's similar to replacing a SQL RDBMS with some kind of special NoSQL system - if you need it, okay, but otherwise the general-purpose default is usually the better option.
> Ah yes, and every consumer should just do this in a while (true) loop as producers write to it. Very efficient and simple with no possibility of lock contention or hot spots. Genius, really.
This seemed to imply that it's not possible to build a high-performance pub/sub system using a simple SQL select. I don't think that's true; it is in fact fairly easy. Clearly, the design as proposed is not the same as Kafka.

I used ZMQ to connect nodes, and the worker nodes would connect to an indexer/coordinator node that effectively did a `SELECT FROM ORDER BY ASC`.
It's easier than you may think, and the bits here ended up at probably < 1000 SLOC all told.
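As a rough illustration of the "simple SQL select" claim, here is a minimal FIFO dequeue over a plain table. This is a sketch, not the original system: SQLite is used only for self-containment, and the schema and status values are made up.

```python
# Hedged sketch: a FIFO work queue on a plain SQL table.
# SQLite in-memory keeps the example self-contained; the table layout
# and the 'new'/'pending' statuses are illustrative assumptions.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE work (id INTEGER PRIMARY KEY, payload TEXT, "
    "status TEXT DEFAULT 'new')"
)

def enqueue(payload):
    db.execute("INSERT INTO work (payload) VALUES (?)", (payload,))

def dequeue():
    # The "simple SQL select": oldest unclaimed row first,
    # then mark it as taken.
    row = db.execute(
        "SELECT id, payload FROM work WHERE status = 'new' "
        "ORDER BY id ASC LIMIT 1"
    ).fetchone()
    if row is None:
        return None
    db.execute("UPDATE work SET status = 'pending' WHERE id = ?", (row[0],))
    return row[1]

enqueue("a")
enqueue("b")
print(dequeue(), dequeue(), dequeue())  # a b None
```

On Postgres, a multi-consumer version of the same idea would typically lean on `SELECT ... FOR UPDATE SKIP LOCKED` to avoid contention between competing workers.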
- Coordinator node ingests from a SQL table
- Each row in the table has a discriminator key; rows are ordered by stacking them into an in-memory list-of-lists keyed on that discriminator
- Worker nodes are started with _n_ threads
- Each thread sends a "ready" message to the coordinator and coordinator replies with a "work" message
- On each cycle, the coordinator advances the pointer to the next child list, locks that child list, and marks its first item as "pending"
- When worker thread finishes, it sends a "completed" message to the coordinator and coordinator replies with another "work" message
- Coordinator unlocks the list the work item originated from and dequeues the finished item.
- When it reaches the end of the list, it cycles back to the beginning and starts over, skipping any child lists marked as locked (i.e., with a pending work item)
Effectively a distributed event loop with the events queued up via a simple SQL query.

Dead simple design, extremely robust, very high throughput, very easy to scale workers both horizontally (more nodes) and vertically (more threads). ZMQ made it easy to connect the remote threads to the centralized coordinator. It was effectively "self balancing" because the workers would only re-queue their thread once it finished work. Very easy to manage, but did not have hot failovers since we kept the materialized, "2D" work queue in memory. Though very rarely did we have issues with this.
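The coordinator cycle described above can be sketched roughly like this. This is a sketch under stated assumptions: the ZMQ transport and SQL ingest are stubbed out, and all names are illustrative, not taken from the actual system.

```python
# Hedged sketch of the coordinator's "2D" scheduling cycle:
# one child list per discriminator key, items kept in FIFO order,
# a round-robin pointer over child lists, and per-list locking
# while an item is pending.
from collections import OrderedDict

class Coordinator:
    def __init__(self, rows):
        # rows: (discriminator, payload) pairs from the SQL query,
        # already in the desired order.
        self.lists = OrderedDict()
        for key, payload in rows:
            self.lists.setdefault(key, []).append(payload)
        self.locked = set()        # keys with a pending work item
        self.keys = list(self.lists)
        self.ptr = 0               # round-robin pointer over child lists

    def next_work(self):
        # Advance the pointer, skipping locked (pending) or drained
        # child lists; lock the chosen list and hand out its head item.
        for _ in range(len(self.keys)):
            key = self.keys[self.ptr]
            self.ptr = (self.ptr + 1) % len(self.keys)
            if key in self.locked or not self.lists[key]:
                continue
            self.locked.add(key)
            return key, self.lists[key][0]
        return None  # everything is pending or drained

    def completed(self, key):
        # Worker reported "completed": dequeue the finished item
        # and unlock its child list.
        self.lists[key].pop(0)
        self.locked.discard(key)
```

In the real system the "ready"/"work"/"completed" messages would travel over ZMQ sockets; here the same handshake is reduced to direct method calls so the scheduling logic stands on its own.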
> ...and internal systems they have to use, whose sole purpose is to make sure nobody does anything
I once had to use Lotus Notes after the company I was at was acquired by the now defunct Computer Sciences Corporation. I decided I would never, ever work for another company that used Lotus Notes.

All of the machinery is already designed to handle containers, so it just becomes another type of container.
One danger is trying to reduce multiple different concepts to a single concept. For example, instead of thinking about an email, a task, and a calendar entry as separate things, we could just have a generic concept like an "entity" which has attributes like a time/date or a list of people or a body of text.
Programmers love that kind of abstraction. We love having a few simple pieces that we can combine in various ways to get what we want. That is literally what we do when we program.
Normal people, though, hate that. Instead of giving them a tool to get their job done, we've given them a puzzle. They need to figure out how to combine the pieces. And what's obvious to us is absolutely not obvious to them.
I haven't seen your UX, so I don't know if this is an issue, but I would focus on the mental model. As a user, what do I need to know to start using the tool? If that's more than a single sentence (to start), then you're in trouble.
I'm always happy to brainstorm with people. Hit me up if you want to chat sometime.
> Normal people, though, hate that. Instead of giving them a tool to get their job done, we've given them a puzzle
Normal people already have a process and a mental model of how tools fit into that process. Loosely coupled tools -- as inefficient as they may be -- have the benefit of being able to conform to those existing processes and models. That's why the humble spreadsheet is still so widely used as the glue between people and processes: it is just rigid enough while offering endless flexibility and handling of edge cases.

On the other hand, highly integrated or opinionated tools need to be both easy and flexible enough to fit existing models/processes, or they will simply overwhelm new users. Or they have to offer some other benefit significant enough that users are willing to change their models/processes.
There's a lot of good research and writing on this topic. This paper, in particular, has been really helpful for my cause: https://dl.acm.org/doi/pdf/10.1145/3593856.3595909
It has a lot going for it: 1) it's from Google, 2) it's easy to read and digest, 3) it makes a really clear case for monoliths.