Message queues are usually a core part of any distributed architecture, and the options are endless: Kafka, RabbitMQ, NATS, Redis Streams, SQS, ZeroMQ... and then there's the “just use Postgres” camp for simpler use cases.
I’m trying to make sense of the tradeoffs between:
- async fire-and-forget pub/sub vs. sync RPC-like point-to-point communication
- simple FIFO vs. priority queues and delay queues
- intelligent brokers (e.g. RabbitMQ, NATS with filters) vs. minimal brokers (e.g. Kafka’s client-driven model)
There's also a fair amount of ideology/emotional attachment - some folks root for underdogs written in their favorite programming language, others reflexively dismiss anything that's not "enterprise-grade". And of course, vendors are always in the mix trying to steer the conversation toward their own solution.
If you’ve built a production system in the last few years:
1. What queue did you choose?
2. What didn't work out?
3. Where did you regret adding complexity?
4. And if you stuck with a DB-based queue — did it scale?
I’d love to hear war stories, regrets, and opinions.
Mostly because it has been very reliable for years in production at a previous company, and doesn’t require babysitting. Recent versions also have new features that make it a decent alternative to Kafka if you don’t need to scale to the moon.
And the logo is a rabbit.
inb4 "oh but you wont be taken seriously" well... datadog.
In the case of a queue, you put an item in the queue, and then something removes it later. There is a single flow of items. They are put in. They are taken out.
In the case of a stream, you put an item in the stream, and it can then be read multiple times by any other process that cares to do so. This may be called 'fan-out'.
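To make that concrete, here's a rough sketch in Python against Redis (key names made up for illustration): the list commands give you queue semantics, the stream commands give you fan-out.

```python
# Minimal sketch of the queue-vs-stream distinction using redis-py.
import redis

r = redis.Redis()

# Queue semantics: LPUSH/BRPOP. Each item is delivered to exactly one
# consumer and is gone from the queue once taken.
r.lpush("jobs", "job-1")
_, job = r.brpop("jobs")              # whichever worker pops it, owns it

# Stream semantics: XADD/XREAD. The entry stays in the stream, and any
# number of independent readers can each read it from the beginning.
r.xadd("events", {"type": "user_registered", "user_id": "42"})
reader_a = r.xread({"events": "0"})   # reader A sees the entry
reader_b = r.xread({"events": "0"})   # reader B sees the same entry
```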
This is an important distinction and really affects how one designs software that uses these systems. Queues work just fine for, say, background jobs. A user signs up, and you put a task in the 'send_registration_email' queue.[1]
However, what if some _other_ system then cares about user sign ups? Well, you have to add another queue, and the user sign-up code needs to be aware of it. For example, a 'add_user_to_crm' queue.
The result here is that choosing a queue early on leads to tight coupling of services down the road.
The alternative is to choose streams. In this case, instead of saying what _should_ happen, you say what _did_ happen (past tense). Here you replace 'send_registration_email' and 'add_user_to_crm' with a single stream called 'user_registered'. Each service that cares about this fact is then free to subscribe to that stream and get its own copy of the events (it does so via a 'consumer group', or something of a similar name).
This results in a more loosely coupled system, where you potentially also have access to an event history should you need it (if you configure your broker to keep the events around).
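For illustration, a hedged sketch of that 'user_registered' stream with two consumer groups in Redis (group, consumer, and field names are all made up):

```python
# Each downstream service gets its own consumer group, so each one
# independently receives every event on the stream.
import redis

r = redis.Redis()

for group in ("email_service", "crm_service"):
    try:
        r.xgroup_create("user_registered", group, id="0", mkstream=True)
    except redis.ResponseError:
        pass  # group already exists

# The sign-up code only states what happened, once:
r.xadd("user_registered", {"user_id": "42", "email": "a@example.com"})

# Each group independently receives the same event:
email_events = r.xreadgroup("email_service", "worker-1",
                            {"user_registered": ">"})
crm_events = r.xreadgroup("crm_service", "worker-1",
                          {"user_registered": ">"})
# ...process, then XACK so the group knows the event was handled:
# r.xack("user_registered", "email_service", message_id)
```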
--
This is where Postgres and SQS tend to fall down. I've yet to hear of an implementation of streams in Postgres[2]. And SQS is inherently a queue.
I therefore normally reach for Redis Streams, but mostly because it is what I am familiar with.
Note: This line of thinking leads into Domain Driven Design, CQRS, and Event Sourcing. Each of which is interesting and certainly has useful things to offer, although I would advise against simply consuming any of them wholesale.
[1] Although this is my go-to example, I'm actually unconvinced that email sending should be done via a queue. Email is just a sequence of queues anyway.
[2] If you know of one please tell me!
- https://electric-sql.com (disclaimer: co-founder)
- https://feldera.com
- https://materialize.com
- https://powersync.com
- https://sequinstream.com
- https://supabase.com/docs/guides/realtime/broadcast
- https://zero.rocicorp.dev
Etc.
Would be interesting to get your take on queues vs streams on the below.
I consider myself a little late to the Postgres party after time with other NoSQL and RDBMS options, but it seems more and more like an OK place to start from.
For Streaming…
Supabase has some Kafka stream type examples that covers change data capture: https://supabase.com/blog/postgres-wal-logical-replication
Tables can also do some amount of stream-like behaviour with visibility and timeout semantics:
- pg-boss — durable job queues with visibility timeouts and retries.
- Zilla — supports Postgres as a source, using CDC to act as a stream.
- ElectricSQL — uses Postgres replication and CRDTs for reactive sync (great for frontend state as a stream).

Streaming inside Postgres also has some attention from:
- Postgres as Event Store (https://eventmodeling.org) — combines event sourcing with Postgres for stream modeling.
- pgmq — from Tembo — a minimal message queue built on Postgres with an append-only design; effectively works as a persistent stream with ordered delivery.
I’ve always felt that streams should be implementable via stored procedures, and that it would be a fun project. I’ve just never quite had the driving force to do it.
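A back-of-the-envelope sketch of the idea (plain SQL from Python via psycopg2 rather than stored procedures; every table and column name here is made up): an append-only events table plus a per-consumer cursor table. It glosses over real problems like sequence gaps under concurrent writers.

```python
import psycopg2

conn = psycopg2.connect("dbname=app")
conn.autocommit = True

with conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS events (
            id      bigserial PRIMARY KEY,   -- append-only, ordered
            topic   text NOT NULL,
            payload jsonb NOT NULL
        );
        CREATE TABLE IF NOT EXISTS cursors (
            consumer text PRIMARY KEY,       -- one row per consumer group
            last_id  bigint NOT NULL DEFAULT 0
        );
    """)

def poll(consumer, topic, batch=100):
    """Fetch the next batch for one consumer and advance its cursor."""
    with conn.cursor() as cur:
        cur.execute("SELECT last_id FROM cursors WHERE consumer = %s",
                    (consumer,))
        row = cur.fetchone()
        last_id = row[0] if row else 0
        cur.execute("""
            SELECT id, payload FROM events
            WHERE topic = %s AND id > %s
            ORDER BY id LIMIT %s
        """, (topic, last_id, batch))
        rows = cur.fetchall()
        if rows:  # each consumer keeps its own position, so reads fan out
            cur.execute("""
                INSERT INTO cursors (consumer, last_id) VALUES (%s, %s)
                ON CONFLICT (consumer)
                DO UPDATE SET last_id = EXCLUDED.last_id
            """, (consumer, rows[-1][0]))
        return rows
```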
However, I have noticed that oftentimes devs are using queues where Workflow Engines would be a better fit.
If your message processing time is in the tens of seconds – talk to your local Workflow Engine professional (:
AWS Step Functions or GCP Workflows if you are on the cloud.
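For the AWS flavour, handing off a long-running job is a single call via boto3. A hedged sketch (the state machine ARN and input fields below are placeholders; the machine itself, with its steps, retries, and timeouts, is defined separately):

```python
# Kick off a workflow execution instead of parking the job on a queue.
import json
import boto3

sfn = boto3.client("stepfunctions")

response = sfn.start_execution(
    stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:ProcessUpload",
    name="upload-42",                # execution names make retries idempotent
    input=json.dumps({"upload_id": 42}),
)
print(response["executionArn"])      # poll or subscribe for the result
```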
Kafka is a great tool with lots of very useful properties (not just queues, it can be your primary datastore), but it's not operationally simple. If you're going to use it you should fully commit to building your whole system on it and accept that you will need to invest in ops at least a little. It's not a good fit for a "side" feature on the edge of your system.
I didn't have anything but bad experiences with RabbitMQ. Maybe I just can't "cook" it, but it would always go split-brain; the last issue I had, a subset of clients connected to certain cluster nodes simply stopped receiving messages. A cluster restart helped, but all logs and metrics were green and clean. I try to avoid it if I can.
ZeroMQ is more like a building block for your applications. If you need something very special, it could be a good fit, but for a typical EDA-ish bus architecture Redis or Kafka/Redpanda are both very good.
We wanted replayability and multiple clients on the same topic, so we evaluated Kafka, but determined it was too operationally complex for our needs. Persistence was also unnecessary, as the data stream already had a separate archiving system and existing clients only needed about 24 hours of context at most. AWS Kinesis ended up being simpler for our needs, and I have nothing but good things to say about it for the most part. Streaming client support in Elixir was not as good as Kafka's, but writing our own adapter wasn’t too hard.