Posted by u/enether 7 months ago
Ask HN: What's your go-to message queue in 2025?
The space is confusing to say the least.

Message queues are usually a core part of any distributed architecture, and the options are endless: Kafka, RabbitMQ, NATS, Redis Streams, SQS, ZeroMQ... and then there's the “just use Postgres” camp for simpler use cases.

I’m trying to make sense of the tradeoffs between:

- async fire-and-forget pub/sub vs. sync RPC-like point-to-point communication

- simple FIFO vs. priority queues and delay queues

- intelligent brokers (e.g. RabbitMQ, NATS with filters) vs. minimal brokers (e.g. Kafka’s client-driven model)

There's also a fair amount of ideology/emotional attachment - some folks root for underdogs written in their favorite programming language, others reflexively dismiss anything that's not "enterprise-grade". And of course, vendors are always in the mix trying to steer the conversation toward their own solution.

If you’ve built a production system in the last few years:

1. What queue did you choose?

2. What didn't work out?

3. Where did you regret adding complexity?

4. And if you stuck with a DB-based queue — did it scale?

I’d love to hear war stories, regrets, and opinions.

speedgoose · 7 months ago
I played with most message queues and I go with RabbitMQ in production.

Mostly because it has been very reliable for years in production at a previous company, and doesn’t require babysitting. Its recent versions also have new features that make it a decent alternative to Kafka if you don’t need to scale to the moon.

And the logo is a rabbit.

swyx · 7 months ago
Datadog too. I often wonder how come more companies don't pick cute mascots. Gives a logo, makes everyone have warm fuzzies immediately, creates pun opportunities.

inb4 "oh but you won't be taken seriously" well... datadog.

DonsDiscountGas · 7 months ago
Hugging face clearly shares the same philosophy
aitchnyu · 7 months ago
Just used it as a Celery (job queue) backend. How is it a Kafka alternative?
speedgoose · 7 months ago
KingOfCoders · 7 months ago
NATS.io, because I'm using Go and I can just embed it for one server [0]: one binary to deploy with systemd, but able to split it out when scaling the MVP.

[0] https://www.inkmi.com/blog/how-i-made-inkmi-selfhealing
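
For anyone curious, a rough sketch of what that embedding looks like. This is not from the blog post above; the port, subject name, and error handling are illustrative, and it assumes the nats-server and nats.go modules:

    package main

    import (
        "log"
        "time"

        "github.com/nats-io/nats-server/v2/server"
        "github.com/nats-io/nats.go"
    )

    func main() {
        // Run the NATS server inside the application process.
        ns, err := server.NewServer(&server.Options{Port: 4222})
        if err != nil {
            log.Fatal(err)
        }
        go ns.Start()
        if !ns.ReadyForConnections(5 * time.Second) {
            log.Fatal("embedded NATS did not start in time")
        }

        // Connect to it like any external NATS endpoint; when the broker
        // is later split out, only this URL has to change.
        nc, err := nats.Connect(ns.ClientURL())
        if err != nil {
            log.Fatal(err)
        }
        defer nc.Close()

        sub, _ := nc.SubscribeSync("user.registered") // example subject
        nc.Publish("user.registered", []byte("hello"))

        msg, err := sub.NextMsg(2 * time.Second)
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("got: %s", msg.Data)
    }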

adamcharnock · 7 months ago
I would highlight a distinction between Queues and Streams, as I think this is an important factor in making this choice.

In the case of a queue, you put an item in the queue, and then something removes it later. There is a single flow of items. They are put in. They are taken out.

In the case of a stream, you put an item in, and it can then be read by any number of other processes that care to do so, each getting its own copy. This may be called 'fan-out'.

This is an important distinction and really affects how one designs software that uses these systems. Queues work just fine for, say, background jobs. A user signs up, and you put a task in the 'send_registration_email' queue.[1]

However, what if some _other_ system then cares about user sign ups? Well, you have to add another queue, and the user sign-up code needs to be aware of it. For example, a 'add_user_to_crm' queue.

The result here is that choosing a queue early on leads to a tight-coupling of services down the road.

The alternative is to choose streams. In this case, instead of saying what _should_ happen, you say what _did_ happen (past tense). Here you replace 'send_registration_email' and 'add_user_to_crm' with a single stream called 'user_registered'. Each service that cares about this fact is then free to subscribe to that stream and get its own copy of the events (it does so via a 'consumer group', or something of a similar name).

This results in a more loosely coupled system, where you potentially also have access to an event history should you need it (if you configure your broker to keep the events around).
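
To make the fan-out concrete, here is a minimal sketch with Redis Streams consumer groups (go-redis client; the stream, group, and field names are made up for illustration). Each group keeps its own cursor over the same 'user_registered' stream, so the producer never needs to know who is listening:

    package main

    import (
        "context"
        "fmt"

        "github.com/redis/go-redis/v9"
    )

    func main() {
        ctx := context.Background()
        rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

        // The producer only states what happened.
        rdb.XAdd(ctx, &redis.XAddArgs{
            Stream: "user_registered",
            Values: map[string]interface{}{"user_id": "42", "email": "a@example.com"},
        })

        // Each interested service gets its own consumer group (its own cursor).
        // Re-running this returns a BUSYGROUP error, ignored here for brevity.
        for _, group := range []string{"emailer", "crm-sync"} {
            rdb.XGroupCreateMkStream(ctx, "user_registered", group, "0")
        }

        // One worker in the "emailer" group reads and acks its copy;
        // "crm-sync" would do the same independently.
        res, err := rdb.XReadGroup(ctx, &redis.XReadGroupArgs{
            Group:    "emailer",
            Consumer: "worker-1",
            Streams:  []string{"user_registered", ">"},
            Count:    10,
        }).Result()
        if err != nil {
            panic(err)
        }
        for _, msg := range res[0].Messages {
            fmt.Println("emailer got", msg.Values)
            rdb.XAck(ctx, "user_registered", "emailer", msg.ID)
        }
    }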

--

This is where Postgresql and SQS tend to fall down. I've yet to hear of an implementation of streams in Postgresql[2]. And SQS is inherently a queue.

I therefore normally reach for Redis Streams, but mostly because it is what I am familiar with.

Note: This line of thinking leads into Domain Driven Design, CQRS, and Event Sourcing. Each of which is interesting and certainly has useful things to offer, although I would advise against simply consuming any of them wholesale.

[1] Although this is my go-to example, I'm actually unconvinced that email sending should be done via a queue. Email is just a sequence of queues anyway.

[2] If you know of one please tell me!

thruflo · 7 months ago
adamcharnock · 7 months ago
I think these all relate to streaming data. Not streams in the sense of the data-structure for message passing (a la Kafka, Redis Streams, etc)
j45 · 7 months ago
While someone's use case would have to be verified, the list below shows that there are streaming options in Postgres.

Would be interesting to get your take on queues vs streams on the below.

I consider myself a little late to the Postgres party after time with other NoSQL and RDBMS options, but it seems more and more an OK place to consider beginning from.

For Streaming…

Supabase has some Kafka-stream-type examples that cover change data capture: https://supabase.com/blog/postgres-wal-logical-replication

Tables can also do some amount of stream-like behaviour with visibility and timeout behaviours:

pg-boss - durable job queues with visibility timeouts and retries.

Zilla - supports Postgres as a source, using CDC to act as a stream.

ElectricSQL - uses Postgres replication and CRDTs for reactive sync (great for frontend state as a stream).

Streaming inside Postgres also gets some attention from the event modeling side:

Postgres as an event store (https://eventmodeling.org) - this can combine event sourcing with Postgres for stream modeling.

pgmq - from Tembo, a minimal message queue built on Postgres using an append-only design. Effectively works as a persistent stream with ordered delivery.

adamcharnock · 7 months ago
I suspect this comment is LLM generated. There is a 404-ing URL, discussion of queues, and some discussion of Postgres CDC, which I believe is Postgres logical replication. Neither of those is a streams implementation on Postgres.
vlvdus · 7 months ago
What makes Postgres (or any decent relational DB) fall down in this case?
adamcharnock · 7 months ago
It is simply that I’m unaware of a streams implementation for postgresql. Although another comment is mentioning them, so I’ll read that in some more detail shortly.

I’ve always felt that streams should be implementable via stored procedures, and that it would be a fun project. I’ve just never quite had the driving force to do it.

ryandvm · 7 months ago
Great comment. I'm disappointed that I had to scroll this far down to see someone pointing out that queues and streams ARE NOT THE SAME.
bilinguliar · 7 months ago
I am using Beanstalkd, it is small and fast and you just apt-get it on Debian.

However, I have noticed that oftentimes devs are using queues where Workflow Engines would be a better fit.

If your message processing time is in tens of seconds – talk to your local Workflow Engine professional (:

janstice · 7 months ago
In that case, any suggestions if one were looking for workflow engines instead? Ideally something that will work for no-person-in-the-middle workloads in the tens-of-seconds range as well as person-making-a-decision workflows that can live anywhere from minutes to months?
bilinguliar · 7 months ago
Temporal if you do not want vendor locks.

AWS Step Functions or GCP Workflows if you are on the cloud.

dkh · 7 months ago
A classic. Not something I personally use these days, but I think just as a piece of software it is an eternally good example of something simple, powerful, well-engineered, pleasant to use, and widely-compatible, all at the same time
wordofx · 7 months ago
Postgres. Doing ~ 70k messages/second average. Nothing huge but don’t need anything dedicated yet.
lawn · 7 months ago
I'm curious on how people use Postgres as a message queue. Do you rely on libraries or do you run a custom implementation?
ericaska · 7 months ago
We also use Postgres, but we don't have many jobs. It's usually 10-20 schedules that create hourly-to-monthly jobs, and they are mostly independent. Currently a custom-made solution, but we are going to update it to use SKIP LOCKED plus NOTIFY/LISTEN with an interval to handle jobs. There is a really good video about it on YouTube called "Queues in PostgreSQL" from Citus Con.
padjo · 7 months ago
You can go an awfully long way with just SELECT … FOR UPDATE … SKIP LOCKED
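
Roughly, the pattern looks like this. A sketch, not production code: the jobs table, columns, and connection string are made up, using database/sql with the pgx stdlib driver. Each worker claims one row inside a transaction, and concurrent workers skip rows that are already locked:

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/jackc/pgx/v5/stdlib"
    )

    // claimJob pops one pending job, holding its row lock until the
    // transaction commits; concurrent workers skip locked rows.
    func claimJob(db *sql.DB) error {
        tx, err := db.Begin()
        if err != nil {
            return err
        }
        defer tx.Rollback()

        var id int64
        var payload string
        err = tx.QueryRow(`
            SELECT id, payload
            FROM jobs
            WHERE status = 'pending'
            ORDER BY id
            LIMIT 1
            FOR UPDATE SKIP LOCKED`).Scan(&id, &payload)
        if err == sql.ErrNoRows {
            return nil // queue is empty
        }
        if err != nil {
            return err
        }

        // ... do the actual work with payload here ...

        if _, err := tx.Exec(`UPDATE jobs SET status = 'done' WHERE id = $1`, id); err != nil {
            return err
        }
        return tx.Commit()
    }

    func main() {
        // Connection string is illustrative.
        db, err := sql.Open("pgx", "postgres://localhost/app?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        if err := claimJob(db); err != nil {
            log.Fatal(err)
        }
    }
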
j45 · 7 months ago
Built right in using a group of pg functions, or with a library, or with a Python-based tool that happens to use pg for the queue.
wordofx · 7 months ago
Just select for update skip locked. Table is partitioned to keep unprocessed small.
iamcalledrob · 7 months ago
Curious what kind of hardware you're using for that 70K/s?
wordofx · 7 months ago
It’s an r8g instance in AWS. Can’t remember the size, but I think it’s over-provisioned because it’s at like 20% utilisation and only spikes to 50%.
aynyc · 7 months ago
What’s your batch size?
lmm · 7 months ago
SQS is great if you're already on AWS - it works and gets out of your way.

Kafka is a great tool with lots of very useful properties (not just queues, it can be your primary datastore), but it's not operationally simple. If you're going to use it you should fully commit to building your whole system on it and accept that you will need to invest in ops at least a little. It's not a good fit for a "side" feature on the edge of your system.

mstaoru · 7 months ago
Redis Streams is a "go-to" for me, mostly because of operational simplicity and performance. It's also dead simple to write consumers in any language. If I had more stringent durability requirements, I would probably pick Redpanda, but Kafka-esque (!) processing semantics can be daunting sometimes.

I didn't have anything but bad experiences with RabbitMQ. Maybe I just can't "cook" it, but it would always go split-brain, or, as with the last issue I had, some clients connected to certain clustered nodes just stopped receiving messages. A cluster restart helped, but all logs and all metrics were green and clean. I try to avoid it if I can.

ZeroMQ is more like a building block for your applications. If you need something very special, it could be a good fit, but for a typical EDA-ish bus architecture Redis or Kafka/Redpanda are both very good.

jolux · 7 months ago
Kafka is fairly different from the rest of these: it’s persistent and designed for high read throughput to multiple clients at the same time, as some other commenters have pointed out.

We wanted replayability and multiple clients on the same topic, so we evaluated Kafka, but we determined it was too operationally complex for our needs. Persistence was also unnecessary as the data stream already had a separate archiving system and existing clients only needed about 24hr max of context. AWS Kinesis ended up being simpler for our needs and I have nothing but good things to say about it for the most part. Streaming client support in Elixir was not as good as Kafka but writing our own adapter wasn’t too hard.