Summarising some context for folks:
From Synadia's Cease and Demand Letter:
> As should be clear, the NATS.io project has failed to thrive as a CNCF project, with essentially all growth of the project to date arising from Synadia’s efforts and at Synadia’s expense. It is for this reason that Synadia requests to end its relationship with the Foundation and receive full control of the nats.io domain name and Github repository within two weeks.
From Synadia's exit proposal:
> We propose that NATS.io exit from the CNCF Foundation effective immediately... Upon leaving CNCF, Synadia will adopt the Business Source License (BSL) for the NATS.io server... specific use cases (such as offering NATS as a managed service or integrating it into specific commercial offerings) will require a commercial license.
From CNCF's response:
> Let's be clear: this is not a typical license change or fork. It's an attempt to "take back" a mature, community-driven open source project and convert it into a proprietary product—after years of growth and collaboration under open governance and CNCF's stewardship.
Primary sources:
- Synadia's Cease and Demand Letter: https://github.com/cncf/foundation/blob/main/documents/nats/...
- Synadia's Exit Proposal: https://github.com/cncf/foundation/blob/main/documents/nats/...
- CNCF's Response: https://www.cncf.io/blog/2025/04/24/protecting-nats-and-the-...
From the exit proposal: "Over 97% of contributions to the NATS.io server were made by employees of Synadia and its predecessor company".
Also, when they applied for graduation in 2018, they were told no because most of the contributors worked for Synadia (https://github.com/cncf/toc/pull/168). Now, 7 years later, it still hasn't graduated, and at this point it likely never will.
Putting yourself in their shoes, are you surprised they want to take it back from CNCF?
On the other hand, if you know what you need and NATS supports it, NATS is IME the way to go (particularly JetStream).
RabbitMQ is good for 1 producer -> 1 consumer with ack/nack
Right?
1. ok
2. error
3. ok
4. ok
I cannot NACK message #2 alone, because that would leave message #1 un-ACKed as well.
Does NATS solve this? F.ex. can I get a reference to each message in my parallel workers for them to also say "I am not ACK-ing this because I failed processing it, let the next batch include it again"?
You can also 'nak' (negatively acknowledge) a message and specify a back-off period before it is re-delivered (NATS automatically re-delivers un-acked or nak'ed messages), which is useful when you temporarily can't process it. You can 'term' a message so it is never re-delivered (e.g. because the payload is bad), or even ask for more time before the ack deadline (if you are temporarily too slow at processing it).
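To make the four outcomes concrete, here is a toy model of JetStream-style per-message acknowledgement: ack, nak with back-off, term, and "in progress". The class and method names are illustrative only, not the real NATS client API; the point is that each message tracks its own state, so worker #2 can nak without affecting message #1.

```python
class PendingMessage:
    """Toy model of a JetStream-style pending message (not the real API)."""

    def __init__(self, payload, max_deliver=3):
        self.payload = payload
        self.deliveries = 0
        self.max_deliver = max_deliver   # give up after this many attempts
        self.done = False                # acked or terminated
        self.redeliver_at = 0.0          # earliest time of the next delivery

    def deliver(self, now):
        """Return the payload if the message is eligible for (re)delivery."""
        if self.done or self.deliveries >= self.max_deliver or now < self.redeliver_at:
            return None
        self.deliveries += 1
        return self.payload

    def ack(self):
        self.done = True                      # processed successfully, stop delivering

    def nak(self, now, delay=0.0):
        self.redeliver_at = now + delay       # failed; redeliver after a back-off

    def term(self):
        self.done = True                      # poison message: never redeliver

    def in_progress(self, now, extension):
        self.redeliver_at = now + extension   # "still working, don't redeliver yet"
```

Because each `PendingMessage` is independent, nak'ing one message simply schedules its own redelivery and leaves its neighbours' ack state untouched.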
Traditional message brokers (RabbitMQ and similar) do support event-driven architectures, but the data they handle is ephemeral: once a message has been processed, it is gone forever. Connecting a new raw data source is not an established practice and requires a technical "adapter" of sorts to be built. High concurrency is also problematic in scenarios where strict message-processing order is required: traditional message brokers do not handle that well in highly parallel setups out of the box.
Kafka and similar also support event-driven architectures, yet they allow the data to be processed multiple times: by existing consumers (i.e. a data replay) and, most importantly, by consumers that were new or unknown at the time of ingestion (note: this is distinct from a data replay!). This allows you to plug existing data sources into a data streaming platform (Kafka) and incrementally add new data consumers and processors over time, with the datasets remaining available intact. This is an important distinction. Kafka and similar also improve on the strict processing-order guarantee by allowing a message source (a Kafka topic) to be explicitly partitioned, guaranteeing that message order will be retained and enforced for a consumer group receiving messages from a given partition.
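The per-partition ordering guarantee comes from key-based partitioning: messages with the same key always hash to the same partition, so one consumer draining that partition sees them in publish order. A toy sketch of the idea (Kafka's default partitioner actually uses murmur2; `crc32` here is just an illustrative stand-in):

```python
import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    # A stable hash of the key picks the partition, so the same key
    # always maps to the same partition.
    return zlib.crc32(key) % num_partitions

NUM_PARTITIONS = 4
topic = [[] for _ in range(NUM_PARTITIONS)]  # one ordered log per partition

for key, value in [(b"user-1", "a"), (b"user-2", "x"),
                   (b"user-1", "b"), (b"user-1", "c")]:
    topic[partition_for(key, NUM_PARTITIONS)].append(value)

# Every event for user-1 landed in the same partition, in publish order.
user1_partition = topic[partition_for(b"user-1", NUM_PARTITIONS)]
```

Ordering is only guaranteed per key/partition, not across the whole topic, which is exactly the trade-off that lets the rest of the topic be consumed in parallel.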
To recap: traditional message brokers are a good fit for handling ephemeral data, and data streaming platforms are a good fit for connecting data sources and allowing the data to be consumed multiple times. Both implement and support event-driven architectures in a variety of scenarios.
So you can, for example, ask for [the first/the last/all] messages on a particular subject, or on a hierarchy of subjects by using wildcards. All the filtering is done at the server level.
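The wildcard semantics are simple: subjects are dot-separated tokens, `*` matches exactly one token, and `>` matches one or more trailing tokens. A minimal matcher sketching those rules (the function name is ours, not a client API):

```python
def subject_matches(pattern: str, subject: str) -> bool:
    """Check a subject against a NATS-style wildcard pattern."""
    pt, st = pattern.split("."), subject.split(".")
    for i, tok in enumerate(pt):
        if tok == ">":
            return len(st) > i      # '>' must match at least one trailing token
        if i >= len(st):
            return False            # subject ran out of tokens
        if tok != "*" and tok != st[i]:
            return False            # '*' matches any single token; else exact match
    return len(pt) == len(st)       # no wildcard left: lengths must agree
```

So a consumer filtered on `orders.*.created` sees `orders.eu.created` but not `orders.eu.cancelled`, while `orders.>` sees everything below `orders` (but not the bare `orders` subject itself).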