osigurdson · 2 months ago
I like this article. Lots of comments are stating that they are "using it wrong" and I'm sure they are. However, it does help to contrast the much more common, "use Postgres for everything" type sentiment. It is pretty hard to use Postgres wrong for relational things in the sense that everyone knows about indexes and so on. But using something like L/N comes with a separate learning curve anyway - evidenced in this case by someone having to read comments in the Postgres source code itself. Then if it turns out that it cannot work for your situation it may be very hard to back away from as you may have tightly integrated it with your normal Postgres stuff.

I've landed on Postgres/ClickHouse/NATS since together they handle nearly any conceivable workload, managing relational, columnar, and messaging/streaming very well. It is also not painful at all to use as it is lightweight and fast/easy to spin up in a simple docker compose. Postgres is of course the core and you don't always need all three, but they complement each other very well imo. This has been my "go to" for a while.

jelder · 2 months ago
"use Postgres for everything" is certainly wrong, eventually. It's still the second-best choice for every new project, and most products will never see the traffic levels that justify using something more specialized. Obviously, recall.ai hit the level of traffic where Postgres was no longer ideal. I bet they don't regret it for the other parts of their product.
closeparen · 2 months ago
They aren't even questioning its use as a database, just as an event bus.
lytedev · 2 months ago
What is the first-best choice for a new project? SQLite?
riedel · 2 months ago
Actually LISTEN/NOTIFY also doesn't scale the other way. Immich also moved to that Postgres-for-everything mentality (trying to remove Redis dependencies). The problem: Postgres needs a WAL flush for every notification. I ran Immich on my HDD NAS. The result was constant disk noise, because the Postgres-backed socket.io backend issues constant keep-alive messages.
j45 · 2 months ago
There’s no reason this article and "start with Postgres for everything" can’t both be true.

In the beginning having fewer parts to connect and maintain lets the needs and bottlenecks of the actual application emerge.

If listen/notify in such a scenario hits some volume where optimizing it isn’t in the cards… so be it. It would be some time down the road before splitting that function out into a specific subsystem like the one you described.

Appreciate learning about the Postgres/ClickHouse/NATS combo. If there’s an article on the three together that you liked, I’d be happy to read and learn.

dathinab · 2 months ago
Honestly, whatever kind of DB you are talking about, always be wary of "niche/side features" which don't fit its core design goals; they tend to have unexpected limitations.

listen/notify isn't necessarily a replacement for Redis or other pub/sub systems, and Redis pub/sub and similar aren't necessarily a replacement for, idk, Kafka or similar queue/messaging systems

but a lot of companies have (by modern standards) surprisingly small amounts of data, and even a 2x, 3x, or 4x increase still isn't that big. In that case listen/notify and similar might just work fine :shrug:

The same is true the other way around: depending on your application you can go Redis-only, as long as your data volume stays small enough and your transactional/sync needs are simple enough (with WATCH+EXEC, NX/XX options, etc. and maybe some Redis-side Lua scripts you can do quite a lot for data synchronization). The issue is that, stylistically, Redis data-sync/transaction code is often much more like writing atomic data structures than like SQL transactions, and even for SQL transactions there is a trend of devs severely overestimating what they provide, so often you are better off not touching it when you can avoid it. Also, BTW, Redis has something very similar to SQLite or NOTIFY where "basically" (oversimplified by a lot) there is only one set of writes done at a time ;) (and then afterwards distributed to replicas), just that outside of some micro Lua scripts you don't really run much logic beyond some NX, XX checks etc., so it's not blocking much, and it's "more or less" all just in memory, not touching a WAL (again oversimplified).

osigurdson · 2 months ago
>> the same is true the other way around: depending on your application you can go Redis-only

Really, the primary reason not to try stuff like this (at least for me) is that I feel I won't paint myself into a corner with Postgres. I can always add a table here or a join there and things will work. If I need columnar, I use ClickHouse, and NATS for messaging. I know these well but still gravitate toward Postgres because I feel it can grow in whatever direction is needed. However, it is true, I have thought about trying to just use NATS KV and make all services stateful, receiving notifications when things change. It does seem that it could massively simplify some things, but I expect there could be some sharp edges in the face of unknown requirements. If one could just design for exactly the problem at hand it would be different, but it never seems to work out like that.

ownagefool · 2 months ago
Largely agree. Functionality-wise, if you don't have many jobs, using the database as the queue is fine.

However, I've been in several situations where scaling the queue brings down the database, and therefore the app, and am thus of the opinion you probably shouldn't couple these systems too tightly.

There are pros and cons, of course.

mike_hearn · 2 months ago
Using the database for queues is more than fine, it's often essential to correctness. In many use cases for queues you need to atomically update the database with respect to popping from the queue, and if they're separate systems you end up needing either XA or brittle and unreliable custom idempotency logic. I've seen this go wrong before and it's not nice, the common outcome is business-visible data corruption that can have financial impact.

This seems like another case where Postgres gets free marketing due to companies hitting its technical limits. I get why they choose to make lemonade in these cases with an eng blog post, but this is a way too common pattern on HN. Some startup builds on Postgres then spends half their eng budget at the most critical growth time firefighting around its limits instead of scaling their business. OpenAI had a similar blog post a couple of months ago where they revealed they were probably spending more than quarter of a million a month on an Azure managed Postgres, and it had stopped scaling so they were having to slowly abandon it, where I made the same comment [1].

Postgres is a great DB for what you pay, but IMHO well capitalized blitzscaling startups shouldn't be using it. If you buy a database - and realistically most Postgres users do anyway as they're paying for a cloud managed db - then you might as well just buy a commercial DB with an integrated queue engine. I have a financial COI because I have a part time job there in the research division (on non-DB stuff), so keep that in mind, but they should just migrate to an Oracle Database. It has a queue engine called TxEQ which is implemented on top of database tables with some C code for efficient blocking polls. It scales horizontally by just adding database nodes whilst retaining ACID transactions, and you can get hosted versions of them in all the major clouds. I'm using it in a project at the moment and it's been working well. In particular the ability to dequeue a message into the same transaction that does other database writes is very useful, as is the exposed lock manager.

Beyond scaling horizontally the nice thing about TxEQ/AQ is that it's a full message queue broker with all the normal features you'd expect. Delayed messages, exception queues, queue browsing, multi-consumer etc. LISTEN/NOTIFY is barely a queue at all, really.

For startups like this, the amount of time, money and morale they are losing with all these constant stories of firefights just doesn't make sense to me. It doesn't have to be Oracle, there are other DBs that can do this too. But "We discovered X about Postgres" is a eng blog cliché by this point. You're paying $$$ to a cloud and GPU vendor anyway, just buy a database and get back to work!

[1] https://news.ycombinator.com/item?id=44074506

KronisLV · 2 months ago
> Postgres/ClickHouse/NATS

Maybe throw in a dedicated key-value store like Redis or Valkey.

Oh and maybe something S3 compatible like MinIO, Garage or SeaweedFS for storing bunches of binary data.

With all of that, honestly it should cover most of the common workloads out there! Of course, depends on how specialized vs generic you like your software to be.

whaleofatw2022 · 2 months ago
NATS does KV pretty well now (didn't have expiration till earlier this year)
goodkiwi · 2 months ago
I’ve been meaning to check out NATS - I’ve tended to default to Redis for pubsub. What are the main advantages? I use clickhouse and Postgres extensively
sbstp · 2 months ago
I've been disappointed by Nats. Core Nats is good and works well, but if you need stronger delivery guarantees you need to use Jetstream which has a lot of quirks, for instance it does not integrate well with the permission system in Core Nats. Their client SDKs are very buggy and unreliable. I've used the Python, Rust and Go ones, only the Go one worked as expected. I would recommend using rabbitmq, Kafka or redpanda instead of Nats.
osigurdson · 2 months ago
NATS gives you regular pub/sub but also streams as well (similar to Kafka along with strong durability guarantees, etc).
riku_iki · 2 months ago
> However, it does help to contrast the much more common, "use Postgres for everything" type sentiment.

I think the sentiment is to use it "for everything in 99% of business cases", which involves a few hundred GB of data with some thousands of QPS, and can be handled by PG very well.

cryptonector · 2 months ago
I think PG could relax the ordering thing with NOTIFYs since... it seems a bit silly, but NOTIFYs already are unsafe to use because there is no authorization around channel access, so one might as well use change data capture (logical replication, basically) instead.
fathomdeez · 2 months ago
This kind of issue always comes up when people put business logic inside the database. Databases are for data. The data goes in and the data goes out, but the data does not get to decide what happens next based on itself. That's what application code is for.
tsimionescu · 2 months ago
The way you model data and store it in your database is fundamentally a part of your business logic. The same data can be modeled in many different ways, with different trade-offs for different use cases. Especially if you have a large amount of data, you can't just work with it as is, you need to know how you will use it and model it in a way that makes the common operations fast enough. As your application evolves, this may change, and even require data migrations.

None of this means you have to or even should use stored procedures, triggers, or listen/notify. I'm just making the point that there is no clean separation between "data" and "business logic".

chatmasta · 2 months ago
The first thing I did when I saw this article was to check the Postgres docs, because I thought "heh, surely they just didn't read the fine print," but the LISTEN/NOTIFY page has zero mentions of "lock" in the entire content.
physix · 2 months ago
That may hold to a certain extent for relational databases where your business model doesn't align well with the physical model (tables). Although you might wonder why stored procedures and triggers were invented.

In databases where your domain is also your physical data model, coupling business logic to the database can work quite well, if the DBMS supports that.

https://medium.com/@paul_42036/entity-workflows-for-event-dr...

bevr1337 · 2 months ago
> the data does not get to decide what happens next based on itself.

Then why bother with a relational database? Relations and schemas are business logic, and I'll take all the data integrity I can get.

platzhirsch · 2 months ago
If you want your database to just store bytes, use a key-value store. But SQL gives you schemas and constraints for a reason; they're guardrails for your business logic. Just don’t ask your tables to run the business for you.
whstl · 2 months ago
This is one of those absolute statements that cause the kind of problem stated by grandparent. There are lots of those: "Use Postgres for everything", "No business data on the DB", "No methods bigger than 10 lines", "Abstractions only after 3 usages".

Back to the topic: Lots of potential bugs and data corruption issues are solved by moving part of the business logic to the database. Other people already covered two things: data validation and queue atomicity.

On the other hand, lots of potential issues can also arise by putting other parts of business logic to the database, for example, calling HTTPS endpoints from inside the DB itself is highly problematic.

The reality is that the world is not black and white, and being an engineer is about navigating this grey area.

panzi · 2 months ago
So what are your thoughts on constraints then? Foreign keys? Should that only be handled by the application, like Rails does (or did; I haven't used it in a long time)?
cryptonector · 2 months ago
You're reaching the wrong conclusion, probably because of confirmation bias. Certainly this LISTEN/NOTIFY problem does not lead to your conclusion, nor does it support it. After all if you were relying on LISTEN/NOTIFY you could instead rely on logical replication decoding / CDC instead. And heck, you could even have a client connected to the same database that uses logical decoding to pick up events worth NOTIFYing about and then does just that, but without burdening any other transactions.
KronisLV · 2 months ago
> That's what application code is for.

I've seen people who disagree with that statement and say that having a separate back end component often leads to overfetching and in-database processing is better. I've worked on some systems where the back end is essentially just passing data to and from stored procedures.

It was blazing fast, but working with it absolutely sucked - though for whatever reason the people who believe that seem to hold those views quite strongly.

Cthulhu_ · 2 months ago
It really depends, but it's also a factor of time, that is, "back in the day", databases were designed to serve many different clients, nowadays a common practice is to have a 1:1 relationship between a database and a client application.

Of course, this is sometimes abused and taken to extremes in a microservices architecture where each service has their own database and you end up with nastiness like data duplication and distributed locking.

sgarland · 2 months ago
Disagree; these issues come up when people use more advanced features of DBs without having the requisite DB expertise on staff. I’ll give OP that Postgres’ docs do not mention this gotcha (and props to them for drilling down to source code!), but by and large, these issues are from people operating via tech blogs.

The DB is - or should be - the source of truth for your application. Also, since practically everyone is using cloud RDBMS with (usually) networked storage, the latency is atrocious. Given those, it seems silly to rely on an application to react to and direct changes to related data.

For example, if you want to soft-delete customer data while maintaining the ability to hard-delete, then instead of having an is_deleted and/or deleted_at column, have a duplicate table or tables, and an AFTER DELETE trigger on the originals that move the tuples to the other tables.
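
A rough sketch of that pattern (hypothetical table names, nothing production-ready):

  -- archive table mirroring the original
  CREATE TABLE customers_deleted (LIKE customers);

  CREATE FUNCTION archive_deleted_customer() RETURNS trigger
  LANGUAGE plpgsql AS $$
  BEGIN
    -- AFTER DELETE: the row is already gone, but OLD still holds its values
    INSERT INTO customers_deleted SELECT OLD.*;
    RETURN OLD;  -- return value is ignored for AFTER triggers
  END;
  $$;

  CREATE TRIGGER customers_archive_delete
    AFTER DELETE ON customers
    FOR EACH ROW EXECUTE FUNCTION archive_deleted_customer();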

Or if you want to have get_or_create without multiple round trips (and you don’t have Postgres’ MERGE … RETURNING), you can easily accomplish this with a stored procedure.
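
Something like this, assuming a hypothetical users table with a unique constraint on email:

  CREATE FUNCTION get_or_create_user(p_email text) RETURNS users
  LANGUAGE plpgsql AS $$
  DECLARE
    u users;
  BEGIN
    INSERT INTO users (email) VALUES (p_email)
    ON CONFLICT (email) DO NOTHING
    RETURNING * INTO u;
    IF NOT FOUND THEN
      -- the row already existed, so fetch it instead
      SELECT * INTO u FROM users WHERE email = p_email;
    END IF;
    RETURN u;
  END;
  $$;

One round trip from the application either way.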

Using database features shouldn’t be seen as verboten or outdated. What should be discouraged is not treating things like stored procedures and triggers as code. They absolutely should be in VCS, should go the same review process as anything else, and should be well-documented.

djfivyvusn · 2 months ago
That's purely because nobody knows how to write SQL let alone stored procedures. If stored procedures had better devex they'd be used for most of your app.

Deleted Comment

v5v3 · 2 months ago
Isn't Kafka the Postgresql of pub/sub

I.e. use Kafka unless you have an explicit reason not to?

So why Nats?

evnix · 2 months ago
After working with NATS, I wouldn't want to touch Kafka even with a long stick. It's just too complex and a memory hog for no good reason, and it doesn't have all the features that NATS supports either.
the_duke · 2 months ago
Kafka is far from trivial to operate, for one thing, even post zookeeper.
sorentwo · 2 months ago
Postgres LISTEN/NOTIFY was a consistent pain point for Oban (background job processing framework for Elixir) for a while. The payload size limitations and connection pooler issues alone would cause subtle breakage.

It was particularly ironic because Elixir has a fantastic distribution and pubsub story thanks to distributed Erlang. That’s much more commonly used in apps now compared to 5 or so years ago, when 40-50% of apps weren’t clustered. Thanks to the rise of platforms like Fly that made it easier, and the decline of Heroku that made it nearly impossible.

cpursley · 2 months ago
How did you resolve this? Did you consider listening to the WAL?
sorentwo · 2 months ago
We have Postgres based pubsub, but encourage people to use a distributed Erlang based notifier instead whenever possible. Another important change was removing insert triggers, partially for the exact reasons mentioned in this post.
parthdesai · 2 months ago
Distributed Erlang if application is clustered, redis if it is not.

Source: Dev at one of the companies that hit this issue with Oban

nightpool · 2 months ago
What about Heroku made Erlang clustering difficult? It's had the same DNS clustering feature that Fly has, and they've had it since 2017: https://devcenter.heroku.com/articles/dyno-dns-service-disco....
sorentwo · 2 months ago
The problem was with restrictive connections, not DNS based discovery for clustering. It wasn't possible (as far as I'm aware) to connect directly from one dyno to another through tcp/udp.
alberth · 2 months ago
I didn’t realize Oban didn’t use Mnesia (Erlang built-in).
sorentwo · 2 months ago
Very, very few applications use Mnesia. There’s absolutely no way I would recommend it over Postgres.
JoelJacobson · 2 months ago
Hey folks, I ran into similar scalability issues and ended up building a benchmark tool to analyze exactly how LISTEN/NOTIFY behaves as you scale up the number of listeners.

Turns out that all Postgres versions from 9.6 through current master scale linearly with the number of idle listeners — about 13 μs extra latency per connection. That adds up fast: with 1,000 idle listeners, a NOTIFY round-trip goes from ~0.4 ms to ~14 ms.

To better understand the bottlenecks, I wrote both a benchmark tool and a proof-of-concept patch that replaces the O(N) backend scan with a shared hash table for the single-listener case — and it brings latency down to near-O(1), even with thousands of listeners.

Full benchmark, source, and analysis here: https://github.com/joelonsql/pg-bench-listen-notify

No proposals yet on what to do upstream, just trying to gather interest and surface the performance cliff. Feedback welcome.

cryptonector · 2 months ago
That's pretty cool.

IMO LISTEN/NOTIFY is badly designed as an interface to begin with because there is no way to enforce access controls (who can notify; who can listen) nor is there any way to enforce payload content type (e.g., JSON). It's very unlike SQL to not have `CREATE CHANNEL` and `GRANT` commands for dealing with authorization to listen/notify.

If you have authz then the lack of payload content type constraints becomes more tolerable, but if you add a `CREATE CHANNEL` you might as well add something there regarding payload types, or you might as well just make it so it has to always be JSON.

With a `CREATE CHANNEL` PG could provide:

  - authz for listen
  - authz for notify
  - payload content type constraints
    (maybe always JSON if you CREATE
    the channel)
  - select different serialization
    semantics (to avoid this horrible,
    no good, very bad locking behavior)
  - backwards-compatibility for listen/
    notify on non-created channels

maxbond · 2 months ago
> there is no way to enforce access controls

(I thought this was a fun puzzle, so don't take this as advice or as disagreement with your point.)

There is the option to use functions with SECURITY DEFINER to hack around this, but the cleanest way to do it (in the current API) would be to encrypt your messages on the application side using an authenticated system (eg AES-GCM). You can then apply access control to the keys. (Compromised services could still snoop on when adjacent channels were in use, however.)
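
A minimal sketch of the SECURITY DEFINER variant (hypothetical channel and role names). Note it only funnels cooperative callers through the function; nothing stops a role from calling NOTIFY on the channel directly, which is the parent's point:

  -- runs with the owner's rights; grant EXECUTE only to trusted roles
  CREATE FUNCTION notify_orders(payload text) RETURNS void
  LANGUAGE sql SECURITY DEFINER AS $$
    SELECT pg_notify('orders', payload);
  $$;

  REVOKE ALL ON FUNCTION notify_orders(text) FROM PUBLIC;
  GRANT EXECUTE ON FUNCTION notify_orders(text) TO order_service;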

infogulch · 2 months ago
Cool! This article and thread have already been referenced on the mailing list; maybe it's worth mentioning this benchmark and experiment there.

https://www.postgresql.org/message-id/flat/CAM527d_s8coiXDA4...

https://www.postgresql.org/message-id/flat/175222328116.3157...

WhyNotHugo · 2 months ago
Thanks for attacking this issue (even if still in a research phase, that's definitely a needed start).

I'm amused at how op brags about the huge scale at which they operate, but instead of even considering fixing the issue (both for themselves and for others), they just switched to something else for pubsub.

leontrolski · 2 months ago
I'd be interested as to how dumb-ol' polling would compare here (the FOR UPDATE SKIP LOCKED method https://leontrolski.github.io/postgres-as-queue.html). One day I will set up some benchmarks as this is the kind of thing people argue about a lot without much evidence either way.
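
For anyone who hasn't seen it, the shape of that method is roughly this (hypothetical table and column names, not the code from the linked post):

  -- claim one pending job, skipping rows other workers have already locked
  UPDATE jobs
  SET status = 'processing'
  WHERE id = (
    SELECT id FROM jobs
    WHERE status = 'pending'
    ORDER BY id
    LIMIT 1
    FOR UPDATE SKIP LOCKED
  )
  RETURNING id, payload;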

Wasn't aware of this AccessExclusiveLock behaviour - a reminder (and shameless plug 2) of how Postgres locks interact: https://leontrolski.github.io/pglockpy.html

qianli_cs · 2 months ago
My colleague did some internal benchmarking and found that LISTEN/NOTIFY performs well under low to moderate load, but doesn't scale well with a large number of listeners. Our findings were pretty consistent with this blog post.

(Shameless plug [1]) I'm working on DBOS, where we implemented durable workflows and queues on top of Postgres. For queues, we use FOR UPDATE SKIP LOCKED for task dispatch, combined with exponential backoff and jitter to reduce contention under high load when many workers are polling the same table.

Would love to hear feedback from you and others building similar systems.

[1] https://github.com/dbos-inc/dbos-transact-py

mind-blight · 2 months ago
Nice! I'm using DBOS and am a little active on the discord. I was just wondering how y'all handled this under the hood. Glad to hear I don't have to worry much about this issue
eatonphil · 2 months ago
Why not read the WAL?
singron · 2 months ago
Polling is the way to go, but it's also very tricky to get right. In particular, it's non-trivial to make a reliable queue that's also fast when transactions are held open and vacuum isn't able to clean tuples. E.g. "get the first available tuple" might have to skip over 1000s of dead tuples.

Holding transactions open is an anti-pattern for sure, but it's occasionally useful. E.g. pg_repack keeps a transaction open while it runs, and I believe vacuum also holds an open transaction part of the time too. It's also nice if your database doesn't melt whenever this happens on accident.

time0ut · 2 months ago
An approach that has worked for me is to hash partition the table and have each worker look for work in one partition at a time. There are a number of strategies depending on how you manage workers. This allows you to only consider 1/Nth of the dead tuples, where N is the number of partitions, when looking for work. It does come at the cost of strict ordering, but there are many use cases where strict ordering is not required. The largest scale implementation of this strategy that I have done had 128 partitions with a worker per partition pumping through ~100 million tasks per day.
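
A rough sketch of the layout (hypothetical names and an arbitrary partition count):

  CREATE TABLE tasks (
      id      bigint NOT NULL,
      status  text   NOT NULL DEFAULT 'pending',
      payload jsonb
  ) PARTITION BY HASH (id);

  CREATE TABLE tasks_p0 PARTITION OF tasks FOR VALUES WITH (MODULUS 8, REMAINDER 0);
  CREATE TABLE tasks_p1 PARTITION OF tasks FOR VALUES WITH (MODULUS 8, REMAINDER 1);
  -- ... tasks_p2 through tasks_p7

  -- each worker polls only "its" partition, so it only wades through that
  -- partition's share of the dead tuples
  SELECT id, payload
  FROM tasks_p0
  WHERE status = 'pending'
  ORDER BY id
  FOR UPDATE SKIP LOCKED
  LIMIT 100;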

I also found LISTEN/NOTIFY to not work well at this scale and used a polling based approach with a back off when no work was found.

Quite an interesting problem and a bit challenging to get right at scale.

atombender · 2 months ago
Dead tuples are a real and significant problem, not just because they have to be skipped, but because the statistics that drive the planner don't account for them.

I found this out the hard way when I had a simple query that suddenly got very, very slow on a table where the application would constantly do a `SELECT ... FOR UPDATE SKIP LOCKED` and then immediately delete the rows after a tiny bit of processing.

It turned out that with a nearly empty table of about 10-20k dead tuples, the planner switched to using a different index scan, and would overfetch tons of pages just to discard them, as they only contained dead tuples. What I didn't realize is that the planner statistics doesn't care about dead tuples, and ANALYZE doesn't take them into account. So the planner started to think the table was much bigger than it actually was.

It's really important for these use cases to tweak the autovacuum settings (which can be set on a per-table basis) to be much more aggressive, so that under high load, the vacuum runs pretty much continuously.

Another option is to avoid deleting rows, but instead use a column to mark rows as complete, which together with a partial index can avoid dead tuples. There are both pros and cons; it requires doing the cleanup (and VACUUM) as a separate job.
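
Something like this, with a hypothetical jobs table:

  ALTER TABLE jobs ADD COLUMN done boolean NOT NULL DEFAULT false;

  -- the index only ever contains rows still waiting to be processed
  CREATE INDEX jobs_pending_idx ON jobs (id) WHERE NOT done;

  -- instead of DELETE, flip the flag; a separate job prunes and vacuums later
  UPDATE jobs SET done = true WHERE id = 42;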

leontrolski · 2 months ago
> also fast when transactions are held open

In my linked example, on getting the item from the queue, you immediately set the status to something that you're not polling for - does Postgres still have to skip past these tuples (even in an index) until they're vacuumed up?

broken_broken_ · 2 months ago
I have implemented polling against a cluster of mixed mariadb/mysql databases which do not offer listen/notify. It was a pain in the neck to get right.

- The batch size needs to be adaptive for performance, latency, and recovering smoothly after downtime.

- The polling timeouts, frequency etc the same.

- You need to avoid hysteresis.

- You want to be super careful about not disturbing the main application by placing heavy load on the database or accidentally locking tables/rows

- You likely want multiple distributed workers in case of a network partition to keep handling events

It’s hard to get right especially when the databases at the time did not support SKIP LOCKED.

In retrospect I wish I had listened to the WAL. Much easier.

cpursley · 2 months ago
Have you played with pgmq? It's pretty neat: https://github.com/pgmq/pgmq
edoceo · 2 months ago
Another thing for @leontrolski to add to the benchmarks - which I cannot wait to read.
RedShift1 · 2 months ago
I use polling with a backoff of up to one minute. When a workload is done, it immediately polls for more work. If nothing is found, it waits 5 seconds, then 10 seconds, and so on up to one minute; from then on it polls every minute until it finds work again, and the backoff timer resets to 0.
TkTech · 2 months ago
With that experience behind you, would you have feedback for Chancy[1]? It aims to be a batteries-included offering for postgres+python, aiming for hundreds of millions of jobs a day, not massive horizontal worker scaling.

It both polls (configurable per queue) and supports listen/notify, which is used simply to inform workers that they can wake up early to trigger polling, and this can be turned off globally with a notifications=false flag.

[1]: https://github.com/tktech/chancy

cryptonector · 2 months ago
Instead of LISTEN/NOTIFY you could listen to the wal / logical replication stream.

Or you could have a worker whose only job is to listen to the wal / logical replication stream and then NOTIFY. Being the only one to do so would not burden other transactions.

Or you could have a worker whose only job is to listen to the wal / logical replication stream and then publish on some non-PG pubsub system.
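
As a sketch of the dedicated-worker variant (made-up slot name, using the built-in test_decoding plugin just for illustration):

  -- create the slot once
  SELECT pg_create_logical_replication_slot('notify_worker', 'test_decoding');

  -- worker loop: pull whatever has committed since the last call, then
  -- NOTIFY (or publish to an external bus) based on the rows it sees
  SELECT lsn, xid, data
  FROM pg_logical_slot_get_changes('notify_worker', NULL, NULL);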

aurumque · 2 months ago
I'll take the shameless plug. Thank you for putting this together! Very helpful overview of pg locks.
notarobot123 · 2 months ago
It's funny how "shameless plug" actually means "excuse the self-promotion" and implies at least a little bit of shame even when the reference is appropriate and on-topic.
sorentwo · 2 months ago
Polling requires something persistent to check. That requires creating tuples, and most likely deleting them after they’ve been consumed. That puts pressure on the database and requires vacuuming in ways that pubsub doesn’t, because pubsub is entirely ephemeral.

Not to mention that pubsub allows multiple consumers for a single message, whereas FOR UPDATE is single consumer by design.

cpursley · 2 months ago
Right, plus there are character limitations (payload size). This is why I prefer listening to the Postgres WAL for database changes:

https://github.com/cpursley/walex?tab=readme-ov-file#walex (there's a few useful links in here)

williamdclt · 2 months ago
I found recently that you can write directly to the WAL with transactional guarantees, without writing to an actual table. This sounds like it would be amazing for queue/outbox purposes, as the normal approaches of actually inserting data in a table cause a lot of resource usage (autovacuum is a major concern for these use cases).

Can’t find the function that does that, and I’ve not seen it used in the wild yet, idk if there’s gotchas

Edit: found it, it’s pg_logical_emit_message
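
A minimal sketch of the outbox usage as I understand it (made-up table and payload):

  BEGIN;
  UPDATE orders SET status = 'shipped' WHERE id = 42;
  -- first argument true = transactional: the message only appears in the
  -- logical decoding stream if this transaction commits
  SELECT pg_logical_emit_message(
    true,
    'outbox',
    '{"event":"order_shipped","order_id":42}'
  );
  COMMIT;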

gunnarmorling · 2 months ago
pg_logical_emit_message() is how I recommend users on Postgres to implement the outbox pattern [1]. No table overhead as you say, no need for housekeeping, etc. It has some other cool applications, e.g. providing application-specific metadata for CDC streams or transactional logging, wrote about it at [2] a while ago. Another one is making sure replication slots can advance also if there's no traffic in the database they monitor [3].

[1] https://speakerdeck.com/gunnarmorling/ins-and-outs-of-the-ou...

[2] https://www.infoq.com/articles/wonders-of-postgres-logical-d...

[3] https://www.morling.dev/blog/mastering-postgres-replication-...

cyberax · 2 months ago
One annoying thing is that there is no counterpart for an operation to wait and read data from WAL. You can poll it using pg_logical_slot_get_binary_changes, but it returns immediately.

It'd be nice to have a method that would block for N seconds waiting for a new entry.

You can also use a streaming replication connection, but it often is not enabled by default.

cryptonector · 2 months ago
`pg_logical_emit_message()` is great and better than `NOTIFY` in terms of how it works, but...

`pg_logical_emit_message()` perpetuates/continues the lack of authz around `NOTIFY`.

denysonique · 2 months ago
For node.js users there is postgres.js that can listen to the Postgres WAL and emit node events that can be handled by application code.
meesles · 2 months ago
Yeah until vendors butcher Postgres replication behaviors and prevent common paths of integrating these capabilities into other tools. Looking at you AWS
FZambia · 2 months ago
Many here recommend using Kafka or RabbitMQ for real-time notifications. While these tools work well with a relatively stable, limited set of topics, they become costly and inefficient when dealing with a large number of dynamic subscribers, such as in a messaging app where users frequently come and go. In RabbitMQ, queue bindings are resource-intensive, and in Kafka, creating new subscriptions often triggers expensive rebalancing operations.

I've seen a use case for a messenger app with 100k concurrent subscribers where the developers used RabbitMQ with an individual queue for each user. It ran at 60 CPUs on the Rabbit side in the normal case, and during mass reconnections of users (due to some proxy reload in the infra) it took up to several minutes for users to reconnect. I suggested switching to https://github.com/centrifugal/centrifugo with the Redis engine (which combines PUB/SUB with Redis streams for individual queues), and it went down to 0.3 CPU on the Redis side. Now the system serves about 2 million concurrent connections.
odie5533 · 2 months ago
I wonder who works on centrifugo. Could be anyone.
CaliforniaKarl · 2 months ago
I appreciate this post for two reasons:

* It gives an indication of how much you need to grow before this Postgres functionality starts being a blocker.

* Folks encountering this issue—and its confusing log line—in the future will be able to find this post and quickly understand the issue.

Gigachad · 2 months ago
Sounds like ChatGPT appreciated the post
acdha · 2 months ago
If you think they’re a bot, flag and move on. No need for a derail about writing style.
CaliforniaKarl · 2 months ago
I did not use ChatGPT—nor any AI—in writing the post. I'm curious, would you mind emailing—or replying—with what made you think that it was written by AI? Or why you do not believe my statement?
yrds96 · 2 months ago
I'm ESL, so I often check my grammar with ChatGPT, and 99% of the time it includes em dashes in the corrected sentences, which I remove or just replace with commas or hyphens to sound more natural. So maybe this was not written entirely by ChatGPT but just revised by it.
jjgreen · 2 months ago
Just for the em-dashes? Some humans also use them.
andrewstuart · 2 months ago
There’s lots of ways to invoke NOTIFY without doing it from within the transaction doing the work.

The post author is too focused on using NOTIFY in only one way.

This post fails to explain WHY they are sending a NOTIFY. Not much use telling us what doesn’t work without telling us the actual business goal.

It’s crazy to send a notify for every transaction, they should be debounced/grouped.

The point of a NOTIFY is to let some other system know something has changed. Don’t do it every transaction.

0xCMP · 2 months ago
Agreed, I am struggling to understand why "it does not scale" is not "we used it wrong and hit the point where it's a problem" here.

Like if it needs to be very consistent I would use an unlogged table (since we're worried about "scale" here) and then `FOR UPDATE SKIP LOCKED` like others have mentioned. Otherwise what exactly is notify doing that can't be done after the first transaction?

Edit: in fact, how can they send an HTTP call for something and not be able to do a `NOTIFY` after as well?

One possible way I could understand what they wrote is that somewhere in their code, within the same transaction, there are notifies which conditionally trigger and it would be difficult to know which ones to notify again in another transaction after the fact. But they must know enough to make the HTTP call, so why not NOTIFY?

andrewstuart · 2 months ago
Agreed.

They’re using it wrong and blaming Postgres.

Instead they should use Postgres properly and architect their system to match how Postgres works.

There’s correct ways to notify external systems of events via NOTIFY, they should use them.

thom · 2 months ago
Yeah, the way I've always used LISTEN/NOTIFY is just to tell some pool of workers that they should wake up and check some transactional outbox for new work. False positives are basically harmless and therefore don't need to be transactional. If you're sending sophisticated messages with NOTIFY (which is a reasonable thing to think you can do) you're probably headed for pain at some point.
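
i.e. something roughly like this (hypothetical names), where the notification itself carries nothing important:

  -- producer: the real work lives in a table, the NOTIFY is just a nudge
  INSERT INTO outbox (payload) VALUES ('{"kind":"welcome_email"}');
  NOTIFY outbox_wakeup;

  -- workers: LISTEN outbox_wakeup; when woken (or on a timer), claim work with
  SELECT id, payload FROM outbox
  WHERE processed_at IS NULL
  FOR UPDATE SKIP LOCKED
  LIMIT 10;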
tomrod · 2 months ago
Assuming you skip SELECT-only transactions, or you require logging on them because your regulated industry had bad auditors, then every transaction changes something.