Recent comment from one of the main authors:
https://x.com/DrBeaVillarroel/status/1949780669141332205
Previous work: https://www.nature.com/articles/s41598-021-92162-7
To make those claims it's necessary to know what work is being done while the lock is held. This includes various resource cleanup, which should be cheap, and RecordTransactionCommit(), which grabs a lock to insert a WAL record, waits for it to get flushed to disk, and potentially also waits for it to get acknowledged by a synchronous replica. So the expected throughput is somewhere between hundreds and tens of thousands of notifies per second. But as far as I can tell, this conclusion is only available from the PostgreSQL source code and some assumptions about typical storage and network performance.
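For what it's worth, you can get a ballpark number empirically instead of reasoning from the source. Here's a minimal sketch in Python with psycopg2 that hammers pg_notify() in autocommit mode, so every call pays the full commit path described above (the DSN and channel name are made up for illustration):

```python
# Crude single-connection throughput probe for NOTIFY. In autocommit mode
# each statement is its own transaction, so every pg_notify() goes through
# the commit path (queue lock, WAL insert, WAL flush) discussed above.
import time
import psycopg2

conn = psycopg2.connect("dbname=test")  # hypothetical DSN, adjust as needed
conn.autocommit = True

N = 10_000
with conn.cursor() as cur:
    start = time.perf_counter()
    for i in range(N):
        cur.execute("SELECT pg_notify(%s, %s)", ("bench_channel", str(i)))
    elapsed = time.perf_counter() - start

print(f"{N / elapsed:,.0f} notifies/sec on a single connection")
```

Note that this only measures a single writer; the serialization the post is about only shows up once many backends commit NOTIFYs concurrently, so you'd want to run several of these in parallel to actually see the contention.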
Yes, that's what a snippet generally is. The generated document from my very basic research prompt is over 300k characters long. There are also sources from the official mailing lists, Graphile, and various community discussions.
I'm not going to post the entire output because it is completely beside the point. In my original post, I explicitly asked "What is the qualitative and quantitative nature of relevant workloads?" exactly because it's not clear from the blog post. If, for example, they only started hitting these issues at 10k simultaneous reads/writes, then it's reasonable to assume that many people who don't have such high workloads won't really care.
The ChatGPT snippet was included to show that this is what ChatGPT Research told me. Nothing more. I basically typed a two-line prompt and asked it to include the original article. Anyone who thinks that what I posted is authoritative in any way shouldn't be considering doing this type of work.
Maybe you also don't know what ChatGPT Research is (the Enterprise version, if you really need to know), or what an Executive Summary implies, but here's a snippet of the 28 sources used:
Maybe I missed it in some folded-up embedded content or some graph (or maybe I'm just blind...), but is it mentioned at which point they started running into issues? The quoted bit about "10s of thousands of simultaneous writers" is all I can find.
What is the qualitative and quantitative nature of relevant workloads? Depending on the answers, some people may not care.
I asked ChatGPT to research it and this is the executive summary:
For PostgreSQL’s LISTEN/NOTIFY, a realistic safe throughput is:
Up to ~100–500 notifications/sec: Handles well on most systems with minimal tuning. Low risk of contention.
~500–2,000 notifications/sec: Reasonable with good tuning (short transactions, fast listeners, few concurrent writers). May start to see lock contention.
~2,000–5,000 notifications/sec: Pushing the upper bounds. Requires careful batching, dedicated listeners, possibly separate Postgres instances for pub/sub.
>5,000 notifications/sec: Not recommended for sustained load. You’ll likely hit serialization bottlenecks due to the global commit lock held during NOTIFY.
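To make the "careful batching" point in that summary concrete: instead of issuing one NOTIFY per event, you can coalesce events and send a single notification per transaction or per batch. A minimal sketch, again Python/psycopg2 with invented channel and payload names:

```python
# Sketch: one NOTIFY per batch instead of one per event. The commit-time
# queue bookkeeping is then paid once per batch rather than once per event.
import json
import psycopg2

def publish_batch(conn, events):
    """Announce a whole batch of events with a single notification."""
    with conn.cursor() as cur:
        # NOTIFY payloads are capped at ~8000 bytes by default, so for big
        # batches send a short "new events" ping and let listeners pull the
        # actual rows from a table instead of shipping them in the payload.
        cur.execute("SELECT pg_notify(%s, %s)", ("events", json.dumps(events)))
    conn.commit()
```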
As an example, I asked one of my devs to implement a batching process to reduce the number of database operations. He presented extremely robust, high-quality code and unit tests. The problem was that it was MASSIVE overkill.
AI generated a new service class, a background worker, several hundred lines of code in the main file, and entire unit test suites.
I rejected the PR and implemented the same functionality by adding two new methods and one extra field.
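For concreteness, the shape of my version was roughly the following (class and method names are hypothetical, not the actual code): one extra field to buffer pending records, and two small methods on the class we already had.

```python
# Hypothetical sketch of the "two methods and one field" fix, versus the
# AI-generated service class + background worker.
class RecordStore:                      # pre-existing class in this sketch
    def __init__(self, db, batch_size=100):
        self.db = db
        self.batch_size = batch_size
        self._pending = []              # the one extra field

    def add(self, record):
        # New method 1: buffer writes instead of hitting the DB per record.
        self._pending.append(record)
        if len(self._pending) >= self.batch_size:
            self.flush()

    def flush(self):
        # New method 2: one DB round-trip per batch; also call on shutdown.
        if self._pending:
            self.db.insert_many(self._pending)
            self._pending.clear()
```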
Now I often hear comments that AI can generate exactly what I want if I just use the correct prompts. OK, how do I explain that to a junior dev? How do they distinguish between "good" simple and "bad" simple (or complex)?

Furthermore, in my own experience, LLMs tend to pick up on key phrases or technologies and then build their own context around what they think you need (e.g. "batching", "Kafka", "event-driven", etc.). By the time you've refined your prompts to the point where the LLM generates something that resembles what you want, you realise that you've basically pseudo-coded the solution in your prompt, if you're lucky. More often than not, the LLM's responses just degrade massively to the point where they become useless and you need to start over. This is also something that junior devs don't seem to understand.
I'm still bullish on AI-assisted coding (and AI in general), but I'm not a fan at all of the vibe/agentic coding push by IT execs.
Like, why do thumbnails have an invisible overlay that appears on hover, hijacks the click, and takes you to a support page about paid product placement?
I'm clicking on the thumbnail to watch the video, not for a jarring detour off the YouTube page to a boring help article. Honestly, WTF. Maybe the UI designers don't use YouTube themselves?
This freakin page:
https://support.google.com/youtube/answer/10588440?nohelpkit...
I actually cancelled my YT Premium sub after this latest change with the thumbnails. I realised it doesn't really offer me that much value, and using the platform often just annoys me.
Learn to live in the gray areas. Don't be dogmatic. The world isn't black and white. Take some parts of the black and some of the white. And don't be afraid to change your mind about some things.
This may sound obvious to some of you, and sure, in theory it's simple. But in practice? Definitely not, at least in my experience. It requires a change in mindset and worldview, which generally becomes harder as you age ("because you want to conserve the way of life you enjoy").