Readit News
ilitirit commented on Vibe Coding Is the Worst Idea of 2025 [video]   youtube.com/watch?v=1A6uP... · Posted by u/tomwphillips
ilitirit · 10 days ago
I don't really have that much of an issue with vibe coding as an appropriate tool in experienced hands. I think the worst ideas in 2025 are probably related to IT execs pushing AI in the wrong ways, or people espousing vibe coding as some sort of software development panacea.
ilitirit commented on Aligned, multiple-transient events in the First Palomar Sky Survey   researchgate.net/publicat... · Posted by u/ilitirit
ilitirit · a month ago
Pre-print of a paper that studied 1950 "transients" which - in tl;dr terms - might be evidence of artificial objects in orbit before the satellite era.

Recent comment from one of the main authors:

https://x.com/DrBeaVillarroel/status/1949780669141332205

Previous work: https://www.nature.com/articles/s41598-021-92162-7

ilitirit commented on Postgres LISTEN/NOTIFY does not scale   recall.ai/blog/postgres-l... · Posted by u/davidgu
ants_a · 2 months ago
In that snippet are links to Postgres docs and two blog posts, one being the blog post under discussion. None of those contain the information needed to make the presented claims about throughput.

To make those claims it's necessary to know what work is being done while the lock is held. This includes a bunch of various resource cleanup, which should be cheap, and RecordTransactionCommit() which will grab a lock to insert a WAL record, wait for it to get flushed to disk and potentially also for it to get acknowledged by a synchronous replica. So the expected throughput is somewhere between hundreds and tens of thousands of notifies per second. But as far as I can tell this conclusion is only available from PostgreSQL source code and some assumptions about typical storage and network performance.
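
To make the commit-time behaviour concrete, here is a minimal sketch (Python + psycopg2; the DSN, channel and payload names are made up, nothing here is taken from the post) showing that a NOTIFY is only handed to listeners once the notifying transaction commits - which is exactly where the cleanup and RecordTransactionCommit() work described above happens:

  # Minimal sketch (not from the post): NOTIFY is transactional, so listeners
  # only see it once the notifying transaction commits - the point where the
  # notify-queue work and RecordTransactionCommit() happen.
  import select
  import psycopg2

  listener = psycopg2.connect("dbname=test")   # placeholder DSN
  listener.autocommit = True
  with listener.cursor() as cur:
      cur.execute("LISTEN demo_channel;")

  notifier = psycopg2.connect("dbname=test")
  with notifier.cursor() as cur:
      cur.execute("SELECT pg_notify('demo_channel', 'hello');")
      # Nothing is delivered yet - the notification is queued at commit time.

  notifier.commit()   # this is the serialized step described above

  # Drain the notification on the listening connection.
  if select.select([listener], [], [], 5) != ([], [], []):
      listener.poll()
      for n in listener.notifies:
          print(n.channel, n.payload)
      listener.notifies.clear()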

ilitirit · 2 months ago
> In that snippet are links to Postgres docs and two blog posts

Yes, that's what a snippet generally is. The generated document from my very basic research prompt is over 300k in length. There are also sources from the official mailing lists, graphile, and various community discussions.

I'm not going to post the entire output because it is completely beside the point. In my original post, I explicitly asked "What is the qualitative and quantitative nature of relevant workloads?" exactly because it's not clear from the blog post. If, for example, they only started hitting these issues with 10k simultaneous reads/writes, then it's reasonable to assume that many people who don't have such high workloads won't really care.

The ChatGPT snippet was included simply to show what ChatGPT Research told me. Nothing more. I basically typed a 2-line prompt and asked it to include the original article. Anyone who thinks that what I posted is authoritative in any way shouldn't be considering doing this type of work.

ilitirit commented on Postgres LISTEN/NOTIFY does not scale   recall.ai/blog/postgres-l... · Posted by u/davidgu
ilitirit · 2 months ago
What is wrong with you? Why would you even bother posting a comment like this?

Maybe you also don't know what ChatGPT Research is (the Enterprise version, if you really need to know), or what Executive Summary implies, but here's a snippet of the 28 sources used:

https://imgur.com/a/eMdkjAh

ilitirit commented on Postgres LISTEN/NOTIFY does not scale   recall.ai/blog/postgres-l... · Posted by u/davidgu
ilitirit · 2 months ago
> The structured data gets written to our Postgres database by tens of thousands of simultaneous writers. Each of these writers is a “meeting bot”, which joins a video call and captures the data in real-time.

Maybe I missed it in some folded-up embedded content, or some graph (or maybe I'm just blind...), but is it mentioned at which point they started running into issues? The quoted bit about "10s of thousands of simultaneous writers" is all I can find.

What is the qualitative and quantitative nature of relevant workloads? Depending on the answers, some people may not care.

I asked ChatGPT to research it and this is the executive summary:

  For PostgreSQL’s LISTEN/NOTIFY, a realistic safe throughput is:

  Up to ~100–500 notifications/sec: Handles well on most systems with minimal tuning. Low risk of contention.

  ~500–2,000 notifications/sec: Reasonable with good tuning (short transactions, fast listeners, few concurrent writers). May start to see lock contention.

  ~2,000–5,000 notifications/sec: Pushing the upper bounds. Requires careful batching, dedicated listeners, possibly separate Postgres instances for pub/sub.

  >5,000 notifications/sec: Not recommended for sustained load. You’ll likely hit serialization bottlenecks due to the global commit lock held during NOTIFY.
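
Those figures are rough estimates, so if this matters for your workload it's probably worth measuring on your own hardware. A crude sketch of such a probe (Python + psycopg2; the DSN, channel name and thread count are placeholders, not anything from the article):

  # Crude throughput probe (a sketch, not a rigorous benchmark): WRITERS threads
  # each commit a transaction containing one NOTIFY, and we count commits/sec.
  # DSN, channel name and thread count are placeholders.
  import threading
  import time
  import psycopg2

  DSN = "dbname=test"
  CHANNEL = "bench_channel"
  WRITERS = 16
  DURATION = 10  # seconds

  counts = [0] * WRITERS
  stop = threading.Event()

  def writer(i):
      conn = psycopg2.connect(DSN)
      cur = conn.cursor()
      while not stop.is_set():
          cur.execute("SELECT pg_notify(%s, %s)", (CHANNEL, str(i)))
          conn.commit()               # the commit-time serialization point
          counts[i] += 1
      conn.close()

  threads = [threading.Thread(target=writer, args=(i,)) for i in range(WRITERS)]
  for t in threads:
      t.start()
  time.sleep(DURATION)
  stop.set()
  for t in threads:
      t.join()

  print(f"{sum(counts) / DURATION:.0f} notifying commits/sec with {WRITERS} writers")

Re-running it with more writer threads (and a listener attached) should show whether throughput keeps scaling or flattens out, which would be the contention the article describes.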

ilitirit commented on America’s incarceration rate is in decline   theatlantic.com/ideas/arc... · Posted by u/paulpauper
ilitirit · 2 months ago
I'd like to see stats comparing how many people were arrested for petty crimes (e.g. marijuana offences, which aren't even crimes in some contexts any more) back then vs now.
ilitirit commented on 2025 State of AI Code Quality   qodo.ai/reports/state-of-... · Posted by u/cliffly
ilitirit · 3 months ago
I currently have a big problem with AI-generated code and some of the junior devs on my team. Our execs keep pushing "vibe-coding" and agentic coding, but IMO these are just tools. And if you don't know how to use the tools effectively, you're still gonna generate bad code. One of the problems is that the devs don't realise why it's bad code.

As an example, I asked one of my devs to implement a batching process to reduce the number of database operations. He presented extremely robust, high-quality code and unit tests. The problem was that it was MASSIVE overkill.

The AI generated a new service class, a background worker, several hundred lines of code in the main file, and entire unit test suites.

I rejected the PR and implemented the same functionality by adding two new methods and one extra field.
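
For illustration, the "simple" version amounts to little more than a buffer plus a flush. A rough sketch of that shape (Python, using psycopg2's execute_values; the class, table and column names are made up, and this is not the actual code from the PR):

  # Illustrative only - not the actual code from that PR. One buffer field plus
  # two methods is often all "batching" needs: add() accumulates rows, flush()
  # writes them in a single statement. Names and batch size are made up.
  import psycopg2
  from psycopg2.extras import execute_values

  class RecordWriter:
      def __init__(self, conn, batch_size=500):
          self.conn = conn
          self.batch_size = batch_size
          self._pending = []                 # the one extra field

      def add(self, record):                 # new method 1
          self._pending.append(record)
          if len(self._pending) >= self.batch_size:
              self.flush()

      def flush(self):                       # new method 2
          if not self._pending:
              return
          with self.conn.cursor() as cur:
              execute_values(
                  cur,
                  "INSERT INTO meeting_events (bot_id, payload) VALUES %s",
                  self._pending,
              )
          self.conn.commit()
          self._pending.clear()

Callers just keep calling add() and do a final flush() for the tail; nothing in this sketch needs a separate service class or background worker.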

Now I often hear comments about how AI can generate exactly what I want if I just use the correct prompts. OK, how do I explain that to a junior dev? How do they distinguish between "good" simple and "bad" simple (or complex)?

Furthermore, in my own experience, LLMs tend to pick up on key phrases or technologies, then build their own context around what they think you need (e.g. "Batching", "Kafka", "event-driven", etc). By the time you've refined your questions to the point where the LLM generates something that resembles what you want, you realise that you've basically pseudo-coded the solution in your prompt - if you're lucky. More often than not the LLM responses just start degrading massively to the point where they become useless and you need to start over. This is also something that junior devs don't seem to understand.

I'm still bullish on AI-assisted coding (and AI in general), but I'm not a fan at all of the vibe/agentic coding push by IT execs.

ilitirit commented on Someone at YouTube needs glasses   jayd.ml/2025/04/30/someon... · Posted by u/jaydenmilne
Havoc · 4 months ago
Yes, for all their A/B testing they could really do with a bit more common sense.

Like why do thumbnails have an invisible overlay that appears on hover, hijacks the click, and takes you to a support page about paid product placement?

I'm clicking on the thumbnail to watch the video not for a jarring detour off the youtube page to a boring help article. Honestly WTF. Maybe the UI designers don't use youtube themselves?

This freakin page:

https://support.google.com/youtube/answer/10588440?nohelpkit...

ilitirit · 4 months ago
100%. YouTube is a chapter in a textbook about how to destroy UX.

I actually cancelled my YT Premium sub after this latest change with the thumbnails. I realised it doesn't really offer me that much value, and often using the platform just annoys me.

ilitirit commented on Bored of It   paulrobertlloyd.com/2025/... · Posted by u/NotInOurNames
ilitirit · 5 months ago
Here's some of what I think is my personal best advice:

Learn to live in the gray areas. Don't be dogmatic. The world isn't black and white. Take some parts of the black and white. And, don't be afraid to change your mind about some things.

This may sound obvious to some of you, and sure, in theory this is simple. But in practice? Definitely not, at least in my experience. It requires a change in mindset and worldview, which generally becomes harder as you age ("because you want to conserve the way of life you enjoy").

u/ilitirit

Karma: 2399 · Cake day: March 3, 2008