"JCW says the problem reaches into the upper echelons of Israeli politics as well. They note that Yaakov Litzman, leader of an ultra-orthodox alliance in Israel's legislature and the current minister of health, has been accused of preventing the deportation of a Malka Leifer, a former head teacher at a Jewish school in Australia, where she is wanted on multiple charges of child sexual abuse."
Why is this uncooperative behavior tolerated?
I did have some great luck producing quite useful and impactful code. But I also lost time chasing tiny changes.
It seems like an idea worth exploring formally, but I haven't seen that done anywhere. Is this a case of "perception of winning" while one is actually losing? Or is it that the winning is in aggregate, and people who like LLM-based coding are just more tolerant of the volatility it takes to get there?
The only study I've seen testing the actual observable impact on velocity showed a modest decrease in output for experienced engineers who were using LLMs for coding.
[1] https://www-cdn.anthropic.com/58284b19e702b49db9302d5b6f135a...
Just today I was playing around with modding Cyberpunk 2077 and was looking for a way to programmatically spawn NPCs in redscript. It was hard to figure out, but I managed. ChatGPT 5 just hallucinated some APIs even after doing "research" and repeatedly being called out.
After 30 minutes of ChatGPT wasting my time, I accepted that I was on my own. It could've been 1 minute.
You're not alone in thinking this. I'm sure this has been considered within the frontier AI labs and has surely been tried. The fact that it's so uncommon must mean something about what these models are capable of, right?
If that's the case, LLMs cannot replace these primary source summarizing clowns fast enough.
[1] https://nanda.media.mit.edu/ai_report_2025.pdf