Readit News
jygg4 commented on ATMs didn’t kill bank teller jobs, but the iPhone did   davidoks.blog/p/why-the-a... · Posted by u/colinprince
bwestergard · 2 days ago
Quantitative measures of this are very poor, and even those are mixed.

My subjective assessment is that agents like Copilot got better because of better harnesses and fine-tuning of models to use those harnesses. But they are improving not in the direction of labor substitution, but rather in the direction of significant, though not earth-shaking, complementarity. That complementarity is stronger for more experienced developers.

jygg4 · 2 days ago
Agree. Nice to see a post with proper economic thought on the topic.
jygg4 commented on Apple to celebrate 50 years of thinking different   apple.com/newsroom/2026/0... · Posted by u/meetpateltech
chamsom · 2 days ago
In a way, they're celebrating the art of thinking slower, more deliberately. This is almost counter to the velocity-driven and metric-pulled nature of the average technology company.

It's pretty typical to see the effort to reach "the next big thing" constitute flooding your organization with inputs.

jygg4 · 2 days ago
If you’re not being deliberate, you’re not thinking, in my view.

The act of thinking is to reach deep inside oneself.

jygg4 commented on Are LLM merge rates not getting better?   entropicthoughts.com/no-s... · Posted by u/4diii
marcuschong · 2 days ago
That's a big problem with very specific manifestations. My startup helps customers handle regulatory compliance, also by forwarding complex questions to a pool of consultants.

We've now compared more than a hundred consultant replies with those of GPT Pro, and the quality is roughly the same. Sometimes a little worse, sometimes a little better. Always more detailed. Never unacceptable.

But how to convince our customers that we have the right technology and know how to use it appropriately? We're trying, but it's not easy.

Part of that's accountability. In the event of the LLM producing rubbish, as rare as it may be, who is accountable? There is no person, and no reputation, attached to it.

jygg4 · 2 days ago
Yup, exactly.

Being able to hold someone liable for an F-up is how we have been able to function as a society and get to where we are today.

jygg4 commented on Preliminary data from a longitudinal AI impact study   newsletter.getdx.com/p/ai... · Posted by u/donutshop
jongjong · 3 days ago
As I've said before, AI is a force multiplier. A 10x developer is now a 100x developer and a -10x developer (complexity maker/value destroyer) is now a -100x developer.

I can understand why a lot of companies are cutting junior roles. What AI does is it automates most of the stuff that juniors are good at (coding fast) but not much of the stuff that the seniors are good at.

That said, I've worked with some juniors who managed to navigate this shift; they do so by focusing on higher-order thinking and developing a sense of what's important through interacting with senior engineers. Unfortunately, it raises the talent bar for juniors: they have to become more intelligent, not in a puzzle-solving way, but in a more architectural, big-picture sort of way; almost like entrepreneurial thinking, but more detailed/complex.

LLMs don't have a worldview; this means that they miss a lot of inconsistencies and logical contradictions. Also, most critically, LLMs don't know what's important (at least not accurately enough) so they can't prioritize effectively and they make a lot of bad decisions.

It's kind of interesting for me because a lot of the areas where I had a contrarian opinion in the field of software development, I now see LLMs getting trapped into those and getting bad results. It's like all my contrarian opinions became much more valuable.

jygg4 · 2 days ago
Ass-kissing comments aside… first of all, define a 10x developer.

We’ve all seen this phrase thrown around and it’s useless. Define the characteristics of one.

I’d argue the hiring of juniors is dropping because why should a firm ‘invest’ in someone who is likely to leave before they become of benefit to the firm? Mobility in the software engineering profession harms it. The fanfare around LLMs strengthens that argument via the leverage provided to seniors.

E.g. in accounting, people tend to stick around in one firm for a long time… hence the expense is an investment. Firms invest in juniors to have future directors, partners, etc. This is not a model that applies to software engineering. It may have done at one point, but not anymore.

jygg4 commented on Are LLM merge rates not getting better?   entropicthoughts.com/no-s... · Posted by u/4diii
Havoc · 2 days ago
As they become more capable, people’s commits will also become more ambitious.

So I’d say fairly flat commit-acceptance numbers make sense, even in the context of improving LLMs.

jygg4 · 2 days ago
Indeed. Why is this post downvoted? There are always trade-offs taking place; it’s good to call them out.
jygg4 commented on Are LLM merge rates not getting better?   entropicthoughts.com/no-s... · Posted by u/4diii
orwin · 2 days ago
> People say AI is “good at front end”

I only say that because I'm a shit frontend dev. Honestly, I'm not that bad anymore, but I'm still shit, and the AI will probably generate better code than I will.

jygg4 · 2 days ago
As long as humans are needed to review code, it sounds like your role evolves toward prompting and reviewing.

Which is akin to driving a car - the motor vehicle itself doesn’t know where to go. It requires you to prompt via steering and braking, etc., and then to review what is happening in response.

That’s not necessarily a bad thing - reviewing code ultimately matters most, as long as what is produced is more often than not correct and legible. That said, this is a different issue, one for which there isn’t a consensus among software engineers.

jygg4 commented on Are LLM merge rates not getting better?   entropicthoughts.com/no-s... · Posted by u/4diii
aerhardt · 2 days ago
I feel that two things are true at the same time:

1) Something happened during 2025 that made the models (or crucially, the wrapping terminal-based apps like Claude Code or Codex) much better. I only type in the terminal anymore.

2) The quality of the code is still quite often terrible. Quadruple-nested control flow abounds. Software architecture, even in rather small scopes, is unsound. People say AI is “good at front end” but I see the worst kind of atrocities there (a few days ago Codex 5.3 tried to inject a massive HTML element with a CSS ::before hack, rather than properly refactoring markup)

Two forces feel true simultaneously but in permanent tension. I still cannot make up my mind and see the synthesis in the dialectic: where this is truly going, and whether we’re meaningfully moving forward or mostly moving in circles.

jygg4 · 2 days ago
The models lose the ability to inject subtle and nuanced stuff as they scale up, is what I’ve observed.
jygg4 commented on Are LLM merge rates not getting better?   entropicthoughts.com/no-s... · Posted by u/4diii
cj · 2 days ago
I agree with your sentiment, but I think we've yet to see the full application of the current technology. (Even if LLMs themselves don't improve, there's significant opportunity for people to use it in ways not currently being done)
jygg4 · 2 days ago
The issue with LLMs is trust.

I don’t see that ever going away. Humans have learned to trust other humans over a long time scale, with rules in place to control behaviour.

u/jygg4 · Karma: 5 · Cake day: March 12, 2026