It's pretty typical to see the effort to reach "the next big thing" amount to flooding your organization with inputs.
The act of thinking is to reach deep inside oneself.
We've now compared more than a hundred replies against GPT Pro's, and the quality is roughly the same. Sometimes a little worse, sometimes a little better. Always more detailed. Never unacceptable.
But how to convince our customers that we have the right technology and know how to use it appropriately? We're trying, but it's not easy.
Part of that is accountability. When the LLM produces rubbish, rare as that may be, who is accountable? There is no person, with a reputation at stake, attached to it.
Being able to hold someone liable for an F-up is how we have managed to function as a society and get to where we are today.
I can understand why a lot of companies are cutting junior roles. AI automates most of the stuff juniors are good at (coding fast) but not much of what seniors are good at.
That said, I've worked with some juniors who manage to navigate this; they do it by focusing on higher-order thinking and developing a sense of what's important through interacting with senior engineers. Unfortunately, it raises the talent bar for juniors: they have to become more intelligent, not in a puzzle-solving way, but in an architectural, big-picture way; almost like entrepreneurial thinking, but more detailed and complex.
LLMs don't have a worldview, which means they miss a lot of inconsistencies and logical contradictions. Most critically, LLMs don't know what's important (at least not accurately enough), so they can't prioritize effectively and they make a lot of bad decisions.
It's kind of interesting to me, because a lot of the areas where I held a contrarian opinion in software development are exactly where I now see LLMs getting trapped and producing bad results. It's as if all my contrarian opinions became much more valuable.
We’ve all seen this phrase thrown around, and it’s useless. Define the characteristics of one.
I’d argue the hiring of juniors is dropping because why should a firm ‘invest’ in someone who is likely to leave before they become a benefit to the firm? Mobility in the software engineering profession harms it, and the fanfare around LLMs strengthens that argument via the leverage they give seniors.
In accounting, for example, people tend to stick around at one firm for a long time, so the expense is an investment: firms invest in juniors to grow future directors, partners, and so on. That model doesn’t apply to software engineering. It may have at one point, but not anymore.
So I’d say fairly flat commit-acceptance numbers make sense, even in the context of improving LLMs.
I only say that because I'm a shit frontend dev. Honestly, I'm not that bad anymore, but I'm still shit, and the AI will probably generate better code than I will.
Which is akin to driving a car: the vehicle itself doesn’t know where to go. It requires you to prompt it via steering, braking, and so on, and then to review what happens in response.
That’s not necessarily a bad thing; reviewing code is ultimately what matters most, as long as what’s produced is more often than not correct and legible. Whether it is, though, is a different issue, and one on which there’s no consensus among software engineers.
1) Something happened during 2025 that made the models (or, crucially, the wrapping terminal-based apps like Claude Code or Codex) much better. I work almost entirely in the terminal now.
2) The quality of the code is still quite often terrible. Quadruple-nested control flow abounds (the sketch below shows the kind of thing I mean). Software architecture is unsound even at small scope. People say AI is “good at front end,” but I see the worst kind of atrocities there: a few days ago Codex 5.3 tried to inject a massive HTML element with a CSS ::before hack rather than properly refactoring the markup.
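To make the nesting complaint concrete, here is a hypothetical sketch, not actual model output; the User/Cart shapes and function names are invented for illustration. The first function shows the pattern where every precondition adds an indentation level; the second is the guard-clause refactor a reviewer would normally ask for:

```typescript
// Invented shapes, purely for illustration.
interface Cart {
  items: string[];
}

interface User {
  active: boolean;
  cart?: Cart;
}

// The generated-code pattern: four nested ifs, one per precondition.
function discountedItems(user: User | null): string[] {
  if (user !== null) {
    if (user.active) {
      if (user.cart) {
        if (user.cart.items.length > 0) {
          return user.cart.items.map((item) => `${item} (discounted)`);
        }
      }
    }
  }
  return [];
}

// The same logic flattened with early returns: one nesting level,
// each precondition stated once, happy path at the left margin.
function discountedItemsFlat(user: User | null): string[] {
  if (user === null || !user.active) return [];
  if (!user.cart || user.cart.items.length === 0) return [];
  return user.cart.items.map((item) => `${item} (discounted)`);
}
```

The two functions are behaviorally identical; the complaint is purely about the shape the models default to.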
Two forces feel true simultaneously but in permanent tension. I still cannot make up my mind and see the synthesis in the dialectic: where this is truly going, whether we’re meaningfully moving forward or mostly moving in circles.
I don’t see that ever going away. Humans have learned to trust other humans over a long time scale, with rules in place to control behaviour.
My subjective assessment is that agents like Copilot got better because of better harnesses and fine-tuning of models to use those harnesses. But they are improving not toward labor substitution, but toward significant, though not earth-shaking, complementarity. That complementarity is stronger for more experienced developers.