However, I guess that at least some of that can be mitigated by distilling out a system description and then running agents again to refactor the entire thing.
This has already been a problem, and there are no real ramifications for it. Even something like Cloudflare halting a significant amount of Internet traffic for any length of time is not (as far as I know) investigated independently, and nobody potentially faces charges. With other civil engineering endeavors, there absolutely is accountability: regular checks, government agencies that audit systems, penalties for causing harm, and so on are expected in those areas.
LLM-generated code is the continuation of the bastardization of software "engineering." Now not only is nobody accountable, but a black-box cluster of computers cannot even reasonably be held accountable. If someone makes a tragic mistake today, it can be understood who caused it. If a "Cloudflare2" comes about whose code is entirely (or significantly) generated, whoever is in charge can just throw up their hands and say "hey, I don't know why it did this, and the people who made the system that made this mistake don't know why it did this either." It has been, and will continue to be, very concerning.
When I use a framework, it's because I believe that the designers of that framework are i) probably better at software engineering than I am, and ii) have encountered all sorts of problems and scaling issues (both in terms of usage and actual codebase size) that I haven't encountered yet, and have designed the framework to ameliorate those problems.
Those beliefs aren't always true, but they're often true.
Starting projects is easy. You often don't get to the really thorny problems until you're already operating at scale and under considerable pressure. Trying to rearchitect things at that point sucks.
I think you missed the whole point. This is not about you understanding a particular change. This is about the person behind the code change not understanding the software they are tasked with maintaining. It's akin to the discussion about the fundamental difference between script kiddies and hackers.
With LLMs and coding agents, there is a clear pressure to turn developers into prompt kiddies: people who can deliver results when the problem is bounded, but are fundamentally unable to understand what they did or the system they are working in.
This is not about sudden onsets of incompetence. This is about a radical change in workflows that no longer favor, or even allow, the research needed to familiarize yourself with a project. You no longer need to pick through a directory tree to learn where things are, or navigate through code to check where a function is called or which component relates to which. Having to manually open a file to read or write it is a learning moment that lets you recall and understand how and why things are done. With LLMs you don't even know what is there.
Thus developers who lean heavily on LLMs don't get to learn what's happening. Everyone can treat the project as a black box, and focus on observable changes to the project's behavior.
This is a good thing. I don’t need to focus on oil refineries when I fill my car with gas. I don’t know how to run a refinery, and don’t need to know.