I think that could work, but only in the same way that plenty of big companies have codebases that are a giant ball of mud and yet somehow manage to stay in business and occasionally ship a new feature.
Meanwhile their rivals with well-constructed codebases, who can promptly ship features that work, are able to run rings around them.
I expect that we'll learn over time that LLM-managed big-ball-of-mud codebases are less valuable than LLM-managed, high-quality, well-architected, long-term maintained codebases.
I get that even now it's very easy to let stuff get out of hand if you aren't paying close attention yourself to the actual code, so people assume that it's some fundamental limitation of all LLMs. But it's not, much like six-fingered hands were just a temporary state, not anything deep or necessary that was enforced by the diffusion architecture.
Neural networks can at best uncover latent correlations that were already available in the inputs. Expecting anything more is basically just wishful thinking.
You never screened candidates who couldn’t act their way out of a wet paper bag?
No longer interacting with your peers but with an LLM instead? The knowledge centralized via telemetry and spying on every user's every interaction, and only available through an enshittified subscription to a model that's been trained on this stolen data?
I have plenty of real peers I interact with; I don't need that noise when I just want a quick answer to a technical question. LLMs are fantastic for that use case.