Absolutely not.
In the short term it will, while OpenAI/Anthropic/Anysphere destroy software development as a career. But they're just running the Uber playbook - right now they're giving away VC money by funding the datacenters that train and run the LLMs. As soon as they've put enough developers out of jobs and ensured there's no new pipeline of developers capable of writing code and building platforms without AI assistance, they'll stop burning VC cash and start charging rates that not only break even but also deliver the 100x return their investors demand.
On the other hand, if the agent is just as capable of fixing bugs in legacy code as rewriting it, and humans are no longer in the loop, who cares if it's legacy code?
But I can see it "working". At least for the values of "working" that would be "good enough" for a large portion of the production code I've written or overseen in my 30+ year career.
Some code pretty much outlasts all expectations because it just works. I had a Perl script I wrote around 1995-1998 that ran from cron and sent email to my personal account. I quit that job, but the server running it got migrated to virtual machines and didn't stop sending me email until about 2017 - at least three sales or corporate takeovers later. (It was _probably_ running on CentOS 4 when I last touched it around 2005; I'd love to know if it was just turned into a VM and kept running as part of critical infrastructure on CentOS 4 twelve years later.)
But most code only lasts as long as the idea, the money, or the people behind the idea last. All the websites and differently skinned CRUD apps I built or managed rarely lasted 5 years without being either shut down or rewritten from the ground up by new developers or leadership in whatever the Resume Driven Development language or framework of the moment was - toss out the Perl and rewrite it in Python, toss out the Python and rewrite it in Ruby on Rails, then decide we need Enterprise Java to post about on LinkedIn, then rewrite that in Node.js, now toss out the Node and use Go or Rust. I'm reasonably sure this year's or perhaps next year's LLM coding tools can do a better job of those rewrites than the people who actually did them...
This kind of stuff is the textbook broken window fallacy.
There were times when I was close to getting fed up and just quitting in the middle of some of the high-profile ops I had to deal with, which would've left the entire system inoperable for an extended period of time. And frankly, from talking to a lot of other engineers, it sounds like a lot of companies operate in this manner.
I fully expect a lot of these issues to come home to roost as AI compounds loss of institutional knowledge and leads to rapid system decay.
So that when one of your employers has a SaaS-related outage, you can just switch to one of your other employers and keep working.
All hail the 100x AI assisted developers doing 10x jobs at 5 different companies at the same time!
This assumes you're adding documentation, tests, instructions, and other scaffolding along the way, of course.
(And now I'm wondering how soon the standard AI-first response to bug reports will be a complete rewrite by AI using the previous prompts plus the new bug report? Are people already working on CI/CD systems that replace the CI part with whole-project AI rewrites?)
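To make that idle wondering concrete, here's a minimal sketch of what a "regenerate instead of rebuild" CI step might look like. Everything in it is hypothetical - `llm_generate_project()` and `prompts/history.md` are invented stand-ins, not any real tool or API - but the shape of the idea is: accumulate every prompt ever used, append the new bug report, regenerate the whole project, and let the test suite be the only gate before the CD half takes over.

```python
# Hypothetical sketch only - llm_generate_project() and prompts/history.md are
# invented stand-ins for whatever code-generation agent/API you'd actually use.
import subprocess
from pathlib import Path

PROMPT_LOG = Path("prompts/history.md")  # every prompt used to build the project so far


def llm_generate_project(prompt: str, out_dir: Path) -> None:
    """Stand-in for a whole-project code-generation call to your model/agent of choice."""
    raise NotImplementedError("wire up an actual code-generation backend here")


def regenerate_on_bug_report(bug_report: str, workdir: Path) -> bool:
    """Instead of patching the existing source, re-derive the entire project from
    the accumulated prompt history plus the new bug report, then let the test
    suite act as the only gate before deployment."""
    prompt = PROMPT_LOG.read_text() + "\n\n## New bug report\n" + bug_report
    llm_generate_project(prompt, workdir)

    # The "CD" half stays conventional: only ship if the regenerated project passes its tests.
    result = subprocess.run(["pytest", "-q"], cwd=workdir)
    return result.returncode == 0
```

In that world the prompt log, not the code, becomes the real source of truth, and the generated code is just a build artifact.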
Humans (and I strongly suspect LLMs, since they're a statistical synthesis of human production) are fairly predictable.
We tend to tackle the same problems the same way. So how something is solved tells you a lot about why, by whom, and when it was solved.
Still, it's a valid point that much of the knowledge is now obscured, but the same could be said of an organization with high employee churn.
That's very scale-related.
I rarely have any trouble reading and understanding Arduino code. But that's got a hard upper limit (at least on the common/original Arduinos) of 32kB of (compiled) code.
It's many weeks' or months' worth of effort, or possibly impossible, for me to read and understand a platform with a hundred or so interdependent microservices written in several languages. _Perhaps_ there was a very skilled and experienced architect behind all of that, who demanded comprehensive API styles and docs? But if all that was vibe coded and then dropped on me to be responsible for? I'd just quit.
I wonder. With a sufficiently sociopathic point of view, every high-end car theft almost certainly represents a subsequent insurance claim and new car purchase. And every insurance claim results in upward pressure on insurance prices. If you just look at car theft and export through an "economic impact to the state" lens, there are without doubt a lot of industry and political people who see it as new revenue and _good_ for the state.
Someone made a mistake. These things happen.
> and it happen to be the one exploited?
Why wouldn't the vulnerable service be the one that gets exploited? It seems to me that's a far more likely scenario than the non-vulnerable service being exploited... no?
> Someone made a mistake. These things happen.
Some company didn't have appropriate processes in place.
For ISO 27001 certification you at least need to pay lip service to having documents and policies about how you deploy secure platforms. (As annoying as ISO certification is, it does at least try to ensure you have thought about and documented stuff like this.)