It's just like cars driving themselves: you're expected to jump in if there's a mistake, but humans aren't going to react as fast as if they were driving, because they won't be engaged, and no one can stay as engaged as they were when they were doing it themselves.
We need to stop pretending we can tell people they "just" need to check LLM output for accuracy; that's a process that inevitably leads to people not checking and things slipping through. Pretending it's the people's fault, when essentially everyone using these tools would eventually end up doing the same thing, is stupid and won't solve the core problem.
If this is happening on a widespread basis in the economy, we should see evidence of it sometime this year, and that's what investors are anticipating with SaaS stocks.
Strongly agree with this. It took me a while to realize that "agentic engineering" wasn't about writing software; it was about being able to very quickly iterate on bespoke tools for solving a very specific problem you have.
However, as soon as you've unblocked yourself on the real problem you want to solve, the agentic engineering part is no longer interesting. It's great to be working on a problem and realize you could improve things quickly with a request to an agent, but your focus should largely stay on solving the problem itself.
Yet I see so many people talking about running multiple agents and just building something, without much effort spent actually using that thing, as though the agentic code itself is where the value lies. I suspect this is a hangover from decades in which software itself was valuable (we still have plenty of highly valued, unprofitable software companies as a testament to this).
I'm reminded a bit of Alan Watts' famous quote about psychedelics:
> If you get the message, hang up the phone.
If you're really leveraging AI to do something unique and potentially quite disruptive, very quickly the "AI" part should become fairly uninteresting and not the focus of your attention.
As the author says, there will certainly be a number of people who decide to play with LLM games or whatever, and content farms will get even more generic while making fewer writing errors, but I don't think the age of communicating thought, person to person, through text is "over".