I'm sorry, but I disagree with this claim. That is not my experience, nor that of many others. It's true that you can make them do something without learning anything. However, it takes time to learn what they are good and bad at, what information they need, and what nonsense they'll do without express guidance. It also takes time to know what to look for when reviewing results.
I also find that they work fine for languages without static types. You do need tests, yes, but you need them anyway.
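A tiny, hypothetical example of why I say tests carry the load either way (the function and its bug are made up by me, not taken from any assistant's output): a type checker would accept this function, but only a test catches the behavior bug.

    # A type checker sees list in, list out, and is satisfied.
    def last_n(items, n):
        return items[-n:]   # wrong when n == 0: -0 is 0, so it returns the whole list

    # Only a test catches that edge case, typed language or not.
    def test_last_n_zero():
        assert last_n([1, 2, 3], 0) == []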
I really like the word "assistant" for what we have today. The AI code assistant tools available today, like Claude Code and GitHub Copilot, can't replace humans in doing software development. Not even close. But they are often useful to human developers, and today, that's the more important measure.
I've been spending time with various AI tools, especially Claude Code and GitHub Copilot. They're amazing one minute, and they make bone-headedly bad recommendations the next. It takes effort to learn how to write good prompts, and you have to review and critique what they produce. I'm particularly concerned about security: they're definitely happy to write insecure code. If you know what you're doing, prompt them well, and review their output, you can get good results.
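To give a concrete, hypothetical sketch of the kind of insecure code I mean (the table and function names here are mine, not from any real session): building SQL by string interpolation is the sort of thing that slips through without review, and the fix is a parameterized query.

    import sqlite3

    # The insecure pattern: interpolating user input into SQL,
    # which leaves the query open to SQL injection.
    def find_user_insecure(conn, username):
        return conn.execute(
            f"SELECT id, name FROM users WHERE name = '{username}'"
        ).fetchone()

    # After review: a parameterized query, so the driver handles
    # quoting and injected input stays inert.
    def find_user(conn, username):
        return conn.execute(
            "SELECT id, name FROM users WHERE name = ?", (username,)
        ).fetchone()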
I don't know if they'll ever reach "full autonomy". They don't need to get there to be useful.