Edit: I'm saying this as a developer who uses LLMs for coding, so I feel I can criticize them constructively. Also, sometimes the code actually works when I put in enough effort to describe what I expect; I suppose I could just write the code myself, but the problem is that I can't tell in advance which approach will deliver faster.
From where I sit, right now, this does not seem to be the case.
It's as if writing down the code were not the biggest problem, or the biggest time sink, in building software.
Edit: I'm also puzzled that so many people buy into the hype that LLMs are good at coding when they can't even accurately handle the seemingly simple task of plain-English summarization, as evidenced in https://www.youtube.com/watch?v=MrwJgDHJJoE. If an AI summarizes the code in its own context incorrectly, it won't be able to write it correctly either.