Once they hit some threshold of project size, they overcommit, bite off too much, or don't recognize that they're missing context. Agents help with this by letting them see their mistakes and try again, but eventually you hit a death loop.
I think right around now someone will realize that there should be two separate RLHF tunes: one for the initial prototype, and another for the hard engineering that follows. I doubt it's that hard to make a methodical, engineering-minded tune, but the emphasis so far has been on flashy demos and quick wins. Cursor and friends should be collecting this data as we speak, and I expect curmudgeonly agents to start appearing within a year.
Combine this with better feedback loops (e.g. MCP-accessible debuggers), the agent doing its own Stack Overflow/GitHub searches, and continued efficiency work driving token costs down by an order of magnitude every year or so, and agents will get very, very good, very fast.
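To make the "MCP-accessible debugger" idea concrete, here's a rough sketch using the official Python MCP SDK (the mcp package). The server name and the run_with_traceback tool are hypothetical, not anything Cursor et al. actually ship; the point is just that the agent gets the real stack trace back instead of guessing from logs:

    import subprocess
    from mcp.server.fastmcp import FastMCP

    # Hypothetical "debugger" MCP server: exposes one tool the agent can call
    # to rerun a script and read the actual failure output.
    mcp = FastMCP("debugger")

    @mcp.tool()
    def run_with_traceback(path: str) -> str:
        """Run a Python script and return its exit status plus output/traceback."""
        result = subprocess.run(
            ["python", path], capture_output=True, text=True, timeout=60
        )
        if result.returncode == 0:
            return "exit 0\n" + result.stdout[-2000:]
        return f"exit {result.returncode}\n{result.stderr[-4000:]}"

    if __name__ == "__main__":
        mcp.run()  # stdio transport; point the agent's MCP config at this server

Wire that into the agent's MCP config and the "see your mistake, try again" loop stops depending on a human pasting tracebacks into the chat.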
In this atmosphere, humans will soon exist mainly to fetch context the agent can't get itself, either for security reasons or because no one's built the integration yet. And even that role will be short-lived, because the integrations always get built.
So I guess there's a window for the "copilot" reality, but it feels very, very brief. I don't think agents will need humans for very long.
(Hey fizx!)