I haven't had anything as severe as OP, but I have had minor issues. For instance, Claude dropped a "production" database (it was only a demo for the hackerspace; I had previously told Claude the project was "in development" because it was worrying too much about backwards compatibility, so it assumed it could just drop the db). Sometimes a file gets deleted, sometimes a commit gets made and pushed without checking, and so on, despite instructions to the contrary.
I'm building a personal repo of best practices and scripts for running Claude safely, so I'm always curious about other people's usage patterns.
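For what it's worth, the kind of script I mean is nothing clever, just a pre-execution filter that refuses obviously destructive commands. A rough sketch in Python (the patterns and the exit-code convention are purely illustrative, not tied to any particular tool's hook API):

```python
# Sketch of a command guard: refuse obviously destructive shell commands
# before an agent gets to run them. Patterns are examples, not exhaustive.
import re
import sys

DENY_PATTERNS = [
    r"\bdrop\s+(database|table)\b",   # destructive SQL
    r"\bgit\s+push\b.*--force",       # force pushes
    r"\brm\s+-rf\s+/",                # recursive deletes at the root
]

def is_blocked(command: str) -> bool:
    """Return True if the command matches any deny pattern."""
    return any(re.search(p, command, re.IGNORECASE) for p in DENY_PATTERNS)

if __name__ == "__main__":
    cmd = " ".join(sys.argv[1:])
    if is_blocked(cmd):
        print(f"blocked: {cmd}", file=sys.stderr)
        sys.exit(2)   # non-zero exit so the caller refuses to run it
    sys.exit(0)
```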
If they make the LLMs more productive, it's probably explained by something simpler that has nothing to do with the names of the roles or their descriptions: adversarial review works well for ensuring quality, parallelism is obviously useful, important decisions should go to stronger models, and using the cheapest model that can handle a given task keeps costs down.
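A minimal sketch of that last point, routing by how consequential the task is rather than by role name (the model names and the `call_model` stub are placeholders, not a real API):

```python
# Route each task to a stronger or cheaper model based on stakes.
from dataclasses import dataclass

@dataclass
class Task:
    description: str
    high_stakes: bool  # e.g. architectural decisions, anything destructive

def pick_model(task: Task) -> str:
    # Important decisions go to the strongest model; routine work goes to
    # the cheapest model that can handle it.
    return "strong-model" if task.high_stakes else "cheap-model"

def call_model(model: str, prompt: str) -> str:
    # Stub standing in for whatever client library you actually use.
    return f"[{model}] response to: {prompt}"

if __name__ == "__main__":
    tasks = [
        Task("rename a local variable", high_stakes=False),
        Task("choose the database migration strategy", high_stakes=True),
    ]
    for t in tasks:
        print(call_model(pick_model(t), t.description))
```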
For instance, if an agent only has to be concerned with one task, its context can be massively reduced. And the next agent can just be told the outcome, so its context load is reduced too: it doesn't need to see the inner workings, just the result.
For example, a security-review agent only needs the code and a set of security rules, and its output is just a list of problems. The next agent then gets that list of problems to fix, without the full history of how they were found.
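A rough sketch of that two-stage pattern, assuming a hypothetical `ask()` call in place of a real client; the point is only that the fixer never sees anything beyond the findings list:

```python
# Reviewer sees the code plus the rules; the fixer only sees the findings.
def ask(model: str, prompt: str) -> str:
    # Stub standing in for a real LLM call.
    return f"[{model}] {prompt[:60]}..."

SECURITY_RULES = [
    "no string-built SQL queries",
    "no secrets committed to the repo",
    "validate all external input",
]

def review(code: str) -> list[str]:
    # Reviewer gets the full code and rules, returns only a findings list.
    prompt = ("Check this code against these rules and list violations:\n"
              + "\n".join(SECURITY_RULES) + "\n---\n" + code)
    return ask("reviewer-model", prompt).splitlines()

def fix(findings: list[str]) -> str:
    # Fixer never sees the reviewer's reasoning or the code history,
    # just the problems it has to address.
    prompt = "Apply fixes for these findings:\n" + "\n".join(findings)
    return ask("fixer-model", prompt)

if __name__ == "__main__":
    findings = review("cursor.execute('SELECT * FROM users WHERE id=' + uid)")
    print(fix(findings))
```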