Except they don't correct themselves when asked.
I'm sure we've all been there, many, many, many, many, many times...
- User: "This is wrong because X"
- AI: "You're absolutely right! Here's a production-ready fixed answer"
- User: "No, that's wrong because Y"
- AI: "I apologise for frustrating you! Here's a robust answer that works"
- User: "You idiot, you just put X back in there"
- and so the vicious circle continues...
They tend to lose useful context of the original problem and its stated goals very quickly.
Is there anything else I can help you with?