I agree with your first point: maybe AI will close some of those gaps with future advances, but I think a large part of the damage will have been done by then.
Regarding the memory of reasoning from LLMs, I think the issue is that even if you can solve it in the future, you already have code for which you've lost the artifacts associated with the original generation. Overall I find there's a lot of talk (especially in the mainstream media) about AI "always learning" when models don't actually learn anything new until a new model is released.
> Why does it require 100% accuracy 100% of the time? Humans are not 100% accurate 100% of the time and we seem to trust them with our code.
Correct, but humans writing code don't leave you with a Bus Factor of 0, so it's easier to go back, understand what went wrong, and address it.
If the other gaps mentioned above are addressed, then I agree that this also partially goes away.
Document. Whenever a member of the team leaves, you should always make them document what's in their head that isn't already written down (or better yet, make documentation a required part of normal development).
The thing to understand is that every (new) AI chat session is effectively a member entering and leaving your team. So to make that work well you need great onboarding (provide documentation) and great offboarding (require documentation).
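As a rough illustration, here's a minimal sketch of that onboarding/offboarding loop. The `chat` callable stands in for whatever LLM client you actually use, and the file paths are made up:

```python
from typing import Callable

Messages = list[dict[str, str]]

def run_session(chat: Callable[[Messages], str], task: str) -> str:
    # Onboarding: hand the "new team member" the existing documentation.
    with open("docs/ARCHITECTURE.md") as f:
        docs = f.read()
    messages: Messages = [
        {"role": "system", "content": f"Project documentation:\n{docs}"},
        {"role": "user", "content": task},
    ]
    answer = chat(messages)

    # Offboarding: before the session ends, require it to write down the
    # decisions and context that exist only in this conversation.
    messages += [
        {"role": "assistant", "content": answer},
        {"role": "user", "content": "Document every decision, assumption, and "
                                     "tradeoff from this session for the next one."},
    ]
    with open("docs/session_notes.md", "a") as f:
        f.write(chat(messages) + "\n")
    return answer
```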
Now what you've got is a junk-soup where the original problem is buried somewhere in the pile.
The best approach I've found is to start a fresh conversation with the original problem statement, with any improvements or negative reinforcements you've gotten out of the LLM tacked on.
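In hypothetical form, it's just prompt reassembly. The function and the example lessons below are mine, not a real API:

```python
def fresh_prompt(problem_statement: str, lessons: list[str]) -> str:
    """Build a clean prompt instead of continuing a polluted thread."""
    parts = [problem_statement]
    if lessons:
        parts.append("\nConstraints learned from earlier attempts:")
        parts += [f"- {lesson}" for lesson in lessons]
    return "\n".join(parts)

print(fresh_prompt(
    "Fix the crash when --config is passed twice.",
    ["The config loads before argv parsing, so defaults must not read argv.",
     "Keep the fix inside cli.py; the parser is shared with the test harness."],
))
```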
I typically have ChatGPT 5 Thinking, Claude 4.1 Opus, Grok 4, and Gemini 2.5 Pro all churning on the same question at once, then copy-paste relevant improvements across them.
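If you script that fan-out instead of juggling browser tabs, it's a parallel map over per-model callables. This sketch assumes you've already wrapped each provider's client in a plain question-to-answer function:

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable

def fan_out(question: str,
            models: dict[str, Callable[[str], str]]) -> dict[str, str]:
    # Run the same question against every model concurrently, so the
    # answers can be compared and improvements copied across.
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = {name: pool.submit(ask, question)
                   for name, ask in models.items()}
        return {name: future.result() for name, future in futures.items()}
```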
That means positively worded instructions ("do x") work better than negative ones ("don't do y"). The more a concept you don't want the model to use shows up in the context, the more it tends to pull the response towards it, even with explicit negation or "avoid" instructions.
I think this is why clearing all the crap from the context, save for perhaps a single summarizing negative instruction, helps a lot.
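As a toy example (the instructions here are invented), the rewrite keeps the unwanted concepts out of the fresh context entirely and collapses everything else into at most one summarizing negative:

```python
# Polluted context: every line re-mentions the concepts you want avoided.
polluted = [
    "Don't use recursion like your last three attempts did.",
    "Don't call the deprecated parse_legacy() helper again.",
]

# Fresh context: positive instructions, plus one summarizing negative.
fresh = [
    "Traverse the tree with an explicit stack.",
    "Use only the current parse() API.",
    "Avoid the approaches rejected in earlier attempts.",
]
```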