This is a good question, but also: how do we make sure that humans understand the code that _other humans_ have (supposedly) written? Effective code review is hard because it implies the reviewer already has their own mental model of how a task could/would/should have been done, or is at the very least building one at reading time and internally asking 'Does this make sense?'.
Without that basis, code review is more like fuzzy standards compliance, which can still be useful, but it's not the same as a review process that works by comparing alternate or cooperatively competing models, so I wonder how much of that is gained through a quiz-style interaction.
Proof that the busy beaver function is not computable.
http://computation4cognitivescientists.weebly.com/uploads/6/...
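The standard argument behind that link is a reduction from the halting problem; a hedged sketch (my notation, not necessarily the linked write-up's), where BB(n) is the maximum number of steps any halting n-state Turing machine takes on blank input:

```latex
% Suppose, for contradiction, that BB is computable.
% Then the halting problem (on blank input) becomes decidable:
\text{Given an } n\text{-state machine } M, \text{ compute } BB(n)
\text{ and simulate } M \text{ for } BB(n) \text{ steps.} \\
M \text{ halts} \iff M \text{ halts within } BB(n) \text{ steps,} \\
\text{since } BB(n) \text{ bounds the running time of every halting } n\text{-state machine.} \\
\text{This decides the halting problem, a contradiction; hence } BB \text{ is not computable.}
```

Equivalently: BB grows faster than every computable function, so no program can compute it.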
We've got a huge LGTM problem where people approve PRs they clearly don't understand.
Recently we had a bug in code written by an employee who had been laid off. The people who reviewed it are both still with the company, but neither of them could explain what the code did.
That triggered this angry tweet:
https://x.com/donatj/status/1945593385902846118