I think this points at something key, but I'm not sure of the right way to articulate it.
A human-written comment may be worth something, but an LLM-generated one is cheap/worthless.
The nicest phrase I saw capturing the thought was: "I'd rather read the prompt".
Letting an LLM generate it again is probably just as good as publishing something an LLM wrote in the first place.
Maybe. The naturally curious will also typically be slower to arrive at a solution, because they want to make certain they have all the facts.
If everyone else is racing ahead, will the slowpokes be rewarded for their comprehension or punished for their poor metrics?
It's always possible to go slower (with diminishing benefits).
Or, putting it in terms of benefits and risks/costs: I think it's fair to treat "fast with shallow understanding" and "slower but deeper understanding" as opposite ends of a continuum.
I think what's preferable depends somewhat on context and on the question "what's the cost of making a mistake?". If mistakes are expensive, surely it's better to take the approach that builds more comprehensive understanding. If mistakes are cheap, surely faster iteration is better.
The impact of LLM tools? They amplify both ends of the continuum: LLM tools make it quicker to build a comprehensive understanding, much as autocompletion or high-level programming languages speed up development.