And yes, I recognize that AI has already created profound change: every software engineer now depends heavily on copilots, education faces a major integrity challenge, and search has been transformed. I just don't think those changes are on the same level as the normalization of cutting-edge computers in everyone's pockets, as our personal relationships moving increasingly online, or as startups being able to scale without maintaining physical compute infrastructure.
To me, treating AI as "different" is still unsubstantiated. Could we get there? Absolutely. We just haven't yet. But some people have started talking about it in a way reminiscent of Pascal's Wager: as if the slight chance of a godlike reward from building AI makes it rational to devote our all to it. I'm still waiting to see.
With so many engineers using copilots, and since LLMs output the most frequent patterns, it's possible that more and more software will look the same, which would further reinforce those patterns.
For example, the em-dash habit requires additional prompts and instructions to override. Doing anything unusual takes extra effort.
I'm not convinced that review comments as commits make things easier, but I think storing them in git in some way is a good idea (e.g., git notes, or in commit messages after merge).
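One way this could work is with `git notes`, which attaches metadata to commits without rewriting history. A minimal sketch, in a throwaway repo (the `review` notes ref and the comment text are arbitrary choices for illustration):

```shell
set -e
# Demo in a temporary repo: store a review comment as a git note
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "feature: add widget"

# Attach the review comment under a dedicated notes ref
# ("review" is a hypothetical namespace; the default is refs/notes/commits)
git -c user.email=demo@example.com -c user.name=demo \
    notes --ref=review add -m "review: consider extracting a helper" HEAD

# Read the comment back for a given commit
git notes --ref=review show HEAD
```

Because notes live under their own ref, they can be fetched or ignored independently of the branch history, which keeps the commit graph clean while still keeping review context in the repository.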