Yep, that's how you get better output from AI. A lot of devs haven't learned that yet. They still see it as 'better autocomplete'.
A good AI development framework needs to support a tail of deprecated choices in the codebase.
Skills are considerably better for this than design docs.
A slightly sarcastic (or perhaps not so slightly...) mental model of legal conflict resolution is that much of it boils down to throwing lots of material at the opposing side, claiming it shows that your side is right, and leaving the other side the task of finding a flaw in it. I believe this game of quantity runs the whole range from "I'll have my lawyer repeat my argument on their letterhead" all the way to paper tsunamis like the Google-Oracle trial.
Now give both sides access to LLMs... I wonder if the legal profession will eventually settle on some format of in-person, offline resolution with strict limits on recesses and/or word counts for both documents and notes, because otherwise conflicts will fail to get settled in anyone's lifetime (or will be won by whoever doesn't run out of tokens first. Come to think of it, the technogarchs would love that, so I guess this is exactly what will happen barring a revolution).
I am referring to the act of merely pasting a model's output as a comment. Have the decency to understand what the LLM is writing and write your own message.