I have friends who are highly educated professionals (PhDs, MDs) who just assume that AI/LLMs make no mistakes.
They were shocked that hallucinations are even possible. I wonder if there's a halo effect where the perfect grammar, structure, and confidence of LLM output lead some users to assume expertise.
Highly educated professionals, in my experience, are often very bad at applied epistemology -- they have little sense of what they do and don't know.
But I also think that if a maintainer asks you to jump before submitting a PR, you politely ask, “how high?”