The primary difference I see in people who get the most value from AI tools is that they expect it to make mistakes: they always carefully review the code and are fine with acting, in some cases, more like an editor than an author. They also seem to have a good sense of where AI can add a lot of value (implementing well-defined functions, writing tests, etc.) vs. where it tends to fall over (e.g. tasks that require large-scale context). Those who can't seem to get value from AI tools strike me as less tolerant of AI mistakes and less willing to iterate with AI agents, and they seem more likely to "throw the baby out with the bathwater", i.e. they fixate on the failure cases rather than simply limiting their usage to the cases where AI does a better job.
To be clear, I'm not saying one is necessarily "better" than the other, just that the reason for the dichotomy has a lot more to do with the programmers than with the domain. For me personally, while I get a lot of value out of AI coding, I also find that I don't enjoy the "editing" aspect as much as the "authoring" aspect.
The problem with this perspective is that anyone who works in a more niche programming area knows that the vast majority of programming discussions online aren't relevant to them. E.g., I've done macOS/iOS programming most of my career, and I now do work that's an order of magnitude more niche than that, and I commonly see programmers saying things like "you shouldn't use a debugger", which is a statement I can't imagine a macOS or iOS programmer making (don't get me wrong, they're probably out there; I've just never met or encountered one). So you just become used to most programming conversations being irrelevant to your work.
So of course the majority of AI conversations aren't relevant to your work either, because that's the expectation.
I think a lot of these conversations are two people with wildly different contexts trying to communicate, which is pointless. Really, we (the more niche programmers, that is) shouldn't be trying to participate in these conversations at all, because there's just not enough shared context to make communication effective.
We all just happen to fall under this same umbrella of "programming", which gives the illusion of a shared context. It's true that some things are relevant across the field (it's all just variables, loops, and conditionals), but many of the other details aren't universal, so it's silly to talk about them without first understanding the full context of the other person's work.
Sorry, but who TF says that? It's actually not something I hear commonly, and if I did, I would discount that person's opinion outright unless there were some other special context. I do a lot of web programming (Node, Java, Python primarily), and if someone told me "you shouldn't use a debugger" in those domains, I would question their competence.
Here's a good specific example: https://news.ycombinator.com/item?id=26928696