I believe the root cause of the "fatigue" and tension around AI is embedded much more deeply in the societal context.
US citizens (and those of the rest of the Western world) face real-life problems: youth unemployment, social/political instability, unaffordable housing, internet addiction (yes, I believe it is a real problem that people spend 5 hours a day on their phones) and social atomisation. Meanwhile, all resources are being rushed into building technology that does not fundamentally improve people's well-being. Advanced societies already had pretty good capabilities in writing, design, coding, searching for information, etc. Now we are pouring all available resources, at any cost, into automating these processes even more. The costs of this operation are tremendous, and it doesn't yield results that improve everyday lives.
In 2020 there were tons of UI/UX/graphics companies that could produce copious amounts of visual content for society while providing work to many people. Now we are about to automate this process and be able to generate an infinite amount of graphics on demand. To what end? Was our capability to create graphics any kind of bottleneck before? I don't think so.
The stock market and tech leadership are completely decoupled from the problems that the majority of people face. The immediate effect of AI is to commoditise intellectual work that previously functioned well, dispersed throughout society. This does not benefit the majority of people.
IMO: We should be using AI for the common good, i.e., lowering the cost of living, raising living standards, improving health, improving access to food and shelter, etc.
I have yet to see any LLM that appears to do that. They all seem to let me have a conversation with data; i.e., a better Google search, or a quick way to make clip art.
> We should be using AI for the common good, i.e., lowering the cost of living, raising living standards, improving health, improving access to food and shelter, etc.
I think the issue is that these aren't really technical problems, they're social problems.
I can't express how disappointed I am in the societal backlash to AI. It used to rightfully be something we looked forward to. I've been fascinated by it for as long as I've known what a computer was, from watching CyberChase as a kid in the early 2000s to reading the Asimov books to making my own silly sentence-mixing chatbot with a cult following on IRC.
I never thought a computer would pass the Turing test in our lifetime (my bot did by accident sometimes, which was always amusing). I spoke to an AI professor who's been at this since the 80s, and he never thought a computer would pass the Turing test in our lifetime either. And for it to happen, and for the reaction to be anything short of thunderous applause, betrays a society bankrupt of imagination, forward thinking, and wonder.
We let pearl-clutching loom smashers hijack the narrative to the point where a computer making a drawing based on natural language is "slop" and you're a bad person if you like it, instead of it being the coolest thing in the fucking world, which it actually is. We have chatbots that can do extraordinary feats of research and pattern-matching, but all we can do is cluck over some idiot giving himself bromide poisoning. The future is here and it's absolutely amazing, and I'm tired of pretending it isn't. I can't wait for this "AI users DNI", "This video proudly made less efficiently than it could have been because I'm afraid of AI" social zeitgeist to die off.
> instead of it being the coolest thing in the fucking world
Some people think an M-16 is the coolest thing in the world. Nobody thinks we should be handing them out to schoolchildren. The reaction is because most people think AI will compound our current problems. Look at video generation. Not only does it put a lot of people out of work, it also breaks the ability of people to post a video as proof of something. Now we have to try to determine whether the very real-looking video is from life or from a neural net. That is very dangerous, and the tech firms released it without any real thought or discussion as to the effect it would have. They make illegal arms dealers look thoughtful by comparison. You ignoring this (and other effects) is just childish.
I think that AI capabilities are now really impressive. Nevertheless, my point is not about whether it is “cool” (it is), but rather about what kind of society we are going to produce with it and how it impacts people’s lives.
> It used to rightfully be something we looked forward to
This is rather unimportant, but I would say that media has usually portrayed AI as a dangerous thing. 2001: A Space Odyssey, The Terminator, Mass Effect, Her, Alien, The Matrix, Ex Machina, you name it.
AI isn't solving the problems that our society needs to solve, and it's potentially making some of them worse. If you can't understand why people feel that way by now, then you are out of touch with their struggle. Instead of being disappointed in your fellow humans, who contain the same capacity for wonder as you do, perhaps you should wonder why you are so quick to dismiss them as Luddites. BTW, you might want to read more about those guys; they didn't actually hate technology just because it was progress. They hated the intentional disruption of their hard-earned stability in service of enriching the wealthy.
>It used to rightfully be something we looked forward to
Science fiction has always been mixed. In Star Trek, the cool technology and AGI-like computer are accompanied by a post-scarcity society where fundamental needs are taken care of. There are countless other stories where technology and AI are used as tools to enrich some at the expense of others.
>We let pearl-clutching loom smashers hijack the narrative to the point where a computer making a drawing based on natural language is "slop" and you're a bad person if you like it
I don't strongly hold one opinion or the other, but I think the root of people's backlash is fundamentally that this is something that jeopardizes their livelihood. Not in some abstract "now the beauty and humanity of art is lost" sort of way, but much more concretely: because of LLM adoption (or at least the hype), they are out of a job and cannot make money, which hurts their quality of life far more than access to LLMs improves it. Then those people see the "easy money" pouring into this bubble, and it would be hard not to get demoralized. You can claim that people just need to find a different job, but that ignores the reality that over the past century the skill floor has risen and the ladder has been pulled up; and perhaps even worse, reaching for that higher bar still leaves one "treading water" without any commensurate growth in earnings.
> I never thought a computer would pass the Turing test in our lifetime
Are we talking about the single non-peer-reviewed study showing that a random person guesses correctly only about a third of the time that a GPT-4.5 text is from a computer and not a human?
Learning to recognize the artifacts, style, and logical nonsense of an LLM is a skill. People are slowly learning those tells, and as they do, the Turing-test results will naturally drop, which strongly implies a major fault in how we administer the Turing test.
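To make the "learned tells" point concrete, here is a toy sketch in Python (purely illustrative; the phrase list is my own made-up assumption, not any validated detector, and real detection is much harder than keyword matching):

    # Toy "tell" detector: score text by how many stock LLM phrases appear.
    # The phrase list below is an invented example, not a real signal set.
    TELLS = ["delve into", "it's important to note", "rich tapestry", "as an ai"]

    def tell_score(text: str) -> float:
        """Return the fraction of known tells present in the text (0.0-1.0)."""
        lower = text.lower()
        return sum(phrase in lower for phrase in TELLS) / len(TELLS)

    print(tell_score("Let's delve into this rich tapestry of ideas."))  # 0.5

The cues people actually learn are subtler (rhythm, hedging, over-structured answers), but the dynamic is the same: once the tells are known, judges improve and the measured pass rate drops.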
“let” nothing.
There is pushback, and not being able to express it effectively doesn’t invalidate it.
As a concrete example: Here on HN, there are always debates on what the hell people mean when they say LLMs helped them code.
I’ve seen it happen enough that I now have a boilerplate request for posters: share your level of seniority, experience, domain familiarity, language familiarity, and project result, alongside how the LLM helped.
I am a nerd through and through, and still read copious amounts of science fiction on a weekly basis. I lack no wonder and love for tech.
To build that future, the jagged edges of AI output need to be mapped and tamed. That requires these kinds of precise conversations, so that we have a shared reality to work from.
Doing that job badly is the root cause of people talking past each other. Dismissing it as doomerism is essentially to miss market and customer feedback.
Is there a single AI corporation working for the public good? “Open”AI just shed the last vestiges of its non-profitdom, and every single AI CEO sounds like a deranged cultist.
Wake me up when we have the FSM equivalent for AI. What we have now is a whole lot of corporate wank.
Tech customers are massively AI hype fatigued at this point.
The tech isn’t going away, but a hard reset is overdue to bring things back down for a cold, hard reality check. Yesterday’s article about MSFT slashing quotas on AI sales because customers aren’t buying is in line with this broader theme.
Morgan Stanley is also quietly trying to offload its exposure to data-center financing, in a move that smells very summer-of-2008-ish. CNBC now talks about the AI bubble multiple times a day. OpenAI looks incredibly vulnerable and financially overextended.
I don’t want a hard bubble pop such that it nukes the tech ecosystem, but we’re reaching a breaking point.
I think your wording is the correct one, not "AI fatigue": I don't want to go back to the pre-AI era, yet at the same time I can't stand another "OMG it's over" tweet.
Yeah. Hype fatigue is a good description. Every time I see them talking about having AI book flights and hotels, I think about the digital assistants on phones. Didn't they promise us the same thing back then?
I won't believe any of the claims until I see them working (flawlessly).
> I don’t want a hard bubble pop such that it nukes the tech ecosystem, but we’re reaching a breaking point.
Some days I wonder if we'd be better off or worse off if we had a complete collapse of technology. I think it'd be painful with a massive drop in standard of living, but we could still recover. I wonder if the same will be true in a couple more generations.
I think it's dangerous to treat younger generations like replaceable cogs. What happens when there's no one around that knows how the cogs are supposed to fit together?
Yup. The tech giants surely know the correction is coming by now. They are trying to milk it just a tiny bit longer before it all comes crashing down.
Keep your eyes on the skies: I forecast executives in golden parachutes in the near future.
Yes. IPO talk suggests there will be rushed attempts to cash out before this all implodes, but all signs point to that ship having sailed.
I don’t see any big AI company having a successful IPO anytime soon, which is going to leave some folks stuck holding the financial equivalent of nuclear waste.
The annoying part is that every tech company made an internal mandate for every team to stuff AI into every product. There are some great products that use AI (Claude Code, ChatGPT, Nano-banana, etc.). But we simply haven't had time to come up with good ways of integrating AI into every software product. So instead every big tech company spent two years forcing AI into everything with minimal thought. Obviously people are not happy with this.
Some AI was done tastefully. Apple Photos search comes to mind. You can search for objects across your photos, and it does a reasonable job finding what you want. It's an example of AI that's so well done the end user doesn't even know it's there.
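For the curious, a minimal sketch of how that kind of search can work under the hood, using an open CLIP-style model (an assumption on my part; Apple's actual pipeline is not public):

    # Minimal embedding-based photo search in the spirit of the feature above.
    # Assumes the open "clip-ViT-B-32" checkpoint from sentence-transformers.
    from PIL import Image
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("clip-ViT-B-32")

    # Embed the photo library once, in the background.
    paths = ["beach.jpg", "dog.jpg", "receipt.jpg"]
    photo_embs = model.encode([Image.open(p) for p in paths])

    # Embed the text query and rank photos by cosine similarity.
    query_emb = model.encode(["a dog playing outside"])
    scores = util.cos_sim(query_emb, photo_embs)[0]
    best = max(range(len(paths)), key=lambda i: float(scores[i]))
    print(paths[best])  # expected: dog.jpg

The index gets built quietly offline, and the user just types what they remember; no chat window, no sparkle button.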
Microsoft pushing "Copilot", on the other hand, is the complete opposite. It's so badly integrated with any standard workflow that it's disruptive in the worst of ways.
It's not just that they are adding AI to every single product; it's being pushed on customers in incredibly intrusive and irritating ways that make it seem as though they're desperate for their AI investments to pan out. If your AI productivity enhancements are so amazing, shouldn't you be turning away customers at the door due to demand, instead of browbeating me into finally signing up in submission?
A lot of this AI backlash feels less about the tech itself and more about people feeling economically exposed. When you think your job or livelihood is on thin ice, it is easier to direct that fear at AI than at the fact that our elected reps have not offered any real plan for how workers are supposed to survive the transition.
AI becomes a stand-in for a bigger problem. We keep arguing about models and chatbots, but the real issue is that the economic safety net has not been updated in decades. Until that changes, people will keep treating AI as the thing to be angry at instead of the system that leaves them vulnerable.
A major factor in the backlash is that the AI is obnoxiously intrusive because companies are forcefully injecting it into everything. It pops up everywhere trying to be "helpful" when it is neither needed nor helpful. People often experience AI as an idiot constantly jabbering next to them while they are trying to get work done.
AI would be much more pleasant if it only showed up when summoned for a specific task.
I've mentioned this elsewhere on HN, yet it bears repeating:
The core issue is that AI is taking away, or will take away, or threatens to take away, experiences and activities that humans would WANT to do: things that give them meaning, many of which are tied to earning money and producing value by doing just that thing. As someone said, "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes."
Much of the meaning we humans derive from work is tied to the value it provides to society. One can code for fun, but doing the same coding where it provides value to others/society is far more meaningful.
Presently some may say: AI is amazing, I am much more productive; AI is just a tool; AI empowers me. The irony is that this in itself shows the deficiency of AI: it demonstrates that AI is not yet powerful enough to do the work WITHOUT needing to empower you or make you more productive. Ultimately AI aims to remove the need for a human intermediary altogether; that is the AI holy grail. Everything in between is just a stop along the way, so those it empowers should stop and think a little about the long-term implications. Your position right now may be financially or socially comfortable, but the you of just a few short months from now may be dramatically impacted.
I can well imagine the blood draining from people's faces: the graduate coder who can no longer get on the job ladder; the legal secretary whose dream job, dreamt from a young age, is being automated away; the journalist whose value has been supplanted by a white text box connected to an AI model.
> A lot of this AI backlash feels less about the tech itself and more about people feeling economically exposed.
This is what it is for me. I can see the value in AI tech, but big tech has inserted themselves as unneeded middlemen in way too much of our lives. The cynic in me is convinced this is just another attempt at owning us.
That leaked memo from Zuckerberg about VR is a good example. He's looking at Google and Apple having near absolute control over their mobile users and wants to get an ecosystem like that for Facebook. There's nothing about building a good product or setting things up so users are in control. It's all about wanting to own an ecosystem with trapped users.
If they can, big tech will gate every interaction or transaction and I think they see AI as a way to do that at scale. Don't ask your neighbour how to change a tire on your car. Ask AI. And pay them for the "knowledge".
Eh, it's way simpler than that. AI doesn't know when to STFU. When I write an email or document, I don't need modern-day Clippy constantly guessing (and second-guessing) my thoughts. I don't need an AI sparkle button plastered everywhere to summarize articles for me. It's infantilizing and reeks of desperation. If AI is a truly useful tool, then I'll integrate it into my workflow on my own terms and my own timeline.
Part of this is the behavior around it from some users, too. Like that guy spamming FOSS projects on GitHub with 13k LOC of code nobody asked for, then forwarding the criticism from the people forced to review it to Claude and copy-pasting the response back to them.
Triumphant posts on LinkedIn from former SEO/crypto-scam people telling everyone they'll be left behind if they don't adopt the latest flavor of text/image generator.
All these resources being spent on huge data centres for text generators when things like protein folding would be far more useful; billion-dollar salaries for "AI gurus" who are just throwing sh*t at the wall and hoping their particular mix of models and training works, while everyone else gets laid off.
The constant stream of exaggerated bragging "hahaha we will fire and replace you all" from AI companies is not helping.
This tech cycle does not even pretend to be "likable guys". They are framing themselves as sociopaths, being interested only in millionaires' money.
Makes for bad optics.
I think the anger towards AI is completely fabricated.
Where are the new luddites, really? I just don't see them. I see people talking about them, but they never actually show up.
My theory is that they don't actually exist. Their existence would legitimize AI, not bring it down, so AI people fantasize about this imaginary nemesis.
Data centers are better guarded than some government institutions. New luddites can't exactly go in smashing the servers.
The actual "new luddites" have been screaming on here for years complaining about losing their careers over immature tech for the sake of reducing labor costs.
> a Pew Research center survey found that nearly one in five Americans saw AI as a benefit rather than a threat. But by 2025, 43 percent of U.S. adults now believe AI is more likely to harm them than help them in the future, according to Pew.
Am I stupid, or is this a stupid line that proves the antithesis of what they want? It went from 4 in 5 being negative to less than half.
What even is journalism now.
Periodic reminder that Newsweek no longer exists. What you're reading is essentially an SEO play run by a religious cult that bought Newsweek's branding in a fire sale. A useful thing to do with any Newsweek story is to take a minute to look into the background of whoever the author of the story is.
Notably, this story is pitched as a "News Story", but it's not really that at all; it's an opinion piece with a couple of quotes from AI opponents. Frustratingly, not many people understand what "Newsweek" is today, so they're always going to be able to collect some quotes for whatever story they're running.
Is this still accurate? Wikipedia says that Newsweek was acquired by IBT Media (a front for a religious movement) in 2013 but returned to independent ownership under Dev Pragad and Jonathan Davis in 2018 following a criminal investigation into embezzlement. I was not able to confirm or rule out any remaining links between Newsweek's current owners and IBT Media.
It does appear that the new owners are very much leaning into a "new media" business model and the old journalistic staff is probably gone.
Dev Pragad was involved with the IBT ownership of Newsweek. The whole thing is a mess. Cards on the table, I'm throwing an elbow with the "religious cult" thing; the cult has not much to do with why you should be careful with Newsweek. Rather, it's that Newsweek as it exists today has nothing to do with Newsweek as people understand it. Whoever owns it, it's basically an actual clickbait farm now.
Periodic reminder that there are people seeking to derail discussions critical of AI and divert attention away from the actual substance of these issues.
The article accurately reflects opinions in YouTube comments and opinions of the population at large.