> “Some large companies’ pilots and younger startups are really excelling with generative AI,” … “It’s because they pick one pain point, execute well, and partner smartly with companies who use their tools.”
Everyone victory lapping this as a grand failure should pay attention to the above snippet.
I think that's a bit too defensive. The reasonable take has been that AI is definitely a game changer, like the internet was, but it was still in a bubble because people extrapolated the S-curve as though it were an exponential explosion, just like they did with the internet.
So yeah, targeted, well-thought-out use cases that are handled well by LLMs will deliver value, but it won't replace developers or anything like that, which is what people with barely an understanding of the tech's limitations have been claiming.
OpenAI hasn't "internally achieved" AGI. That's what people are calling bullshit on.
There is also the cost of inference, which has been made artificially cheap. A lot of "game-changing workflows" may end up being economically too expensive to maintain if the true cost of that compute snaps back.
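To put toy numbers on that (every figure below is a made-up placeholder, not real vendor pricing), a workflow that clears a healthy margin at subsidized token prices can flip to a loss if the true cost is severalfold higher:

```python
# Toy unit economics for an LLM-backed workflow.
# All numbers are hypothetical placeholders, not real vendor pricing.

TOKENS_PER_TASK = 20_000   # prompt + completion tokens per task
TASKS_PER_MONTH = 50_000
VALUE_PER_TASK = 0.25      # dollars of labor each task saves

def monthly_margin(price_per_1k_tokens: float) -> float:
    """Net monthly value of the workflow at a given token price."""
    cost = TOKENS_PER_TASK / 1_000 * price_per_1k_tokens * TASKS_PER_MONTH
    value = VALUE_PER_TASK * TASKS_PER_MONTH
    return value - cost

for price in (0.002, 0.01, 0.03):  # subsidized price vs. "true cost" scenarios
    print(f"${price}/1k tokens -> net ${monthly_margin(price):,.0f}/month")
```

With these assumed numbers the same workflow nets about +$10,500/month at the subsidized price and roughly -$17,500/month at the highest "true cost" scenario.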
It’s like everyone started thinking being an influencer is the actual job as opposed to solving problems via automation. Like what is software if it’s not that?
I genuinely think it’s because they are invested in or otherwise making money off the ecosystem, but it really only pans out if they succeed at selling it. Kind of like the rust drones
Why would everyone do a victory lap if they are losing time and money?
Software developers commenting on HN and elsewhere routinely focus on majorities, e.g., "80/20" memes, references to Zipf's Law, etc., and conclude without hesitation that if a small minority, say 5%, of software users do not follow a pattern that a large majority, say 95%, follow, the minority can be safely disregarded.
Is it really surprising that people reading the MIT report might focus on the 95% instead of the 5%?
IMO, the report is mostly about the 5%, but as it happens, people care about majorities like the 95%.
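For what it's worth, the 80/20 intuition those memes lean on is easy to reproduce; a quick sketch (assuming a Zipf exponent of 1, purely for illustration) shows how heavily the head dominates and how tempting it is to wave away the tail:

```python
# How much of the total mass sits in the top 20% of a Zipf distribution?
# The exponent s = 1 is an assumption, purely for illustration.

N = 1_000
weights = [1 / rank for rank in range(1, N + 1)]  # Zipf with s = 1
total = sum(weights)

head = sum(weights[: N // 5]) / total  # share held by the top 20% of items
print(f"top 20% of items hold {head:.0%} of the mass")
print(f"bottom 80% hold {1 - head:.0%} -- the part that gets disregarded")
```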
Definitely agree that it's not a magic bullet; the hype is huge and a bubble burst is quite possible.
On the other hand, its ability to eliminate toilsome work in a variety of areas (it can generate a basic legal contract as well as a basic Rails app) is pretty astounding. There are many other industries besides software dev where tools that can understand and communicate in human language and context could be totally transformative, and they have barely begun to look into it. I think this is where startups should be focused.
I mean, “it works occasionally, in extremely restrictive circumstances” could be said of nearly any previous tech bubble (crypto may be the one example that just never really delivered anything much at all); this even works for the _previous_ AI bubbles. Expert systems are still, slightly, a thing, say.
LLMs are receiving a level of investment that appears to be based on them being world-changing, and that just doesn’t seem to be working out.
They're world-changing beyond doubt for one industry in particular: scams, fake news, propaganda, forum bots. That industry has evolved beyond recognition.
We just received a call at work using the voice of the head of accounting.
I really hope the good of all the other uses offsets the harm done.
> Researchers at MIT published a report showing that 95% of the generative AI programs launched by companies failed to do the main thing they were intended for
I think everyone had a gut feel for something along those lines, but those numbers are even starker than I would've imagined. Granted, many (most?) people trying to vibe code full apps don't know much about building software, so they're bound to struggle to get it to do what they want. But this quote is about companies and code they've actually put into production. Don't get me wrong, I've vibe coded a bunch of utilities that I now use daily, but 95% is way higher than I would've expected.
Read the paper. The media coverage is leaving out a lot of context. The paper points out problems like leadership failures on those efforts, lack of employee buy-in (potentially because employees use their personal LLM), etc.
A huge fraction of people at my work use LLMs, but only a small fraction use the LLM the company provided. Almost everyone is using a personal license.
> Despite the rush to integrate powerful new models, about 5% of AI pilot programs achieve rapid revenue acceleration; the vast majority stall, delivering little to no measurable impact on P&L. [1]
"We wanted to make money with it, but we didn't immediately make a lot of money" feels very different from "the project failed to deliver what it set out to".
If AI improved as quickly as hardware used to, then most of these efforts would succeed, since what was on the horizon of plausibility one year would be very easy to do a year or two later.
But that improvement didn't come; the technology plateaued, so most of these efforts failed.
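The hardware comparison is easy to put numbers on. Under a Moore's-law-style doubling (18 months assumed here, just for illustration), a task needing 10x today's capability is about five years out; under a plateau it never arrives:

```python
# If capability doubled every 18 months (a Moore's-law-style assumption),
# how long until a task needing 10x today's capability becomes easy?
import math

DOUBLING_MONTHS = 18
gap = 10  # the task needs 10x current capability

months = DOUBLING_MONTHS * math.log2(gap)
print(f"~{months:.0f} months (~{months / 12:.1f} years) under steady doubling")
print("under a plateau: never -- pilots that bet on improvement just fail")
```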
It's roughly in line with Sturgeon's Law: 90% of everything is crap.
Except in this case, where AI can enable people with absolutely no experience in some area to produce something that at least superficially can seem plausibly viable, it's no surprise that the percentage of crap is even higher.
This is an interesting point. As LLMs are not intelligent and are trained on what already exists, their output is necessarily mediocre if not bad. We have simply found a way to increase the amount of crap in the digital world, to the point that Sturgeon's 90% will become a very low estimate.
I’ve heard the story that SQL was originally sold as a language that non-tech people could use to query databases. It’s mostly an utter failure at that, yet still immensely popular.
I’m expecting a similar future for AI: it will not deliver the “deprecating devs” part, but it will still be a useful and ubiquitous tool.
Yeah, it was a 4GL. Roughly since the 1950s, every ten years or so someone has come along with “this will allow unskilled people to write programs, and destroy those awful programmers forever” (_COBOL_ was originally basically marketed as this!). SQL is by _far_ the most successful thing to actually come out of this recurrent trend, in that it is actually useful, and unskilled people can actually use it to an extent. Most of the rest of it, 4GLs and 5GLs and drag-and-drop programming and no-code and… was just kinda useless; at most it made for a good demo, but attempts to make actual workable maintainable software with it broke down fast.
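A tiny illustration of both halves of that, sketched with Python's built-in sqlite3 and a throwaway table: the first query is the "readable English" face that lets non-programmers get somewhere; the second is the kind of thing that usually stops them.

```python
# The "readable English" face of SQL vs. the part that stops non-programmers.
# Throwaway in-memory table, just for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE orders (id INTEGER, customer TEXT, total REAL);
    INSERT INTO orders VALUES (1, 'acme', 120.0), (2, 'acme', 80.0),
                              (3, 'globex', 300.0);
""")

# The 4GL promise: close enough to English for a non-programmer to cope.
print(db.execute(
    "SELECT customer, SUM(total) FROM orders GROUP BY customer"
).fetchall())

# Where it tends to break down: grouping semantics, NULLs, joins -- e.g.
# "customers whose biggest order beats their average order":
print(db.execute("""
    SELECT customer FROM orders
    GROUP BY customer
    HAVING MAX(total) > AVG(total)
""").fetchall())
```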
You make "utilities"; these initiatives are about replacing complex processes, business-to-engineer communication, and the engineers themselves with AI.
E.g., I waste a lot of time converting business requirements into a proprietary rule language. They should be simple tasks, but the requirements are freaky, the language is limited, and I often need to look up the internals of the systems that produce the data the rules act on.
My boss's boss currently wants me to replace my work with AI. It cannot work. It's set up for failure.
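A toy version of that gap (the rule syntax and field names below are invented; the real language is proprietary): the requirement reads as trivial, but the correct rule hinges on an internal detail the requirement text never mentions.

```python
# Invented stand-in for a proprietary rule language, illustration only.
# Requirement: "flag orders over 100 EUR".
# Internal detail the requirement never states: the upstream system
# exports amounts in *cents*, as strings.

def naive_rule(amount):
    # What you get by translating the requirement text literally.
    return amount > 100

def correct_rule(raw_amount):
    # What actually works once you know how the exporting system behaves.
    return int(raw_amount) > 100 * 100

raw = "2500"  # a 25 EUR order, as the system actually exports it
print("naive rule flags it:", naive_rule(float(raw)))   # True -- wrong
print("correct rule flags it:", correct_rule(raw))      # False -- right
```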
5% saying it's helping their company is roughly in line with the lizardman constant. [1] There will always be people who will never admit a thing didn't work out as planned, and those who just like to answer sarcastically. It's not unreasonable to assume that, if this report is even remotely accurate, pretty much 100% of people are finding AI fairly disappointing.
[1] https://en.m.wiktionary.org/wiki/Lizardman%27s_Constant
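A quick simulation of that point (the 4% noise rate is the usual Lizardman figure; the respondent count is an arbitrary assumption): with a true success rate of zero, noise alone produces headline numbers in the ~5% range.

```python
# If the true "AI helped our P&L" rate were 0%, how often would noise
# answers alone produce a ~5% headline figure? 4% noise rate assumed.
import random

random.seed(0)
NOISE_RATE = 0.04
RESPONDENTS = 300

rates = [
    sum(random.random() < NOISE_RATE for _ in range(RESPONDENTS)) / RESPONDENTS
    for _ in range(10_000)
]
print(f"mean observed 'success' rate: {sum(rates) / len(rates):.1%}")
print(f"surveys reporting >= 5%: {sum(r >= 0.05 for r in rates) / len(rates):.0%}")
```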
I mean, I assume the blogspam industry is thrilled with it…
For the time being, and the foreseeable future, LLMs’ sweet spot seems to be low-grade translation and ultra-low-grade, bottom-barrel ‘content generation’. Which is… not nothing, but also not what you’d call world-changing. As a number of people said, there probably is an industry here; it’s just that it’s worth on the order of tens of billions, not trillions as the markets currently appear to believe.
(Some people will claim it’s a great programming tool. Personally sceptical, but even if it’s the greatest, most amazingest programming tool ever, well, “we might be even more important than Borland and JetBrains were” is not going to thrill the markets too much. Current valuations are built on mass-market applicability, and if that doesn’t show up soon there will be trouble.)
That's a quote from Ed Zitron, whose entire schtick is that AI is a scam. It's independent of the article itself, and in particular the list of bearish observations near the top, all of which are independently verifiable.
A few HN members did submit the MIT report [PDF]^1, but HN discussion has instead centered around articles written about the report and the market's apparent reaction to it.
1. For example,
https://news.ycombinator.com/item?id=44941374
https://news.ycombinator.com/item?id=44972204
https://news.ycombinator.com/item?id=44978557
I work at an ewaste recycling company. I expect we'll see some high end Nvidia Tesla GPUs coming through, just like the Ant Miners (Bitcoin ASICs) a few weeks ago.
>Researchers at MIT published a report showing that 95% of the generative AI programs launched by companies failed to do the main thing they were intended for — ginning up more revenue.
AI startups were meant to solve problems in novel ways, not to amass revenue.
I smile with glee when these people fail. The fundamental issue of modern capitalism is that it's a coercive and exploitative system; true believers (who are in charge, unfortunately) ignore this and think money and value are the same thing.
Let me show you what I mean: let's say someone runs a grocery, and they want to make it more profitable. After looking at the value chain, they conclude the person growing the lettuces makes 10% of the profit, logistics makes 40%, and retail 50%.
So they conclude that the best way to improve the business is to optimize the retail side.
Then you walk into the store and see the tiny withered lettuce on the gleaming fancy shelves.
If they had decided to focus on where the value is created, and helped the farmer grow better groceries, everybody would've been happy.
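The flawed reasoning is literally one line of arithmetic, using the shares from the example above: rank the stages by profit share and "optimize" the biggest one, regardless of where the value originates.

```python
# The profit-share logic from the grocery example: pick the stage with
# the biggest slice of profit, ignore where the value is actually created.
profit_share = {"farming": 0.10, "logistics": 0.40, "retail": 0.50}

target = max(profit_share, key=profit_share.get)
print(f"'optimize' {target}")  # retail -- hence fancy shelves, withered lettuce
```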
In capitalism everybody specializes in something: retailers in trading, logistics in storage and transport, and producers in producing. The most efficient way to improve all of that is a vertically integrated business where you do all of the above. In my country we had one big retailer like that, but it became so huge that in the end it imploded. And yes, I agree that capitalism is exploitative, and I think that instead of working for a salary people should work for equity. I would certainly be more motivated if I owned a piece of the business instead of merely getting a salary.
If you're alluding to the fact that a lot of startups run at a loss to capture as much of the market as they can, that is true. But I don't think that's the point here.
Revenue is probably the wrong measure; it should be profit. And a startup that doesn't somehow turn into profit for its _customers_ usually doesn't see much traction.
They can either increase revenue (there's a lot of AI sales tools that promise just that), or, more commonly, reduce costs, which also increases profits. If it saves time or money, it reduces costs. If it doesn't do either of these things, you'd have to really enjoy the product to still pay for it.
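The cost-reduction case is simple arithmetic (seat price, hours saved, and hourly rate below are all placeholder assumptions): the tool pays for itself only if the time it saves is worth more than the subscription.

```python
# Does the tool actually reduce costs? All figures are placeholder assumptions.
SEAT_PRICE = 30.0    # dollars per user per month
HOURS_SAVED = 2.0    # hours saved per user per month
LOADED_RATE = 60.0   # dollars per employee-hour

net = HOURS_SAVED * LOADED_RATE - SEAT_PRICE
verdict = "keep it" if net > 0 else "you'd have to really enjoy it"
print(f"net saving per seat: ${net:.0f}/month -> {verdict}")
```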
Fixes one pain point well. Can't really be applied to everything.
So it's just another tool, not a magic bullet like it is being marketed.
We're a few years in. It takes time to figure things out and see returns.
The web and dot-com boom and bust still led to several trillion-dollar companies, eventually.
AI will transform my industry, but not overnight. My employer is within that 95%... but won't be forever.
It was all the hype at the time, like LLMs are now. Most of them died because it was a bad idea.
And the reason we still use some, like SQL, is not because of the syntax.
Junior developers require guidance but are still producing value. And with good guidance, they will do amazing work.
Reading stuff like this makes me question the entirety of the article.
Edit: I mean the one discussed here, and in countless other recently submitted articles:
95% of Companies See 'Zero Return' on $30B Generative AI Spend - https://news.ycombinator.com/item?id=44974104 - Aug 2025 (413 comments)
95% of generative AI pilots at companies are failing – MIT report - https://news.ycombinator.com/item?id=44941118 - Aug 2025 (167 comments)
95 per cent of organisations are getting zero return from AI according to MIT - https://news.ycombinator.com/item?id=44956648 - Aug 2025 (14 comments)
https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Bus...
Is it mostly rarer and more expensive materials like gold/lithium, or is it mainly bulk plastic and aluminium?