Well-run large companies often waste a lot of money in order to (1) hedge the risk of being left behind, (2) preserve future options in possible growth or efficiency areas, and (3) start on the long learning curves for skills and capabilities that look likely to become baseline necessities in the long run.
Bonfires of money.
Predictably. Because all three of those concerns require highly speculative action to properly address.
That doesn't make those reasons invalid. Failures are expected, especially in early days. And are not a sign they are making spurious bets, or starry eyed about industry upheavals. The minimal return is still experience gained and a ramped up institutional focus.
How many of us here speed up our overall development by coding early on new projects before we have complete clarity? Writing code we will often throw away?
Agreed. Smug dismissal of new ideas is such a lazy shortcut to trying to look smart. I'd much rather talk to someone enthusiastic about something than someone who does nothing but sit there and say "this thing sucks" every time something happens, even if the second person is incidentally right a lot of the time.
Smug acceptance of new ideas is such a lazy shortcut to trying to look smart. I'd much rather talk to someone who has objectively analyzed the concept rather than someone who does nothing but sit there and say "this thing is great" for no real reason other than "everyone is using it".
Incidentally, "everyone" is wrong a lot of the time.
While I agree in principle, someone has to make decisions about resource allocation and decide that some ideas are better than others. This takes a finely tuned BS detector and a knack for estimating technical feasibility, which I would argue is a real skill. It thus becomes subtly different to be an accurate predictor of success vs. a default cynic if 95% of ideas aren't going to work.
You put my thoughts into words better than I could.
This reminds me of the exploration-exploitation trade-off in reinforcement learning: you want to maximise your long-term profits but, since your knowledge is incomplete, you must acquire new knowledge, which companies do by trying stuff. Prematurely dismissing GenAI could mean missing out on new efficiencies, which take time to identify.
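The trade-off mentioned can be illustrated with a toy epsilon-greedy bandit policy. This is a minimal sketch of the textbook idea, not tied to any company's actual process:

```python
import random

def epsilon_greedy(estimates, epsilon=0.1):
    """Pick an arm: mostly exploit the best-known option,
    but explore a uniformly random one with probability epsilon."""
    if random.random() < epsilon:
        return random.randrange(len(estimates))          # explore
    return max(range(len(estimates)), key=estimates.__getitem__)  # exploit

# With epsilon=0 we always exploit; with epsilon=1 we always explore.
```

Setting epsilon to zero, i.e. never trying anything new, is the formal version of "prematurely dismissing" an option whose value you haven't measured yet.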
You're giving company executives way too much credit in general. I'm sure there are unicorns out there where conscientious stewards of the company's long-term health are making measured, rational choices that may pay off in a decade, but it's a tiny minority of companies. Most are run by narcissistic short-term corporate raiders whose first priority is looting the company for their own profit and second priority is cosplaying as a once-in-a-generation genius "thought leader" in a never-ending battle to beat away the nagging (and correct) thought that they're nepo-babies who have no clue what they're doing. These morons are burning money because they are stupid and/or because it benefits them and their buddies in the short-term. They burned money on blockchain bullshit, they burned money on Web 2.0 bullshit, they are burning money on AI, and they will be burning money on the next fad too. The fact that AI might actually turn out to be something real is complete serendipity; it has nothing to do with their insight or foresight. The only reason they ever look smart is because they're great at taking credit for every win, everyone else immediately forgets all their losses, and op-ed writers and internet simps all compete to write the most sycophantic adulations of their brilliance. They could start finger-painting their office windows with their own feces and the Wall Street Journal would pump out op-eds saying "Here's why smearing poop on your windows is actually a Really Brilliant Business Move made by Really Good-Looking Business Winners!" Just go back and re-read your comment but think "blockchain" instead of "AI" and you'll see clearly how silly and sycophantic it really is.
And depending on how you look at it, science itself is experimentation, but at least it mostly results in publications in the end, which may or may not be read, but at least serve as records of the areas explored.
Scientists and mathematicians often burn barrels of time on unpublished ideas, not to mention following their curiosities into random pursuits that give their subconscious the free space to crystallize slippery insights.
With their publishable work somehow gelling out of all that.
Well, at least AI is going to be better than the blockchain hype. No one knew what "blockchain" was, how it worked, or what it could be used for.
I had a very hard time explaining that once you put something in the chain, you can't easily pull it back out. If you want to verify documents, all you have to do is put a hash in a database table, which we already had.
It has exactly one purpose: prevent any single entity from controlling the contents. That includes governments, business executives, lawyers, judges, and hackers. The only good thing is every single piece of data can be pulled out into a different data structure once you realize your mistake.
Note, I’m greatly oversimplifying all the details and I’m not referring to cryptocurrency.
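For what it's worth, the "hash in a database table" approach described above really is that simple. A minimal sketch, with SHA-256 as my assumed choice of hash:

```python
import hashlib

def fingerprint(document: bytes) -> str:
    """SHA-256 digest used as a tamper-evidence fingerprint."""
    return hashlib.sha256(document).hexdigest()

# Store fingerprint(doc) in an ordinary database column; later,
# re-hash the presented document and compare the two values.
doc = b"official contract text"
stored = fingerprint(doc)
assert fingerprint(b"official contract text") == stored
assert fingerprint(b"tampered contract text") != stored
```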
> has exactly one purpose: prevent any single entity from controlling the contents.
I'd like to propose a different characterization: "Blockchain" is when you want unrestricted membership and participation.
Allowing anybody to spin up any number of new nodes they desire is the fundamental requirement, which causes a cascade of other design decisions and feedback systems (mining, proof-of-X, etc.).
In contrast, deterring one entity from taking over can also be done with a regular distributed database, where the nodes--and which entities operate them--are determined in advance.
Sure, blockchain development has always been deeply tied to ideas of open membership and participation. I like those ideas too.
But that's a poor definition of a blockchain. A blockchain is merely a distributed ledger with certain properties from cryptography.
If you spin up a private bitcoin network, it's a blockchain even if nobody else knows or cares about it. Now, are non-open blockchains at all useful? I suspect so, but I don't know of any great examples.
The wide space between 'membership is determined in advance' and 'literally anyone can make a million identities at a whim' is worth exploring, IMO.
> No one knew what “blockchain” was, how it worked, or what could be used for.
Not the blockchain itself, but the concept of an immutable, append-only, tamper-proof ledger underpinning it is a very useful one in many contexts where the question of the authenticity of datasets arises – the blockchain has given us the ledger database.
The ledger database is more than just a hash: via hash chaining it also provides cryptographic proof that the data is authentic, no ability to delete or modify a record, and the entire change history of any record. All these properties make ledger databases very useful in many contexts, especially ones where official documents are involved.
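A rough sketch of the hash chaining mentioned above. The field names and JSON serialization here are my own choices for illustration, not any particular product's format:

```python
import hashlib
import json

def _digest(record, prev):
    """Hash a record together with the previous entry's hash."""
    body = json.dumps({"record": record, "prev": prev}, sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()

def append(ledger, record):
    """Append an entry whose hash covers the previous entry's hash,
    so any later modification breaks every subsequent link."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    ledger.append({"record": record, "prev": prev,
                   "hash": _digest(record, prev)})
    return ledger

def verify(ledger):
    """Walk the chain and recompute every hash."""
    prev = "0" * 64
    for entry in ledger:
        if entry["prev"] != prev or entry["hash"] != _digest(entry["record"], prev):
            return False
        prev = entry["hash"]
    return True
```

Tampering with any earlier record makes `verify` fail, which is the append-only, tamper-evident property in miniature.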
I often feel that immutability is very much overrated and goes against the real world. A lot of the legal system is built on reverting things, so things being harder to revert is not actually a desirable property.
> but the concept of an immutable, append only, tamper-proof ledger underpinning it is a very useful one
Which is why blockchains have become such a ubiquitous technology. They're literally everywhere. Can't swing a cat without hitting a blockchain nowadays.
As long as you can't guarantee that the data you put onto a blockchain is trustworthy in the first place, whatever you put on a blockchain is not 'tamper-proof'.
Therefore the ONLY thing you can handle on a blockchain 'tamper-proof' is stuff only existing on the blockchain itself. Which means basically nothing.
And there is a second goalpost which was moved: the assumption that the blockchain itself is 'tamper-proof'. 51% attacks are real; you don't know if a country simply owns and controls a lot of nodes, and the latest rumor is that the NSA was involved in blockchain creation. You don't know if something is hidden in the system that gives one entity an edge over the others.
Yes, that kind of database was so in demand, the AWS version, "Amazon Quantum Ledger Database" was hugely successful. Oh wait, it was a flop and is being shut down....
Article doesn't understand that cutting "between five and 20 percent of support and admin processing" is really valuable, instead it seems to want to dismiss that as a dull failure.
Business process outsourcing companies are valued at $300bn according to the BPO Wikipedia page, so 5%-20% of that is $15-60bn. Even if we value all the other GenAI impact at zero, the impact on admin and support alone could plausibly justify this investment.
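The napkin math above, spelled out (figures as quoted in the comment, unverified by me):

```python
# BPO industry value and the quoted 5-20% support/admin savings range.
bpo_value_bn = 300
low = 0.05 * bpo_value_bn
high = 0.20 * bpo_value_bn
print(f"${low:.0f}bn - ${high:.0f}bn")  # $15bn - $60bn
```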
What is missing even in this article is the install base and expected failure rate of the dominant GB300 servers. The numbers I heard were "~15% annual failure, and it's not worth trying to swap/repair". That means in 5 years more than half of these installs are down. Of course they can install the NEW GX500turbo servers, which are 4x the compute but 2x more power hungry. How much will that cost? What is the hyperscaler write-down, ~$200B/yr? Better have some income to make that up. They've got only 3 years to get there.
That still means all new data centers. They aren't being built for this now, so the old ones will have to get ripped out and rebuilt (in place?) before they get the new servers. I do think they've planned the external power delivery, but not cooling or IP infra. It's a CF.
The article is right to focus on the end customer and not on the hyperscalers.
The hyperscalers are not the ones having trouble generating income. They have plenty of paying customers. They certainly understand capital depreciation and the need to refresh hardware. Premature hardware failure will be charged back to Nvidia, who are not exactly struggling for cash either.
The article author was also too tired to do any napkin math, but unfortunately that does not seem to have stopped them from confidently declaring that $40bn has been lit on fire.
BPO is a growing industry, not a shrinking one; call centers have expanded as well.
If it were true that these were replacing anything, it would be very clear in those sectors, and it isn't. The real effect of end-to-end automation from LLMs is small to negligible. The entire "boring" industry is still chugging along and growing as it did before.
> Article doesn't understand that cutting "between five and 20 percent of support and admin processing" is really valuable, instead it seems to want to dismiss that as a dull failure.
To whom? From the customer perspective, it sounds like a shittier level of service is coming, which is a kind of failure.
I'm not shocked. It reminds me a bit of the way some people talk about their personal investing. They'll talk about wins (often exaggerated) and leave out the travails and failures. Next thing you know, your friend is telling you about their new day trading plan. :)
Unfortunately, the same thing is playing out here. Nobody likes being the guy that points out the gains are incremental when everyone is bragging about their 100x gains.
And everyone in the management side starts getting, understandably, afraid that their company will miss out on these magical gains.
It is all a recipe for wild overspending on the wrong things.
$40B seems very low. I wouldn't be surprised to find that the annual corporate churn on innovation / transformation / new IT initiatives, whatever you call it, is way higher than that. There's some subset of corporate spend that's just chasing new stuff and keeping up with the hot topics.
I think there is a bubble; if it's really just $40B, maybe I'm wrong.
Yes and no. It's clear from the article that there is industrial integration. But only workers who are very highly skilled at utilizing the technology--and there are very few of those--are seeing benefits, and only managers with experience effectively utilizing it are adopting it well. Time will tell, but yes, most projects aren't going anywhere, because they make the fundamental error of thinking that it's equivalent to a human in terms of intelligence.
There is no real source for this data other than "executives" who only think in numbers, and of course those are the types that collude with their CFOs to come up with great ways to get giant tax write-offs. I would imagine they are not "burning billions"; they are coming up with new ways to describe how they ALREADY burned billions.
Do you have reason to believe that the MIT researchers didn't interview who they say they did? Or that those people don't have the credentials the researchers claim they do?
It's possible the study is flawed, or is more limited than the claims being made. But some evidence is necessary to get there.
I can just speak for myself, but my F500 has been setting money on fire chasing AI. Truly terrible ideas are being pursued just for the sake of movement. Were “AI” not part of the pitch, it would have been immediately tossed in the bin as a waste of time.
Ideas which are not terrible instead have awful ROIs. Nobody has a use case beyond generating text, so there are lots of ideas about automating some text generation in certain niches, without appreciating that those bits represent 0.1% of the business's ventures. Yet they are technically feasible, so full steam ahead.
Same here. We are apparently obsessed with chatbots that no one asked for. If I brought up the same idea minus the AI a couple years ago, management would have been very confused as to why I wanted to build things no one asked for.
The funniest thing is that management has no idea how AI works so they're pretty much just Copilot Agents with a couple docs and a prompt. It's the most laughable shit I've ever seen. Management is so proud while we're all just shaking our heads hoping this trend passes.
Don't get me wrong, AI definitely has its use cases, but tossing a doc about company benefits into a bot is about the most worthless thing you could do with this tech.
I know first hand companies that have replaced parts of CS with Elevenlabs with measurable wins in customer acquisition and satisfaction. Generally speaking, I agree that a lot of people are chasing something that doesn't exist, but there are real use cases in the current environment.
The arms race to throw money at anything that has "AI" in its business name is the same thing I saw back in 2000. No business plan, just some idea to somehow monetize the internet, and VCs were doing the exact same thing: throwing tons of good money after bad.
Although you can argue this is different in a lot of ways, it just feels like the same thing. The same energy, the same half-baked ideas trying to get a few million to get something off the ground.
Yeah the market of 2000 crashed and burned but we were left with a bunch of dark fiber and developed tech that laid the foundation for the next decade.
I would say it's a bit more like 1997. I was young but kept thinking, "That which is good is not novel, and that which is novel isn't good." That said, we had the giant crash, but based on the Dow Jones, the low in 2002 was the same as August 1998; you can really look at the lost decade to see the true impact of the bubble.
The question isn't if there will be a crash - there will - but there are always crashes. And there are always recoveries. It's all about "how long." And what happens to the companies that blow the money and then find out they can't fire all their white collar workers?
Yes, of course. Incompetent leaders do incompetent things.
No argument or surprise.
The point I made was less obvious. Competent leaders can also/often appear to throw money away, but for solid reasons.
I've never heard of such a thing
It only moved the goal post.
Klarna also cut costs by replacing support with AI. It didn't work well, so they had to rehire.
> And what happens to the companies that blow the money and then find out they can't fire all their white collar workers?
(Or, what happens if they find out they can?)