No company can afford to spend $100B on something that will be obsolete a year later; you just can't recover the investment from sales that quickly.
$100m is manageable: if you've got 100m paying subscribers or companies using your API for a year, you can recoup the costs, but there aren't many companies with 100m users to monetise. $1B feels like it's pushing it; only a few companies in the world can monetise that, and realistically it's about lasting through the next round to be able to keep competing, not about making the money back.
$100B though, that's a whole different game again. That's like asking for the biggest private investment ever made, for capex that depreciates at $50B a year. You'd have to be stupid to do it. The public markets wouldn't take it.
Investing that much in hardware that depreciates over 5+ years and is theoretically still usable at the end, maybe, but even then the biggest companies in the world are still spending an order of magnitude less per year, so the numbers end up working out very differently. Plus that's companies with 1B users ready to monetise.
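To make the depreciation numbers concrete, here's a quick sketch (straight-line write-offs are an assumption; the 2-year and 5-year horizons are the ones implied above):

    # Straight-line depreciation of $100B of capex under the two horizons mentioned above.
    capex = 100e9
    for years in (2, 5):
        print(f"{years}-year write-off: ${capex / years / 1e9:.0f}B of depreciation per year")
    # 2-year write-off: $50B of depreciation per year
    # 5-year write-off: $20B of depreciation per year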
The business model here is the same as semiconductor fabrication/design. 2022 kick-started the foundation model race: teams were readily able to raise $5-25M to chase foundation models. In early 2024, several of those teams began to run out of money as they realized that competitive modeling efforts were in the $1-10 billion range.
If a $100 billion training run produces the highest-quality model in the land across all metrics and capabilities, that will be the model that's used; at most there would be 1-2 other firms willing to spend $100 billion to chase the market.
This. The seemingly never-ending race for foundation models only works as long as companies can afford it. If one of them spends $100B+, it will be a long time before compute catches up to the point that a competitor could reproduce it on a reasonable budget. This is essentially the race for who's going to own AGI, and it shouldn't be surprising that people are willing to spend these amounts.
> Investing that much in hardware that depreciates over 5+ years and is theoretically still usable at the end, maybe
Isn’t that exactly what’s happening?
A $300k 8x H100 pod with a 5kW power supply burns roughly $6.5k per year in power at $0.15/kWh. The majority of the money is going to capital equipment, for the first time in the software industry in decades.
These top-of-the-line chips also last much longer in the depreciation game. The A100 was released in 2020, but cloud providers still have trouble meeting demand and charge a premium for them.
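A quick back-of-envelope check of those numbers (a sketch assuming 24/7 full-load operation; the $300k price, 5kW draw, and $0.15/kWh rate are the figures quoted above):

    # Annual electricity cost vs. purchase price of an 8x H100 pod, running at full load 24/7.
    pod_price_usd = 300_000   # quoted hardware cost
    power_draw_kw = 5         # quoted power-supply rating
    price_per_kwh = 0.15      # quoted electricity rate

    annual_kwh = power_draw_kw * 24 * 365            # 43,800 kWh
    annual_power_cost = annual_kwh * price_per_kwh   # ~$6,570
    print(f"Annual power cost: ${annual_power_cost:,.0f}")
    print(f"Share of purchase price: {annual_power_cost / pod_price_usd:.1%}")  # ~2.2%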
That’s true for AI, but it is not the right way to think about AGI.
For AGI, the bet is that someone will build an AI capable enough to automate AI development. Once we get there it will pay for itself. The question is what the cost-speed tradeoff to get there looks like.
Pay for itself? Who will pay for this? I don't think you realize how much $100B is. To put it in perspective, a cutting-edge fab costs almost $10B (TSMC), and only three companies can barely afford that.
I agree with you. Large tech corporations are making big bets to reach AGI first. For example, if you are the CEO of Google, do you want Microsoft or Meta to achieve AGI first?
This seems less like doing business as usual and more like betting big to be part of something really transformative.
For $100B they would probably want a realistic description of how they get to AGI. That's a bit too much money for the hand-wavy answers we have right now about the path from LLMs to AGI (which doesn't even have a great definition).
it may never happen, especially with this current approach
at which point you've burnt hundreds of billions of dollars, emitted millions of tonnes of CO2 and all you've got out of it is a marginally better array of doubles
Automating "development" does not necessarily lead to AGI. An LLM could make minor efficiency improvements all day long and still not change the fundamental approach.
I agree, and I don't think any company would make that investment directly. Nvidia selling to Microsoft, Microsoft renting to OpenAI: I'm sure you could make that add up to $100B on paper. In the long run the economics are likely much more complicated and consist of "agreements worth $x".
Even if they did, they would be the largest target for hackers and corporate espionage. I would find it hard to believe that they would get any sort of good return on this before it was all over the internet, or at least in the hands of several competitors.
This is the Anthropic CEO talking up his company's capital needs to the Norwegian Sovereign Wealth Fund (Norges Bank Investment Management) and trying to justify some absurd $100bn valuation.
Yes. The release of GPT-5 will make or break the AI movement. If the capabilities are not another quantum leap, it will become clear that scaling laws are not everything, and these investments will be unsustainable by any economic metric you use.
> If the capabilities are not another quantum leap
While I don't disagree 100%, my question to you is:
Who or what says this is the case, and why? GPT-3.5 was released and made popular "to the masses" not too long ago. Where do you feel the pressure for a quick quantum leap is coming from?
Well, I guess the question I have is, what exactly does he mean by the "cost to train"? As in, just the cost of the electricity used to train that one model? That seems really excessive.
Or is it the total overall cost of buying TPUs / GPUs, developing infrastructure, constructing data centers, putting together quality data sets, doing R&D, paying salaries, etc. as well as training the model itself? I could see that overall investment into AI scaling into the tens of billions over the next few years.
I could see the US subsidizing most of that $100B, just because they can, and more importantly because it would be the kind of tactical advantage needed to make sure US tech companies stay relevant in a world where there's a growing desire to break away from them in favor of homegrown solutions.
What will the benefit be of more expensive models? More facts, because it's consumed more information? More ability to, say, adjust writing style? Or is this all necessary just to filter out the garbage recycled AI content it's now consuming?
Right around the time GPT-4 was first announced, OpenAI published a paper that basically said that training can "just keep going" with no obvious end in sight. Recently, Meta tried to train a model 75x as long as is naively optimal, and it just kept getting better.
Better in this case means some combination of "fewer errors for the same size" and/or "bigger and smarter". Fundamentally, they're still the same thing, just more and better.
Unfortunately, the scaling is (roughly) logarithmic. So for every 10x increase in scale you get a +1 better model. Scaling up 1,000x gets you just a +3 improvement, and so on.
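A minimal sketch of what "+1 per 10x" looks like if you treat capability as growing with the log of spend (the $100M baseline and the unit of "better" are arbitrary assumptions, purely for illustration):

    import math

    # Toy model of "roughly logarithmic" returns: each 10x in spend adds one
    # arbitrary unit of capability relative to an assumed $100M baseline run.
    def capability_gain(spend_usd, baseline_usd=100e6):
        return math.log10(spend_usd / baseline_usd)

    for spend in (100e6, 1e9, 10e9, 100e9):
        print(f"${spend / 1e9:6.1f}B spend -> +{capability_gain(spend):.0f} over the baseline")
    # $   0.1B spend -> +0 over the baseline
    # $   1.0B spend -> +1 over the baseline
    # $  10.0B spend -> +2 over the baseline
    # $ 100.0B spend -> +3 over the baseline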
And what, exactly, is the ROI on "better"? Who cares if the model is better; is it $100B better? Who is going to buy these services, and what consumer will pay for it?
> A $300k 8x H100 pod with a 5kW power supply burns roughly $6.5k per year in power at $0.15/kWh.
NVIDIA claims 10.2kW for a DGX H100 pod: https://docs.nvidia.com/dgx/dgxh100-user-guide/introduction-....
Your point still stands, though: power is a fraction of the cost.
The bigger issue is power + cooling and how many units are needed to train the better models.
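Same back-of-envelope as earlier, but with NVIDIA's 10.2kW figure (still assuming 24/7 full load at $0.15/kWh and the ~$300k hardware price quoted upthread):

    # Annual power cost at the 10.2 kW DGX H100 figure, 24/7 at $0.15/kWh.
    annual_power_cost = 10.2 * 24 * 365 * 0.15   # ~$13,400 per year
    print(f"~${annual_power_cost:,.0f}/yr vs ~$300,000 of hardware "
          f"({annual_power_cost / 300_000:.1%} of capex per year)")  # ~4.5%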
> And what, exactly, is the ROI on "better"? Who is going to buy these services, and what consumer will pay for it?

Bleed investors dry before the next fad pops up.
A G650 to fly to your 85m yacht in the Med doesn't come cheap.