> "Look, the way this works is we're going to tell you it's totally hopeless to compete with us on training foundation models. You shouldn't try, and it's your job to try anyway, and I believe both of those things."
He’s only saying that he’s not incentivized to make a small and scrappy team compete with OpenAI. I don’t think you should read this as “Sam Altman says small companies will never produce valuable AI models”.
Yes, this is a really weird story, because what he actually said seems like the most banal possible thing for a tech executive to say about the prospect of competition.
He was speaking specifically about Indian startups, and he did say they should try anyway, which means he clearly thought there was some non-zero (if small) chance one could succeed.
It sort of is the most banal possible thing, and I'm sure he never would have guessed that comment would be repeated or become controversial (it was long ago). It's like Toyota saying you couldn't make a dependable car for $5,000, but hey, go try and prove me wrong. It's nothing.
> the VC asks whether a trio of super-smart engineers from India "with say, not $100M, but $10M – could build something truly substantial?"
I don't know where you got "incentivized to make a small and scrappy team" from. The question was simple and clear, and Sam's response was pretty clear as well. He was/is wrong, and is now finding out.
“Sam Altman says small companies will never produce valuable AI models”.
sure sounds like
"Look, the way this works is we're going to tell you it's totally hopeless to compete with us on training foundation models"
The DeepSeek v3 model had a net training cost of over $5M for the final training run alone, and the paper lists over 100 authors [1], meaning highly paid engineers. It's also one of a sequence of models (v1, v2, Math, Coder) trained to build the institutional knowledge needed to reach the frontier, which puts the total well above the $10M mark. It's hardly a "trio of super-smart engineers".
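For what it's worth, the ">$5M" figure is easy to sanity-check from the v3 paper's own numbers: it reports roughly 2.788M H800 GPU-hours for the full training run and assumes a $2/GPU-hour rental price. A quick back-of-envelope calculation (using those two figures from the paper; everything else here is just arithmetic):

```python
# Sanity check of the v3 paper's reported compute cost.
# Both input figures are taken from the paper itself:
# ~2.788M H800 GPU-hours, priced at an assumed $2/GPU-hour rental rate.
gpu_hours = 2_788_000
price_per_gpu_hour = 2.00  # USD

compute_cost = gpu_hours * price_per_gpu_hour
print(f"${compute_cost / 1e6:.2f}M")  # prints "$5.58M" -- compute only
```

Note this covers only the final run's rented compute, not salaries, failed experiments, or the earlier models in the sequence, which is exactly the point above.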
Incidentally, Altman's comments were in response to a question about a hypothetical startup with $10M. So you've made the argument even more cogent.
It's popular to dunk on Sam, but I don't think he's wrong here. There are now hundreds of companies that have attempted to train a foundation model, and almost every single one of them has failed to build a viable business around it (even Mistral looks to be in rough shape).
DeepSeek has done something remarkable, but they have the resources of a multi-billion-dollar quant fund at their disposal. Telling startups they have a chance is sending them to near-certain death. There are far more promising niches to fill.
My personal opinion is that many, many businesses have massive inefficiencies and could be wiped off the map if people understood those weaknesses. But there is a culture of "that CEO is so smart, there's no chance you could compete." In reality, they are just hiring random people with fancy degrees. I bet most OpenAI "AI engineers" have no clue how low-level CUDA GPU programming even works; they are just tweaking PyTorch configs and blowing billions on training.
In the past, tech got away with the above because capital meant if you hired enough people, you ended up with something valuable. But AI levels the playing field, reducing the value of capital and increasing the value of the individual contributor.
My opinion is that organizing capable people to accomplish goals is incredibly difficult, and that includes keeping a business running. Inefficiencies are unavoidable, even among engineers instead of "engineers."
Right, but traditionally the difficulty in organizing people is solved with money: just keep hiring until the product gets done. That's what we saw here; OpenAI wanted $500 billion! In reality the money wasn't necessary; what they really needed was innovation. AI will obsolete people who solve problems with brute-force money, which is the modus operandi of VC-backed startups.
The fact that the human brain can still do better on certain types of problems than SOTA LLMs while using less energy than a nice LED lightbulb continues to bolster my belief that ultimately it all comes down to the right algorithm.
That’s not to say data isn’t necessary, but rather that the algorithm is currently the critical bottleneck of further AI progress, which a $10M startup absolutely has a chance of outcompeting big tech companies on. Heck, I wouldn’t even put it past an individual to discover an algorithm that blows existing approaches out of the water. We just have to hope that whoever achieves this first has a strong sense of morality...
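A rough sense of scale behind the lightbulb comparison (the wattage figures here are my own assumptions, not from the comment: ~20 W is the commonly cited power draw of a human brain, and 700 W is the published TDP of a single NVIDIA H100, with large models typically served on multi-GPU nodes):

```python
# Back-of-envelope energy comparison: human brain vs. a GPU inference node.
# Assumed figures: brain ~20 W (commonly cited), H100 TDP 700 W (published
# spec), 8 GPUs per typical inference server.
brain_watts = 20
h100_tdp_watts = 700
gpus_per_server = 8

ratio = (h100_tdp_watts * gpus_per_server) / brain_watts
print(ratio)  # prints 280.0 -- one 8-GPU node draws ~280x a brain's budget
```

Even granting that the node serves many users at once, the per-task efficiency gap is large enough to suggest the brain is running a fundamentally better algorithm, not just better hardware.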
> Look the way this works is: we're gonna tell you 'its totally hopeless to compete with us on building foundational models' and its your job to try anyways
Stratechery said that Altman's greatest crime is seeking regulatory capture. I think that's spot on. Altman portrays himself as a visionary leader, a messiah of the AI age. Yet back when the company was still small and progress in AI had only just started, his strategic move was to suffocate innovation in the name of AI safety. For that, I question his vision, motive, and leadership.
But DeepSeek took some huge and hugely expensive models that others had paid millions to train (well, those who didn't already own the hardware mostly used investor cloud credits, but still) and distilled them, rather than training from scratch?
They trained their own V3, then trained R1-Zero from V3 purely with RL. That didn't completely work, so they then took some CoT examples and trained R1 with RL plus some SFT. You're thinking of the finetunes based on R1 outputs; those are not actually distills, and they're all far worse than the original R1. And yes, those are finetuned from other base models.
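To keep the lineages straight, here's the pipeline as I understand it from the R1 paper (the stage descriptions are my paraphrase, not the paper's exact terminology):

```python
# Sketch of the DeepSeek model lineage described above.
# Descriptions are my paraphrase of the R1 paper, not official wording.
pipeline = [
    ("DeepSeek-V3", "base model, pretrained from scratch"),
    ("R1-Zero",     "V3 + pure reinforcement learning, no SFT"),
    ("R1",          "V3 + cold-start CoT SFT, then RL"),
]
# The small 'R1-distill' checkpoints are a separate branch: other base
# models (Qwen, Llama) finetuned on R1 outputs, not part of this lineage.
for name, recipe in pipeline:
    print(f"{name}: {recipe}")
```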
But in the end your conclusion is the exact opposite of what he said, and I don't see what justifies that interpretation.
[1] https://arxiv.org/abs/2412.19437v1
https://arstechnica.com/ai/2025/01/why-the-markets-are-freak...
Being charismatic, a good talker, and not having overly strong moral principles will, unfortunately, also work most of the time.
> Look the way this works is: we're gonna tell you 'its totally hopeless to compete with us on building foundational models' and its your job to try anyways
He's clearly being glib