Nvidia, Microsoft, Apple, Alphabet/Google, Amazon, Meta/Facebook, Broadcom, Tesla, Berkshire Hathaway, and Walmart.
A commonality among most of them (and to a lesser extent all of them) is that they write software.
If a company not on the list, like Ford, has an F-150 truck come off the assembly line, some of that $40,000 cost is in the capital expenditure for the plant, any automation it has, the software in the truck, and so on. But Ford has to pay for the aluminum, steel, and glass for each truck, and for thousands of workers on the assembly line to attach and assemble parts for each one.
Meanwhile, at Apple a team writes iOS 18, mostly based on iOS 17, and it ships with the devices. Once it is written, that's it for what goes out on the iPhone 16, apart from some additional tweaks up through iOS 18.6. The relatively small team working on iOS sees its work ship with tens of millions of units, and that work is not tied to the production process the way the assembly-line workers' is to each F-150. If some inessential feature isn't done as a phone is being made, it gets punted to the next release. That can't be done with an F-150 truck.
Software done properly is simply much more profitable than non-software work, and this list shows it. Yes, some of the latest boost is due to AI hype (which may or may not come to fruition in the near future), but these companies reached this position before any of that.
I was watching a speech by Gabe Newell about the (smaller) software industry of the 1990s and the idea, back then, of outsourcing to save on salary costs. He said he and his partners went the other way and decided to look for the best, most expensive programmers they could find, and Valve has had great success with that. Over the past two and a half years we've seen a lot of outsourcing to cheaper foreign labor, FAANG layoffs (including Microsoft's recent Xbox layoffs), and more recently attempts to lower costs by having software produced by less experienced vibe coders using "AI". I have seen for myself at Fortune 100 companies, especially non-tech ones, that the lessons of the late-1960s NATO software engineering conferences, and the lessons Fred Brooks learned managing the OS/360 project, still haven't sunk in. Software can be a very, very profitable enterprise, and it is sometimes done right, but companies often still run projects the way such projects were attempted in the early 1960s. Even attempted fixes like agile and scrum get twisted into window dressing over the old-fashioned corporate way of doing things.
I've been reading this website for probably 15 years, and it's never been this bad. Many threads are completely unreadable; all the actually educated takes are on X. It's almost as if there has been a talent drain.
People here were pretty skeptical about AlexNet when it won the ImageNet challenge 13 years ago.
What could actually drag Nvidia down and make them spend decades in the dark like Cisco still does? So far the two things I've come up with are: (a) general disillusionment in AI and companies not being able to monetize enough to justify spending on GPUs. (b) Big companies designing their own chips in-house lowering demand for Nvidia GPUs.
I don't think Nvidia can counter (a), but can they overcome (b) by also offering custom chip design services instead of insisting on selling a proprietary AI stack?
The monetary push is very LLM-based. One thing being pushed that I am familiar with is LLM-assisted programming; LLMs are being pushed to do other things as well. If LLMs don't improve further, or if companies don't see a monetary benefit from using them in the short to medium term, that would drag Nvidia down.
Nvidia has a lot of network effects. Probably only Google has some immunity to that (with its TPUs). I doubt Nvidia will have competition in training LLMs for a while. It is possible a competitor could start taking market share on the low end for inference, but even that would take a while. People have been talking about AMD competition for over two years, and I haven't seen anything that even seems like it might have potential yet, especially on the high end.
Nvidia would need to move on the order of 4,000,000,000 units to hit $4T in revenue, and more than triple that to realize $4T in profits. Even if the average per-unit price is 2-3x my estimated $1k, as near as I've been able to tell they "only" move a few million units a year for a given SKU.
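To spell out that back-of-the-envelope math (the $1k average selling price and the ~30% net margin are assumptions of this comment, not reported Nvidia figures):

```python
# Back-of-the-envelope: how many units would $4T of revenue or profit imply?
# ASP and margin below are illustrative assumptions, not reported figures.
TARGET = 4e12        # $4 trillion
asp = 1_000          # assumed average selling price per unit, in dollars
net_margin = 0.30    # assumed net margin

units_for_revenue = TARGET / asp                  # 4 billion units
units_for_profit = TARGET / (asp * net_margin)    # ~13.3 billion units

print(f"{units_for_revenue:.1e} units for $4T revenue")
print(f"{units_for_profit:.1e} units for $4T profit")
```

At a 30% margin, the profit target needs more than triple the unit volume of the revenue target, which is where the "more than triple that" above comes from.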
I am struggling to work out how these markets get so inflated that they pin a company's worth to some astronomical figure (some 50x total equity, in this case) that seems wholly untethered to any material potential.
My intuition is that, absent the rapid, generationally transformative advances in tech and industry largely seen in the latter half of the 20th century (quickly followed by smartphones and social networking), stock market investors seem content to force similar patterns onto any marginally plausible narrative that can provide the same aesthetics of growth, even if the most basic arithmetic thoroughly perforates it.
That said, I nearly went bankrupt buying a used car recently, so this is a whole lot of unqualified conjecture on my part (but not for nothing, my admittedly limited personal wealth isn’t heavily dependent on such bets).
A year ago both its trailing and forward P/E were higher. So the stock is relatively a bargain compared to what it was a year ago.
The price implies that revenues and profits are expected to continue to grow.
> My intuition is that, absent the rapid, generationally transformative advances in tech and industry largely seen in the latter half of the 20th century (quickly followed by smartphones and social networking), stock market investors seem content to force similar patterns onto any marginally plausible narrative that can provide the same aesthetics of growth
I wouldn't disagree with this.
So gamers have to pay much more and wait much longer than before, which they resent.
Some YouTubers make content that profits from this resentment, so they play fast and loose with the underlying reasons in order to make gamers even more resentful. Nvidia has "crazy prices," they say.
But they're clearly not crazy. $2,000 GPUs appear in quantities of 50+ from time to time at stores here, and they sell out in minutes. Lowering the prices would be crazy.
Also, there is some fear that 5090s could cannibalize data center hardware in some respects: my desktop has a 3060, and I have trained models locally, run LLMs locally, and so on. It doesn't make business sense right now for Nvidia to meet consumer demand.
Right, the current paradigm of requiring an LLM to do arbitrary-digit multiplication will not work, and we shouldn't need it to. If your task is "do X" and it can be reliably accomplished with "write a Python program to do X", that's good enough as far as I'm concerned. It's preferable, in fact.
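A minimal sketch of that idea. The "model output" here is hard-coded as a stand-in for a real LLM call, and the `solve` interface is a hypothetical convention, but it shows why running generated code beats emitting digits token by token: Python's arbitrary-precision integers make the multiplication exact.

```python
# Instead of asking an LLM to emit the digits of a product token by token,
# have it emit a program and execute that. (In practice the generated code
# should be sandboxed; exec() on untrusted source is unsafe.)

def run_generated_program(source: str, task_input: dict):
    """Execute model-emitted code and call its solve() entry point."""
    namespace: dict = {}
    exec(source, namespace)
    return namespace["solve"](**task_input)

# A stand-in for what an LLM might return for "multiply a and b":
generated = """
def solve(a: int, b: int) -> int:
    return a * b  # exact for arbitrarily large integers
"""

result = run_generated_program(generated, {"a": 123456789, "b": 987654321})
```

The reliability comes from the interpreter, not the model: once the program is right, every input of any size is handled exactly.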
Btw, Chollet has said basically as much. He calls them "stored programs", I think.
I think he is onto something. The right atomic unit for approaching these problems is probably not the token, at least at first. Higher-level abstractions should be refined into specific components, similar to the concept of diffusion.
One is: Google, Facebook, OpenAI, Anthropic, DeepSeek, etc. have put a lot of capital expenditure into training frontier large language models, and are continuing to do so. The current bet is that growing the size of LLMs, with more or perhaps even synthetic data, plus some minor breakthroughs (nothing as big as the AlexNet deep learning breakthrough, or transformers), will pay off for at least the leading frontier model. Similar to Moore's law for ICs, the bet is that more data and more parameters will yield a more powerful LLM, without much more innovation needed. So the question is whether the capital expenditure behind this bet will pay off.
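The "more data and more parameters" bet can be illustrated with the published Chinchilla fit (Hoffmann et al., 2022), which models pretraining loss as a function of parameter count N and token count D. The constants below are the approximate published values; this is a toy illustration of the shape of the bet, not a prediction for any particular frontier model.

```python
# Chinchilla-style scaling law: loss falls as parameters (N) and
# training tokens (D) grow, but with diminishing returns.
def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    E, A, B = 1.69, 406.4, 410.7   # fitted constants (approximate)
    alpha, beta = 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# Doubling both params and data lowers predicted loss, but each
# doubling buys less than the last -- the crux of "just scale it".
small = chinchilla_loss(70e9, 1.4e12)    # roughly Chinchilla-sized run
large = chinchilla_loss(140e9, 2.8e12)   # 2x params, 2x tokens
```

Note the irreducible term E: under this fit, no amount of scaling alone drives loss below it, which is why the bet quietly assumes the remaining gap is where the value lives.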
Then there's the question of how useful current LLMs are, whether we should expect breakthroughs at the level of AlexNet or transformers in the coming decades, and whether non-LLM neural networks will become broadly useful: text-to-image, image-to-text, text-to-video, video-to-text, image-to-video, text-to-audio, and so on.
So there's the business-side question: whether heavy capital expenditure on training a frontier model will be worth it for the winner in the next few years, with the method being more data (perhaps synthetic) and more parameters rather than major innovation. Then there's every other question around this. All the questions may seem important, but the first is the one that matters to business, and it is connected to much of the capital being spent on all of this.
It's very clear that the ramp for GPUs continues.