Investing billions over the course of CUDA and its predecessors. The whole concept of using GPUs not just for graphics but for high-performance, highly parallel computing started before the 2000s. CUDA was announced in 2007, and much of the work dates back to Cg in the early 2000s. Even Intel, already very late to the party, made the decision to invest and start Larrabee in 2005. And there was the PS3's Cell, which started development in 2001.
And yet, if you read 90% of HN comments from the past two years, all this work and success from Nvidia was because of one thing: luck.
They could have given up at any point in the past 20 years and simply stopped doing anything CUDA- or GPGPU-related. Who would blame them, when the vast majority of those investments weren't bringing in much revenue? Intel, for one, decided to cancel Larrabee. Nvidia persevered and hit the jackpot some 10-15 years later. But all of this, we are told, was because of luck.
Yes, luck plays a big part. They could have continued another 10 years and never found the killer app. But to ignore all that investment and work over such a long time and pin it on luck is about as rude and disrespectful as it gets, especially on a forum started by a VC in the spirit of entrepreneurship.
This is likely a testament to the need for large tech companies to choose core differentiated competencies and consistently invest in those competencies over long periods of time. It's so tempting for the finance-minded to take a large, high-margin business like Nvidia and cut R&D spending to boost profits, or alternately to let the R&D spend become lazy and ineffective. This happened to Intel, Boeing, and arguably Google.
That Nvidia has maintained this push for the last two decades makes one wonder what other tricks they'll have up their sleeve.
Sometimes boosting profits and dividends is the right play if the company isn’t set up to innovate. Innovation isn’t for every company and it’s good that some boards focus on redistributing profits.
It does help to have billions to invest. AMD in the last 20 years was basically broke, and they bet the farm on Zen. If they had split their limited resources between Zen and a CUDA competitor, they probably would have failed at both.
But their Zen bet has paid off and now they're playing catch up.
Intel, OTOH, I'm not sure what their excuse is.
>It does help to have billions to invest. AMD in the last 20 years was basically broke
You're making it sound like AMD being broke at that time was some unfortunate accident due to external events, and not the result of their own blunders.
Nvidia had billions due to great products and great business decisions on their part, and AMD was broke entirely due to its own actions: having average products on the CPU side and making bad business decisions at the time, like spending way too much money acquiring ATI and then selling off their golden goose, the Imageon mobile GPU division, to Qualcomm for pennies right at the beginning of the smartphone revolution.
It's a miracle they managed to turn things around and not end up like SGI and 3dfx, bankrupt and having their carcass devoured by Intel and Nvidia.
IMHO, Intel lost a couple of years because the CEO had a consensual affair with a subordinate, so they fired him and put the CFO in charge for a while. The company puttered around, unable to fix the serious process-manufacturing problems it encountered getting to 10nm, which caused it to fall behind TSMC in manufacturing. Finally, after a few years of little progress, Gelsinger was hired, and we'll see what he can do.
Maybe what happened to Intel is what happened to GE under Jack Welch.
Edit: just saw the CFO put in charge reference in other comment. There's your answer: financialization of the company.
I think it is obviously true that they have worked hard and built great devices. That was evident even when they only sold products for gaming. But this is what most companies do, or try to do: take their core products and invest in making them better. CUDA was impressive but not shocking.
When people say it's luck, I think they are reacting to the reality that Nvidia couldn't know, when they were doing this investing, that there was a big AI market waiting to take off. They were doing good work, but they were also very, very lucky that circumstances granted them this opportunity. There is no shame in that -- few companies achieve great success without some opportunity manifesting.
But it's a mistake to pat yourself on the back too hard, either. Without the opportunity, they'd still be making GPUs with some other applications.
As you note, the "luck" debate often bogs down into a false dichotomy of extremes when the reality is usually in-between and complicated. In my experience, most people (and companies) have the opportunity to encounter approximately similar amounts of "good" and "bad" luck, when averaged over the long run. However...
* This is gated by the ability to recognize those opportunities when they appear, willingness to act decisively to maximize the probability of positive outcomes and the preparedness to exploit such advantages. This tends to require mental preparedness, emotional maturity and a willingness to invest scarce resources and/or time - in advance - toward maintaining situational awareness and some excess reserve resources. Doing this is hard but these traits are learnable.
* Similarly, a portion of available conscious effort and scarce resources must be continuously expended toward being resilient to bad luck when it inevitably strikes. The net impact of misfortune can vary substantially depending on mitigation steps taken in advance. This requires accurate awareness of ambient risk factors and careful balancing of where you choose to place your limited 'air bags' and 'ounces of prevention.'
Most of these things are at least somewhat within your ability to influence, with the exception of initial conditions. At the "opening deal" of life some people are dealt better cards and some people are dealt worse cards. This is not fair, but it is what it is. The silver-lining is that, after the initial cards are dealt, it can still be a long game with many rounds. How you choose to play the cards you have in each of those rounds can lead to substantially different outcomes. Because it's a game like poker with randomness, hidden variables, subtle cues and second-order probabilities - it's easy to conclude it's almost all luck. This is unfortunate because not understanding the 'meta' of the game, or even knowing there is a meta, does make it mostly luck for some.
I think Nvidia's 'good fortune' is the cumulative result of playing the meta-game effectively for a long time, which gave them the capability to maximize their outcomes when they eventually found themselves in a high-opportunity environment (aka "lucky").
Re: Intel, it's worth mentioning their Phi coprocessor board. They zigged when they should have zagged. The Phi essentially packed dozens of simple Pentium-class x86 cores onto one chip, but apparently that isn't as useful for ML because (apparently) it's all about the FLOPs, whereas the less-mathematical, more general-purpose Phi cores could do (something?) better?
I just think it's an honorable mention because had things gone a little differently, perhaps Intel could have been king of the hill instead.
Intel didn't just miss by a little, they missed by a lot. Nvidia has more software engineers than hardware engineers. Building the software ecosystem is the thing that makes the whole thing work. Intel didn't just have the wrong product with Phi, they had a product with no ecosystem. It was impossible to develop for. None of Nvidia's competitors have a serious answer to the ecosystem problem. It's not clear that Intel has ever even understood this.
> if you read 90% of HN comments from the past two years, all this work and success from Nvidia was because of one thing: luck.
Never once read a comment attributing Nvidia success to luck.
The only luck Nvidia has is the luck that AMD fell asleep at the wheel and couldn't be bothered to put more than two engineers on a CUDA competitor, even when it was becoming apparent that AI was worth billions (i.e. ~4-5 years ago).
“No, Nvidia just found themselves in a lucky situation.”
The vast majority of the comments on this article prior to the one you were replying to were attributing it solely/largely to luck.
Yeah, I wouldn't call it just luck, but I refuse to accept that they saw AI coming and blowing up like that any more than anybody else did. It worked out for them that they created CUDA (btw, how much of that was related to acquiring the company that made PhysX and its accelerator card?) and decided to stick with it, improve on it, and supply developers with superb tools. But again, I don't think AI was ever specifically in that planning. It seemed like a step-by-step thing: provide additional value, e.g. to games with physics simulation, that would give gamers a reason to buy Nvidia over ATI (as if driver quality alone wasn't already a strong argument ;)).
The wise decision was to keep pushing CUDA rather than dropping it to cut costs in the short run, and to realize the potential of using it for scientific computation early on. Then one thing led to another. When AI came around, CUDA was the only mature and serious framework for the job.
But it has nothing to do with AI; it was an investment in hardware optimized for floating-point calculations, for applications completely unrelated to AI. Not that they don't deserve it. It's great hardware. But it would be a shame for them to tie their business to AI and then face a huge bust when creditors realize that LLMs don't really do anything.
The hand wringing about luck can be annoying. Bringing it up is mostly useful to empathize with other people and be humble. Hard work without luck rarely works out. But luck without hard work rarely works out either (unless you're insanely lucky). Never doing anything because "luck" will guarantee you're never lucky.
Use luck to be humble, use luck to empathize, use luck to build people up, but don't use luck to tear people down.
Yes, many are very eager to blame the winner, usually without seeing the work that took them there and, even more relevantly, without seeing that the competition keeps making a mess of things.
Precisely. And this isn't just about Nvidia either. It is every single topic on successful companies. This sort of discussion is common, or the norm, on 99.999999999% of the internet. But we are on HN; the bar needs to be higher, a lot higher. And I can't accept this without some rebuttal.
At least judging from the upvotes, it does seem I have the backing of the silent majority. All is not lost.
I remember talking to Jason Fried about this with 37signals. A lot of people didn't realize the grind that team had before things finally clicked.
It's a common pattern to see people gloss over the insane lengths and foresight required to achieve overnight success. :)
Not luck, but the idea that they predicted the ML/AI renaissance 10-20 years out is laughable, considering it was Google that kicked it off, because Google had big data and showed it could be done.
They had their reasons for doing what they did and I'm sure they eventually realized they were well positioned for ML/AI, but there's no way they planned that out before ML/AI was a viable thing.
Believe it or not, they did predict it 10 years out: NVIDIA bet the farm on AI/ML after seeing AlexNet. This has been covered in other articles too.
And sure, AlexNet was good, but remember those were pre-Maxwell days; tensor cores weren't even a thing yet. It was, at minimum, a very bold bet on the basis of "some image classifier model thing". Nobody else saw it as more than an academic toy (obviously, or they'd have jumped in too).
https://www.newyorker.com/magazine/2023/12/04/how-jensen-hua...
> Within a couple of years, every entrant in the ImageNet competition was using a neural network. By the mid-twenty-tens, neural networks trained on G.P.U.s were identifying images with ninety-six-per-cent accuracy, surpassing humans. Huang’s ten-year crusade to democratize supercomputing had succeeded. “The fact that they can solve computer vision, which is completely unstructured, leads to the question ‘What else can you teach it?’ ” Huang said to me.
> The answer seemed to be: everything. Huang concluded that neural networks would revolutionize society, and that he could use CUDA to corner the market on the necessary hardware. He announced that he was once again betting the company. “He sent out an e-mail on Friday evening saying everything is going to deep learning, and that we were no longer a graphics company,” Greg Estes, a vice-president at Nvidia, told me. “By Monday morning, we were an A.I. company. Literally, it was that fast.”
No, Nvidia just found themselves in a lucky situation.
They were already building GPUs, mainly for gaming. Then crypto came along and snapped up a bunch of GPUs. When the mining craze somewhat waned, the AI craze started, as the boundaries on its ethicality broke down in an uncertain economic environment where corporations began racing to see who would get to the top.
As Seneca is said to have quipped, "Fortuna est quae fit cum praeparatio in occasionem incidit," or "Luck is what happens when preparation meets opportunity."
Nvidia has been doing the hard work in preparing to succeed in this market. CUDA has been meticulously developed and maintained, creating an adhesion to their hardware that would not otherwise exist in the AI market.
It has also been willing and able to create lines of business hardware aimed at maximizing utility for its customers.
They also have hired and maintained a roster of the best engineers in their specialties, including the software part of the equation.
There is no part of their success that they weren't prepared to take advantage of when the opportunity presented itself. They didn't control the size of the opportunity itself, but no greatly successful company does.
This. Jensen (Nvidia CEO) is being a bit humble here given the massive investments the company has made into CUDA for the past decade. Now they're simply reaping the benefits. Sure, ChatGPT's arrival last year was the spark but arguably it was inevitable sooner or later.
Case in point: Nvidia has been hosting its GPU Technology Conference (GTC) annually to build awareness and drive adoption of CUDA. These events surely aren't free to host, but they are necessary to build momentum and give an edge to your custom stack.
Luck is more applicable to the crypto-mining boom and bust cycles that Nvidia also profited from. Their gaming GPUs (along with AMD's) just happened to be the best hardware available for proof-of-work computation.
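To make concrete why proof-of-work fit gaming GPUs so well: every candidate nonce can be hashed by an independent thread, with no communication between threads. Below is a minimal, illustrative CUDA sketch of that search loop; toy_hash is a made-up mixer standing in for a real mining hash like SHA-256, and all names and launch parameters here are invented for this example.

    #include <cstdio>
    #include <cstdint>
    #include <cuda_runtime.h>

    // Toy stand-in mixer, NOT a real cryptographic hash (real miners use
    // SHA-256, Ethash, etc.); it only gives the search loop something to do.
    __device__ uint64_t toy_hash(uint64_t nonce) {
        uint64_t h = nonce * 0x9E3779B97F4A7C15ULL;
        h ^= h >> 29; h *= 0xBF58476D1CE4E5B9ULL;
        h ^= h >> 32;
        return h;
    }

    // Every thread tries a different nonce independently; a hash below the
    // target "wins". No inter-thread communication is needed, which is
    // exactly the shape of work GPUs excel at.
    __global__ void search(uint64_t start, uint64_t target, unsigned long long* found) {
        uint64_t nonce = start + blockIdx.x * (uint64_t)blockDim.x + threadIdx.x;
        if (toy_hash(nonce) < target) {
            atomicExch(found, (unsigned long long)nonce);  // record any winner
        }
    }

    int main() {
        unsigned long long* found;
        cudaMallocManaged(&found, sizeof(*found));
        *found = 0;
        // ~1M nonces per launch; the target is tuned so a hit is likely.
        search<<<4096, 256>>>(1, UINT64_MAX >> 16, found);
        cudaDeviceSynchronize();
        printf("winning nonce: %llu\n", *found);
        cudaFree(found);
        return 0;
    }

Real mining differs in the hash function and the difficulty bookkeeping, but this embarrassingly parallel shape is why commodity gaming cards were competitive at proof-of-work until ASICs (and, for Ethereum, the switch to proof-of-stake) displaced them.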
Nvidia in the past made only GPUs; CUDA is used to program GPUs and get more use out of them. It doesn't take any magical foresight to want to invest in CUDA; it is the most obvious way to expand your TAM.
And yet, if they had chosen to go in a different direction, they wouldn't have been ready for ML/AI.
The point isn't that they didn't invest in the direction that ended up being right, it's that they didn't do it specifically with ML/AI in mind years before it was even a twinkle in google's eyes.
IIRC we were on CUDA 5 or something when ImageNet came out and changed the world.
They might not have imagined LLMs when they decided to invest in making their GPUs programmable, but I guarantee you they extrapolated the future compute potential of vector-programmable machines and decided enabling it was not a huge risk: it was simply a bet that some important application would come along to tap into it.
They didn't even need to extrapolate future potential. The world of supercomputing was massively sunsetting the old vector processors (Cray, SPARC, etc.) and switching to commodity x86 hardware in the timeframe when CUDA came out. It was a perfect opening for a new vector processor on steroids, for which the chip development had already been paid for by gamers...
I think that's oversimplifying things. The Acquired podcast has covered Nvidia's different growth periods in depth. I highly recommend that anyone interested give them a listen:
https://www.acquired.fm/episodes/nvidia-the-gpu-company-1993...
https://www.acquired.fm/episodes/nvidia-the-machine-learning...
https://www.acquired.fm/episodes/nvidia-the-dawn-of-the-ai-e...
>I'm a great believer in luck. The harder I work, the more of it I seem to have.
Nvidia has been working hard(er than their competition) on the software side for almost two decades to be in the position they find themselves in today. Sixteen years ago, they released CUDA for general-purpose computing on GPUs, and nine years ago they followed that up with cuDNN. They have a consistent pattern of making intentional, long-term bets to diversify their market exposure and unlock new product areas while building a software-ecosystem moat.
Yes, they obviously got super lucky with the cryptocurrency frenzy, but there's a reason all the miners were mostly buying Nvidia cards instead of AMD cards.
No, Nvidia decided to change their GPU architectures to be more suitable for neural networks, so they did have to bet that this shift would have to pay off. They spoke to many leading AI experts and came to this conclusion. They should be commended for the risk they took. If Nvidia solely ended up being "lucky", then how come AMD didn't take off?
Because they were broke and didn't have the resources to invest properly even if they wanted to.
That's not quite true -- they bet the farm on Zen, and that bet paid off. Which means that now they have the resources to also invest in AI. I'm fairly sure if they had bet the farm on AI instead of Zen or if they had tried splitting that bet they'd be bankrupt now.
> No, Nvidia just found themselves in a lucky situation.
Well, yes and no. They were certainly lucky to be in the right place at the right time. But they were also consistently investing in CUDA and the AI/ML ecosystem while their competitor(s) ignored it, to such an extent that Nvidia became the only real option (and deservedly so).
This is why they can effectively behave like a monopoly these days and charge almost inconceivably high margins.
Then this luck should have equally found AMD, who even today are struggling to pick up the ball they've been dropping for a decade now. My last PC had a Radeon, and I waited the lifetime of that PC assuming AMD support was just around the corner, all the while renting Nvidia cards in the cloud for any serious projects.
I've been in the ML space long enough to remember when people were just speculating about doing ML/computation on GPUs. Nvidia made that much easier and has continued to improve support and features for the past decade-plus. Their insane success is certainly part luck, but I wouldn't be so quick to dismiss all of it as mere happenstance.
https://www.youtube.com/watch?v=WLq9zv3k5n0
The race was on, but nobody else was running.
https://www.youtube.com/watch?v=Yhg3IEpl60M
This timeline seems completely wrong to me. Nvidia's cuDNN has been the only game in town for NN research since I have been in the field, and it predates Ethereum and the crypto bull run by a few years. If anything, they didn't waver and jump too hard on the crypto bandwagon when the craze was at its height.
It didn't wane, it was decimated when ETH switched to PoS and off of GPU mining entirely.
All of the other mined coins dropped in value as miners moved to them and dumped all their rewards, making mining those coins unprofitable as well.
It was ETH that was propping up the entire GPU mining ecosystem.
No, Nvidia was not lucky. They have a strong engineering culture and deep marketing expertise in 3D, in all its facets, built over decades.
Nvidia wisely recognized that only so much horsepower could be used by a conventional 3D graphics pipeline at a given screen resolution, and that they needed to invest in growing future compute-heavy adjacent markets.
They invested in generalizing their GPU into a more flexible vector coprocessor for HPC and then adjacent markets. They convinced fundamental engineers and researchers in this area to come work for them.
There was deep fundamental work done by Ian Buck in 2004 on leveraging GPUs as general vector processors ( https://graphics.stanford.edu/papers/brookgpu/ ) and that leadership and deep thinking went to Nvidia, not to Intel. Intel did not have the passion from the top to care about this. They couldn't even care enough to field competitive 3D chips (and associated software), much less extend their thinking to generalize beyond it skillfully. Nvidia did.
Anyone who spent every day thinking about how to grow the vector-coprocessor market would have pursued crypto and AI when they came along, but Nvidia's strong engineering and profits from a leading 3D position gave them competitive advantages which they are, for now, reaping.
Not really. They did push CUDA and GPGPU on their own hardware while AMD and Intel offered a barely functional OpenCL. Of course they had no idea about how big AI and crypto would become, but they were there offering their cards to whoever wanted to run calculations on them.
Don't forget the "Artificial Intelligence and Japan's Fifth Generation [Project]" [1] launched in 1982. Bad timing ;-). Wikipedia includes a specific page [2]. Finally, we cannot forget Transputers [3].
[1] https://www.jstor.org/stable/26861060
[2] https://en.wikipedia.org/wiki/Fifth_Generation_Computer_Syst...
[3] https://en.wikipedia.org/wiki/Transputer
No, Nvidia has spent considerable resources supporting ML for, I think, a decade now. They certainly got lucky with the crypto craze, and of course the timing of AI off the failings of crypto is extremely lucky, but you are wrong about the rest of the argument.
If it was pure luck why did they build CUDA over a decade ago?
Think you're just looking at this from the angle of a gamer and not someone who's been paying attention to GPGPU compute for longer than the past six months.
I'm watching Nvidia researchers doing a ton of AI work across a wide spectrum; calling it a 'lucky situation' badly understates what Nvidia is doing in research and development.
You are right. Even more interesting is the timing itself:
The crypto craze waned just as LLMs were picking up. Any more delay with LLMs, and perhaps Nvidia would have been overextended, with no demand to take their inventory. There may in fact be no Nvidia in that future, depending on the size of the bet.
Or take it one step further: if COVID hadn't happened, the crypto craze would never have occurred, and Nvidia would have been only a bit player in the LLM craze we are now in.
I still remember Nvidia ads in PC game magazines. That and 3dfx. Who knew?
How are you pinning the crypto craze to COVID? People were mining long before 2020, and demand was putting so much pressure on GPU inventory that the manufacturers were artificially crippling their cards' performance when used as miners.
Why? They were the only ones taking GPGPU seriously? AMD completely ignored it for years, and Intel only became serious about GPUs very recently.
Chalking it up to dumb luck is kind of ridiculous. They invested massively in developing an ecosystem for GPU-driven computing, which was a well-reasoned gamble, and it paid off.
Copy-pasting a comment from a discussion a little while[1] ago: CUDA was first released in 2007:
* https://en.wikipedia.org/wiki/CUDA
* https://developer.download.nvidia.com/compute/cuda/1.0/NVIDI...
Two years before the Bitcoin paper (2009):
* https://en.wikipedia.org/wiki/Bitcoin
They had a presentation called "The Era of the Personal Supercomputing" at SIGGRAPH 2007:
* https://dl.acm.org/doi/10.1145/1281500.1281647
* https://www.nvidia.com/content/events/siggraph_2007/supercom...
Ian Buck, (co-?)creator of CUDA, speaking in 2008:
> Ian Buck talks about his background developing Brook for GPUs at Stanford university and what paths were taken for developing a C platform for GPUs.
* https://www.youtube.com/watch?v=Cmh1EHXjJsk
> In 2003, a team of researchers led by Ian Buck unveiled Brook, the first widely adopted programming model to extend C with data-parallel constructs. Ian Buck later joined NVIDIA and led the launch of CUDA in 2006, the world's first solution for general-computing on GPUs.
* https://developer.nvidia.com/cuda-zone
* http://graphics.stanford.edu/~ianbuck/
Nvidia purposefully went after parallel computing. Specific applications (cryptocurrency, ML/AI) appeared later.
[1] https://news.ycombinator.com/item?id=38446957#unv_38447944
It is true that they got lucky several times in a big way. But CUDA was an expensive R&D effort for many years without clear payouts.
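To make "extend C with data-parallel constructs" concrete for anyone who hasn't written GPU code: below is a minimal, illustrative CUDA sketch of a SAXPY kernel, my own example rather than code from Brook or any Nvidia material. Each GPU thread handles one array element.

    #include <cstdio>
    #include <cuda_runtime.h>

    // One thread per element: the "data-parallel constructs" idea that Brook
    // pioneered and CUDA turned into a product.
    __global__ void saxpy(int n, float a, const float* x, float* y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        // Unified memory keeps the sketch short; production code often uses
        // explicit cudaMalloc/cudaMemcpy instead.
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // grid covers all n
        cudaDeviceSynchronize();

        printf("y[0] = %f\n", y[0]);  // expect 4.0
        cudaFree(x); cudaFree(y);
        return 0;
    }

No graphics API appears anywhere, which was the whole point: before this model, GPGPU meant contorting computations into graphics APIs like OpenGL, as other comments here note.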
You're missing the software part, which is essential to this story. Nvidia was the only GPU maker betting the farm on positioning for GPGPU/HPC all the way back in ~2007. What has happened over the last couple of years is just the payoff for the massive R&D and software-maintenance costs they've been fronting. They bet big and won big.
I clearly remember AI becoming a big tech subject in the late 2000s/early 2010s following the release of CUDA on consumer hardware, around the same time as Bitcoin and much before GPU-mining was a thing.
It wasn't luck. For example, the book "Good Strategy/Bad Strategy" has a chapter about the NVidia strategy in its early days. You'll find a lot of similarities to what happened with CUDA.
In its early days, Nvidia focused on delivery speed through a unified software driver/integration strategy. The TNT video card was meh, and the TNT2 a little better but still inferior to 3dfx's offerings... yet by the time Nvidia launched the GeForce, 3dfx didn't have a product ready to compete. Their driver/integration/product-test strategy made that speed possible.
With that background you can understand why CUDA wasn't lucky, they repeated the same approach: combine their hardware with the software.
The article mentions this choice being made in 2018. I don't know the ML industry, but as a gamedev I had been mystified about Nvidia's strategy since about 2018.
It felt like they weren't leaning into crypto, which surprised me. Instead, it looked like they were trying to maintain gamer goodwill by not increasing consumer card costs during the boom. Of course scarcity raised secondary-market prices, but Nvidia kept MSRP lower than the boom dictated.
It seemed like they were betting against crypto during the craze. And sticking to their strategy on the consumer side. So maybe that's how they stuck to a ML strategy too.
So they had all this CUDA stuff, which they must have invested in heavily because AMD showed what happens when you don't. That led to a software ecosystem for ML.
Maybe it was all luck, but a strategic choice explains some of this in hindsight.
> So they had all this CUDA stuff, which they must have invested in heavily because AMD showed what happens when you don't. That led to a software ecosystem for ML.
CUDA was already digital gold in 2018. ML had moved to GPUs several years prior, and CUDA was a primary enabler of that transition.
Nvidia had already been through one crypto bust at that point. The long view was that there will be another one and they wanted to keep their more stable markets viable for the future and not get demand slaughtered by mass dumping of used cards on the market when crypto dipped again.
Of course, the AI thing exploded afterwards this time so the demand dip didn't happen.
> Instead, it looked like they were trying to maintain gamer goodwill by not increasing consumer card costs during the boom.
They were selling mid-sized GPUs like the GTX 1080 at $800 eight years ago. A 300 mm² GPU.
They have been raising prices for a long time to milk as many dollars as they could.
People seem to forget that even before the crypto craze the company had insane margins. They weren't selling $800 cards because those cost $600 or $500 to produce...
Even the most expensive 4090 is hardly more than a $400 chip to build, memory included.
I think kids these days will definitely understand this ancient and cliche joke. It's not like it's some unknown twentieth-century joke; it gets said every time Nvidia comes out with the newest gaming behemoth card.
Woah, only one more year and this reference will be old enough to vote.
There's always a divide in these threads between people who assign pure genius as the sole reason for success and those who live in reality and accept that you also need a lot of luck and cash along the way.
Yeah. I can absolutely believe that NVIDIA saw the potential for GPUs (and associated software) that went beyond hardcore gaming nerds building their own PCs. But HPC generally was not a very profitable market for most vendors and I don't really believe that crypto and LLMs as they have played out to date was especially foreseeable at a detailed level.
I think forum post quality probably is well in decline when posters start making popular meta posts about how people with their view are in reality and people without are delusional.
It was an observation of how every thread like this plays out, not a "popular meta post". I wouldn't go as far as to say delusional; I was more going for: it's naive to assign success purely to genius, in the same way it's naive to attribute climate change to a single cause or solution. In fact I think even that comparison is generous, given that it involves a future prediction rather than a guess or a gamble. Perhaps a better example would be for me to write a post about a massively successful stock or crypto trade where (a) I had the cash to do it and (b) things I gambled on went even better than I thought. There would be people assigning all the success to me, when likely there were other similar plays where I lost big, as well as plenty of other people who saw the same opportunity but simply didn't have the means or timing to make it happen. So yes, I think it's naive to assign this, and most success, purely to genius. In reality, you need a little luck and cash along the way.
Still seems like they were in the right place at the right time. When they first developed CUDA, it was a hammer in search of a nail. Then they had the huge windfall from crypto-mining. That probably led to a lot of discussions about looking for GPGPU opportunities. Then AI came along and CUDA was just sitting there.
Then they put two and two together and started investing heavily as they saw momentum build.
>Then they put two and two together and started investing heavily as they saw momentum build.
There's been a decade or more of deep learning models breaking records in almost every single research field, powered (indirectly) through CUDA, cuDNN and other NVIDIA software.
AI didn't "come along" when OpenAI released ChatGPT. DNNs that have been 99% NVIDIA-focused have been beating the state-of-the-art for years and years.
Also, for the record, the Ada architecture (a very dominant AI accelerator) was released when the stock price was around $100 (compared to the $500 now).
I don’t think what you’re saying disagrees with my point. NVidia saw the momentum in AI research building long before ChatGPT arrived. AlexNet came out in 2012 and drew a ton of attention back then.
You could say that, for commercial purposes, AI actually did land with ChatGPT. It existed long before that, but that was unquestionably the event that started diverting billions of investment dollars into AI applications.
Lots of things were happening before you started paying attention. GPGPU definitely wasn't just sitting there being useless before crypto, and AI didn't just "come along" after crypto.
CUDA has been heavily utilized for AI for many, many years now. The whole reason Nvidia is so entrenched is that they were the only ones taking GPGPU seriously; OpenCL (1) rose and was abandoned before we even get to your interpretation of the timeline.
(1): Easy to forget now that AMD and Apple had a common-standard competitor to CUDA and completely fumbled it.
As the article implies, they were lucky and good. I was surprised how quickly they were able to implement Tensor cores and head off alternate architectures.
When they first developed CUDA, they did not yet have that huge windfall from crypto-mining. It was first released in June 2007. I would say it was more the result of seeing initiatives that ham-fisted general-purpose computation onto graphics-specific APIs like OpenGL.
Yes, and it takes a lot of effort to be in the right place.
They'll also benefit from the boom on the graphics side as well.
Once gen AI gets good enough to generate high quality video games, virtual worlds, etc. in real time, that will redefine gaming and entertainment.
Why wait for the next Mission Impossible movie to come out when you can experience it...as Tom Cruise...with your own storyline with all of your friends. And get a new one every day.
I agree, but the art (e.g. what a good author does) comes from the structure, of which thousands of combinations could be created. Gen AI would then be able to endlessly fill in details for unique variations (different characters, locations, allegiances, time periods, decision trees, etc.).
Think about any good story. A million details could be different, and it would still work well.
The curation of an experience is undervalued by many.
Just like coding isn't about mechanics; it is about understanding requirements and coming up with solutions that meet needs.
You know what might be an interesting way to get full immersion in 3D/AR: a Subnautica-style (SCUBA) AR game. You would have 360° of movement, and critters could be projected through the water for you to fight, while you're actually wearing scuba gear... and being pulled around by an underwater "speederbike" with a laser.
I have to admit I regret not buying their stock. It should've been obvious this would be a boon for them, but I got distracted by the crypto hype and didn't even think about the impact AI would have on them.
True story, decades ago in HS we did the whole stock market competition. As a big gamer, I somehow convinced my team to dump everything into Nvidia.
We were second in the state in Illinois, and I wanted to cement our place and possibly win by selling everything to lock in gains in the final week. The person executing the trade on our team accidentally shorted it instead; it went up a considerable amount in the last week, knocking us down quite a bit.
At that point I seriously considered dumping my savings into Nvidia. I'd be retired right now if I had done so.