TheAlchemist · 2 years ago
What's also pretty interesting is that they actually didn't sell more chips this quarter - they... just pretty much doubled the prices (hence the huge margin).

This is what having a monopoly looks like !

This is also why companies that manufacture their cards didn't report any uptick in profits. I'm wondering how this will play out in a few months? Do they have any pricing power with respect to NVidia? Or could NVidia just switch to another manufacturer?

PheonixPharts · 2 years ago
> This is what having a monopoly looks like !

As someone who has been in the AI/ML space for over a decade, and even had an AMD/Radeon card for more than half of that, I can't help but feel that this is partially AMD's own fault.

For many, many years it seemed to me that AMD just didn't take AI/ML seriously whereas, for all its faults, NVIDIA seemed to catch on very early that ML presented a tremendous potential market.

To this day getting things like Stable Diffusion to run on an AMD card requires extra work. At least from my perspective it seems like dedicating a few engineers to getting ROCm working on all major OSes with all major scientific computing/deep learning libraries would have been a pretty good investment.

Is there some context I'm missing for why AMD never caught up in this space?

ken47 · 2 years ago
Until very recently, AMD was struggling for survival. Rather than making the big bet on AI, they went for the sure thing by banking on revolutionary CPU tech. I'm sure if they were in a better financial position 5 years ago, they would have gone bigger on AI.
tho23iu423o4324 · 2 years ago
Yes, this is true.

However, a lot of this has to do with the fact that AMD was on the brink of bankruptcy before the launch of Zen in 2016 (when their share price was ~$10). They simply did not have the capital to do the kind of things Nvidia was doing (since '08?).

The bet on OpenCL and the 'open-source' community failed. However, ROCm/HIP etc. really seem to be catching up (I even see them packaged on Arch Linux).

Dalewyn · 2 years ago
What really strikes me is that Nvidia was already working hard on practical GPU compute 10~15 years ago with PhysX, while both Intel and AMD just existed.

Nvidia's dominance today is the product of over a decade of work and investment in making better products. Today they are finally reaping the rewards.

edgyquant · 2 years ago
100% this. I, and many others, bought multiple AMD cards due to disliking NVidia and tried to get ROCm set up to no avail. It just never worked except under hard to maintain configurations. I switched to an nvidia card and within the hour import tensorflow just worked
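For what it's worth, the whole smoke test is about this short (assuming a recent tensorflow wheel with GPU support and a working CUDA install):

    import tensorflow as tf

    # Prints a non-empty list of PhysicalDevice entries if the NVIDIA driver
    # and CUDA libraries were picked up, and [] otherwise.
    print(tf.config.list_physical_devices("GPU"))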
uluyol · 2 years ago
Catching up in this space requires a significant, sustained investment over multiple years and competent software engineers. It's not a simple thing for a hardware company to suddenly become competitive with Nvidia in AI/ML.

Instead, they've been going after the CPU market (and winning), HPC/scientific computing (high FP64 performance, in contrast to Nvidia's focus on low-precision ML compute), and integrating Xilinx.

However, I agree that it's an unfortunate situation, and I hope AMD becomes competitive in this space soon.

FuriouslyAdrift · 2 years ago
AMD has an entire line specifically for AI/ML... https://www.amd.com/en/graphics/instinct-server-accelerators

They just don't have those capabilities in their consumer GPUs.

AMD is also nearly 50/50 with nVidia for supercomputers in the Top500 (and dominates at the top)

It took a few years after completing the massive purchase of Xilinx to get going, but they are picking up speed rapidly.

xiphias2 · 2 years ago
Look at the good thing instead: they are catching up, and open source devs are starting to be serious about AMD because of its price/performance.

I believe it's a highly undervalued stock right now.

htrp · 2 years ago
AMD gave up on the market for parallel compute entirely
JB_Dev · 2 years ago
Nvidia, and really all chip designers, are limited by the fab companies, who are trying to scale as fast as they can. But all the cutting-edge fabs are limited by one single supplier: ASML. ASML makes the lithography machines and has a total monopoly. Even they cannot make lithography machines fast enough to satisfy demand - their machines are sold out 2 years in advance.
paulmd · 2 years ago
The current limitations are not about litho at all but actually about CoWoS stacking capacity.
thfuran · 2 years ago
There probably isn't another manufacturer they can switch high end stuff to. They recently tried moving at least some of their cards to Samsung but switched back last generation due to yield issues.
wmf · 2 years ago
You have to distinguish between fabs and AIBs.
kelvie · 2 years ago
For my sake, what number did you look at to come to this conclusion? I'm not used to reading these quarterly reports.
TheAlchemist · 2 years ago
You can look at cost of revenue to get an idea.
tomnipotent · 2 years ago
> just pretty much doubled the prices

The prices were already double; it was just scummy resellers capturing that value rather than Nvidia.

dekhn · 2 years ago
nvidia deserves their monopoly and it's in the US's best interest to let it continue.

the other companies have to up their game to compete with a company that has been executing well for 20+ years.

BbzzbB · 2 years ago
Isn't this more a case of supply and demand? Huge ramp on chip demand by every FAAMG, every dev and their grandmothers for AI with a mostly inelastic supply (foundry constrained and very specialized atoms tech involved).

It's not like Intel and AMD don't exist, but if everyone is pushing each other at the door for Nvidia chips..

jfoutz · 2 years ago
Amazon, Alphabet, Meta. Who are the F and G, and where is apple?
ttunguz · 2 years ago
I'm curious where did you find the data point that they sold an equal number of units quarter over quarter?
hospitalJail · 2 years ago
>This is what having a monopoly looks like !

Yep and we are suffering as a result. Want the best in computing and CUDA? Give M$ and Nvidia money.

Linux and Nvidia don't play well. Apple doesn't even attempt to try.

The absolute state of computing right here.

Mistletoe · 2 years ago
How does the Nvidia stranglehold on AI compare to US Steel, Standard Oil and Bell Telephone etc., other monopolies that were broken up?
zirgs · 2 years ago
There isn't a monopoly. Their competitors are just really bad at it.

issafram · 2 years ago
Not that it's much better, but wouldn't it be a duopoly considering that AMD is also a big player?

Hopefully Intel continues to improve its GPU offerings

capableweb · 2 years ago
> Not that it's much better, but wouldn't it be a duopoly considering that AMD is also a big player?

Not sure AMD would be considered a big player, what would be the percentage threshold for that?

According to the Steam Hardware (& Software) Survey (https://store.steampowered.com/hwsurvey/Steam-Hardware-Softw...), ~75% of computers with Steam running have an NVIDIA GPU, while ~15% have an AMD GPU.

AMD is the closest competitor NVIDIA has, but they are still nowhere near NVIDIA's market share.

I'm sure in AI/ML spaces NVIDIA holds an even higher market share due to CUDA and the rest of the ecosystem. At least in gaming things are pretty much "plug and play" when it comes to switching between AMD/NVIDIA hardware, but no such luck in most cases with AI/ML.

tric · 2 years ago
> wouldn't it be a duopoly considering that AMD is also a big player?

I don't think GPUs are commoditized. You can't swap an Nvidia GPU with an AMD GPU and get the same performance/results.

srj · 2 years ago
I interned at NVIDIA in 2009 on the kernel mode driver team. Was super fun there in terms of the project work and the people. If the code still exists, I created the main class that schedules work out to the GPU on Windows.

That level of programming gave such rewarding moments in between difficult debugging sessions. When I wanted to test a new kernel driver build I needed to walk into some massive room with all of these interconnected machines that emulated the not-yet-fabricated GPU hardware. One of the full-time people on my team was going insane trying to track down a memory corruption issue between GPU memory and main memory when things paged out, the entire time I was there.

Back then the stock was around $7/share and the CEO announced a 10% paycut across the board (even including my intern salary) and had an all hands with everyone in the cafeteria. It's pretty cool they went from that vulnerable state, with Intel threatening to build in GPU capabilities, to the powerhouse they are today.

alanfranz · 2 years ago
> the CEO announced a 10% paycut across the board

Which is still better than a 10% layoff, anyway!

M3L0NM4N · 2 years ago
Only for 10% of the workers.
granshaw · 2 years ago
Interned there Summer of 08. Remember mentions of “this CUDA thing” then, that was during its infancy.

Midway through, our intern friend group found out one of the smaller buildings had a buffet lunch and started taking the shuttle there often.

Saw this tweet just now and that lanyard holder really brought back memories, hasn’t changed at all: https://x.com/jimcramer/status/1694465908234699243?s=46&t=NA...

1-6 · 2 years ago
Do you still own stock?
the_svd_doctor · 2 years ago
He probably didn’t get any as an intern.
radium3d · 2 years ago
I do wonder though, why has Moore's law stopped in its tracks? Using the CS:GO benchmark, a 1070 got 218 FPS, while a 4090 is at 477 FPS. Only a ~2.2x increase in FPS in 6 years? :(
dahart · 2 years ago
Between those 2 GPUs, the fp32 perf went up 12.7x according to TechPowerUp's specs. The SM count went up 8.5x (which represents Moore's law, and note it is almost exactly in line with Moore's prediction), and the clock rate went up 1.5x. The FPS of CSGO (or any game) is not a good measure of Moore's law. Games have all kinds of complexities and caveats that will prevent them from scaling linearly. I used to write some of those bottlenecks :P What are the 2080 and 3080 data points for CSGO? Did it approach 400-500 fps on the 2080 and never get any faster after that?
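For the curious, roughly the arithmetic behind those ratios, using TechPowerUp's listed specs (treat the figures as ballpark):

    # Ballpark TechPowerUp specs: FP32 TFLOPS, SM count, boost clock in GHz
    gtx_1070 = {"tflops": 6.5, "sms": 15, "ghz": 1.68}
    rtx_4090 = {"tflops": 82.6, "sms": 128, "ghz": 2.52}

    for key in ("tflops", "sms", "ghz"):
        print(key, round(rtx_4090[key] / gtx_1070[key], 1))
    # tflops 12.7, sms 8.5, ghz 1.5 -- none of which implies a 12.7x game framerate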
HWR_14 · 2 years ago
At low frame rates, the GPU is (usually) the bottleneck most of the time. At very high frame rates, there are other, more significant bottlenecks.

A much better benchmark would be to take a game designed for 120 or fewer FPS on a 4090 and try it on a 1070.

AnotherGoodName · 2 years ago
That's because single thread perf of CPUs hasn't progressed and ~500fps is where CPUs cap out on that game still to this day. GPUs are doing fine.
ATMLOTTOBEER · 2 years ago
Fps in CSGO isn’t really so dependent on the GPU as more modern games, so comparing with a different game might be more accurate.
fragmede · 2 years ago
CS:GO aside, the 1070 is said to do 6.5 TFlops to the 4090's 1,321 TFlops (that headline 4090 figure is sparse FP8 tensor throughput, so not quite apples-to-apples with the 1070's FP32 number), for a 203x improvement in 6 years. Not bad!
bexsella · 2 years ago
FPS of one specific game isn't a great indicator of GPU grunt.
devnullbrain · 2 years ago
Moore's law is about transistors, not CS:GO. CS:GO benchmarks stopped advancing as quickly because Moore's law is - objectively, despite the protestations of semiconductor companies - dead.
Mawr · 2 years ago
You can't just use an arbitrary game to test your hypothesis like that. You need to test with something that caps out the 4090 at 100% usage and then see how a 1070 performs in comparison. It's not quite that simple of course, so just look up some benchmarks ;)
transcriptase · 2 years ago
Admittedly knowing nothing, I’m going to assume that a great deal of the advantages of the latest generation aren’t going to improve performance in a decade old game since the engine won’t touch them.

How does a 1070 fare versus a 4090 in Hogwarts Legacy or another modern game at 1440 or 4K?

newZWhoDis · 2 years ago
I’m more worried about how human development has seemingly stopped in its tracks, backtracking more like it
spoonjim · 2 years ago
Physics stopped Moore’s law
Mechanical9 · 2 years ago
My guess is that we've reached a peak in the amount of new investment that can be made annually, based on tech nearing total proliferation throughout society. The tech can sort of only advance as fast as tech companies can grow, and they can't grow exponentially after they account for some percentage of the global workforce.

My guess is that we'll see improvements at closer to the current rate rather than at an increasing rate.

xnx · 2 years ago
The good news is that Nvidia's high GPU prices motivate everyone (Intel, AMD, ARM, Google, etc.) to try and tackle the problem by making new chips, making more efficient use of current chips, etc. For all the distributed computing efforts that have existed (prime factorization, SETI@Home, Bitcoin, etc.), I'm surprised there isn't some way for gamers to rent out use of their GPUs when idle. It wouldn't be efficient, but at these prices it could still make sense.
Uehreka · 2 years ago
They’re all pretty motivated, they’ve been motivated for years, and almost nothing is happening. This situation isn’t exactly a poster child for the Efficient Markets Hypothesis.

Every year just sounds like “Nvidia’s new consumer GPUs are adding new features, breaking previous performance ceilings, running games at huge resolutions and framerates. Their datacenter cards are completely sold out because they can spin straw into gold, and Nvidia continues to develop new AI and graphics techniques built on their proprietary CUDA framework (that no one else can implement). Meanwhile AMD has finally sorted out raytracing, and their consumer GPUs are… well not as good as Nvidia’s but they’re a better value if you’re looking for a competitor to one of Nvidia’s 60 or 70 line GPUs!”

lotsofpulp · 2 years ago
Efficient market hypothesis is unrelated to Nvidia’s competitors being unable to offer a competing product so far.

https://www.investopedia.com/terms/e/efficientmarkethypothes...

> The efficient market hypothesis (EMH), alternatively known as the efficient market theory, is a hypothesis that states that share prices reflect all information and consistent alpha generation is impossible.

ericmay · 2 years ago
> This situation isn’t exactly a poster child for the Efficient Markets Hypothesis.

I'm unsure why you're criticizing the Efficient Markets Hypothesis or even using it here, but you need to also analyze this with some time horizon because the market and marketplaces are not static.

kimixa · 2 years ago
Tech design and development seems, to me at least, pretty much naturally opposed to the "being kept in check with competition" state - as design isn't really a cost that scales per-unit, the company that sells slightly more can afford to put more into development at the same per-unit margin, which snowballs. At some point, they own the entire market - or enough that they functionally control it, and start leveraging this position. I'd argue we're seeing this from Nvidia right now.

People talk about AMD being competition - but from most stats I've seen, they're ~10% of dGPU sales, with Nvidia being the other 90% (with new Intel offerings being pretty much noise now). That means that if they invest the same proportion into development, NVidia has nearly 10x the resources.

It may be that tech companies like this would "naturally" form a monopoly without outside (i.e. government) interference, as the only reason that multiplier of development resources doesn't completely crush new entrants is rather extreme mismanagement, or a new segment being created where the design resources don't really cross over that much.

I don't see anything like that happening in the short term. If anything there seems to be more opportunity for cross-pollination of development within these corporations, as there's a fair bit of design similarity between various silicon (GPUs, CPUs, accelerators for the current ML techniques, etc.) that may encourage more consolidation in the whole semi market to take advantage of it, not less. But again, the only thing stepping in the way of that seems to be governments protecting national interests, like the blocking of NVidia buying ARM to pull in one of the big CPU players. Plus all the other IP they might benefit from, like the low-power GPU designs or other accelerators ARM has designed.

creer · 2 years ago
It's not like designing this kind of product is easy; or that Nvidia's designers are sitting idle; or that everybody else's design team is not busy building something else. There are in fact many competent design teams, chipping at their own business.

There are in fact startups, also doing what they can (and probably not trying to go head on against the most productive competitor they can find.) And it has been reported countless times that some of the biggest customers of Nvidia are actually trying to design their own.

If you want to point out a market with broken competition, this isn't it.

fulafel · 2 years ago
To play the free market advocate:

The situation is created by artificial restrictions on free market (namely state enforced monopolies on "IPR", or as some call it, imaginary property).

mort96 · 2 years ago
Aren't AMD's GPUs competing against nvidia's 80 series these days?
sph · 2 years ago
Are they motivated? It seems like a massive coincidence that the CEOs of the big two of the GPU world are cousins, that one has had massive success on the CPU side and the other on GPU/AI, and that every attempt from either side to enter the other's niche has been pretty weak.

AMD compute is nowhere compared to NVIDIA. NVIDIA wanted to buy ARM and has got its finger in RISC-V, but apart from that they don't really care. To be fair AMD has done decently with GPUs, but never enough to dethrone NVIDIA, whose playbook for the past few gens has been "just make everything bigger than last gen and increase the frequency." Surely AMD could have taken the same lazy approach to barely surpass the 4090, but they didn't, so NV remains undefeated in its space because AMD forgot to squeeze the last 1% out of their card.

The market is powerless if the competitors aren't really competing. Intel is the only chance, unless they manage to get their own Taiwanese CEO somehow related to Huang and Su.

tzhenghao · 2 years ago
> motivate everyone (Intel, AMD, ARM, Google, etc.) to try and tackle the problem by making new chips

Yes, there have been repeated efforts to chip away at Nvidia's market share, but there's also a graveyard full of AI accelerator companies that failed to find product-market fit due to lack of software toolchain support - and that applies even to older Nvidia GPUs and their compatible toolchains, let alone other players like AMD. This isn't a hit on Nvidia; I'm just saying things move so quickly in the space that even the only game in town is trying to catch up.

Nvidia is also leading by being one or two hardware cycles ahead of their competition. I'm pretty confident AI workloads in the enterprise are their next major focus [1]. I think this more than anything else will accelerate AI adoption in the enterprise if well executed.

To your point, I think the industry needs to focus more on the toolchains that sit right between the deep learning frameworks (PyTorch, Tensorflow etc.) and hardware vendors (Nvidia, AMD, Intel, ARM, Google TPU etc.) Deep learning compilers will dictate if we allow all AI workloads run on just Nvidia or several other chips.

[1] - https://www.nvidia.com/en-us/data-center/solutions/confident...

tric · 2 years ago
> I'm surprised there isn't some way for gamers to rent out use of their GPU's when idle.

https://rendernetwork.com/

"The Render Network® Provides Near Unlimited Decentralized GPU Computing Power For Next Generation 3D Content Creation."

"Render Network's system can be broken down into 2 main roles: Creators and Node Operators. Here's a handy guide to figure out where you might fit in on the Render Network:

Maybe you're a hardware enthusiast with GPUs to spare, or maybe you're a cryptocurrency guru with a passing interest in VFX. If you've got GPUs that are sitting idle at any time, you're a potential Node Operator who can use that GPU downtime to earn RNDR."

euazOn · 2 years ago
Also the Horde for Stable Diffusion, pretty good concept: https://github.com/Haidra-Org/AI-Horde/blob/main/FAQ.md
Conscat · 2 years ago
I am certain that several years ago, I was given an ad for exactly such a service and even tried it out, but I cannot for the life of me remember its name. It had some cute salad motif, and its users are named "chefs".

EDIT: It was just named Salad: https://salad.com/ (https://salad.com/download)

NavinF · 2 years ago
You can do that for inference, but most gamers have a single GPU with <24GB VRAM which kinda sucks for training. 3090 or 4090 is the minimum to use reasonable batch sizes
Goronmon · 2 years ago
> The good news is that Nvidia's high GPU prices motivate everyone (Intel, AMD, ARM, Google, etc.) to try and tackle the problem by making new chips...

Or their dominance leads to competition throwing in the towel and investing resources in a market with less stiff competition.

I wouldn't be surprised to see AMD start to pare back investment in high-end GPUs if things continue down this path. I would say Intel likely keeps pushing, but I'm less convinced they can actually make much headway in the near future.

xnx · 2 years ago
As was mentioned in another thread on a slightly different topic, it wouldn't be surprising to see all non-Nvidia parties unite around some non-CUDA open standard.
haldujai · 2 years ago
> I would say Intel likely keeps pushing, but I'm less convinced they can actually make much headway in the near future.

It seems that Intel is making great headway on their fabs and may somehow pull off 5 nodes in 4 years. Intel 3 is entering high volume production soon and according to Gelsinger 20A is 6 months ahead of schedule and planned for H2 2024.

If they do pull this off and regain leadership that would change outlook.

WanderPanda · 2 years ago
With interconnect being the biggest limitation these days I don’t think this would work.
xnx · 2 years ago
I'm not familiar with all the varied uses of GPUs but it seems like image generation could feasibly be distributed: large upfront download of models, then small inputs of text and settings, and small output of resulting images.
ElectricalUnion · 2 years ago
> I'm surprised there isn't some way for gamers to rent out use of their GPU's when idle.

The main reason you need massive amounts of fast VRAM in the first place is that the main limitation of AI is memory bandwidth. You can't take an algorithm that is already limited by memory bandwidth, distribute it over links with awful latency and bandwidth, and hope for any improvement.
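A rough sense of scale (assuming an H100-class card and a gigabit home link):

    # Order-of-magnitude comparison of on-card HBM bandwidth vs. a home connection
    hbm_gb_per_s = 3_000        # ~3 TB/s of HBM bandwidth on an H100-class card
    gigabit_gb_per_s = 0.125    # 1 Gbit/s internet link expressed in GB/s

    print(f"~{hbm_gb_per_s / gigabit_gb_per_s:,.0f}x")   # ~24,000x
    # Anything shuttled between volunteer GPUs over the internet pays that penalty,
    # which dwarfs whatever the extra idle GPUs contribute.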

myth_drannon · 2 years ago
vast.ai allows you to rent out your GPU
EVa5I7bHFq9mnYK · 2 years ago
In bitcoin mining, the GPU phase lasted only two years before being outcompeted, first by specialized FPGAs and then by ASICs. Nobody has used GPUs for bitcoin mining since 2013. Maybe ML will follow a similar path. But that computation is very different from ML; it doesn't need memory at all.
PeterStuer · 2 years ago
Aren't tensor cores basically ASICs for ML?
magic_hamster · 2 years ago
The only way to compete with Nvidia is to supply a drop in replacement for CUDA with the same (or better) performance for price.

Good luck with that.

In the current state of things, Nvidia is like a car manufacturer that exclusively owns the concept of tires.

peter303 · 2 years ago
The larger language models now employ a trillion parameters. This is faster when memory and compute are tightly coupled, not distributed. Cerebras's million-core super-wafer addresses this.
IshKebab · 2 years ago
There have been various attempts but you need a workload that's basically public and also runs on a single GPU (because you don't have NVLink or similar).
TechnicolorByte · 2 years ago
Incredible company. It’s absolutely insane how far ahead they are with the investments they made over a decade ago.

So nice to see a “hard” engineering (from silicon to software) SV-founded company getting all this recognition. Especially after what has felt like a decade of SV hype software companies dominating the mainstream financial markets pre-pandemic with a spate of overpriced IPOs or large ad-revenue generating mega corporations.

kccqzy · 2 years ago
The moniker of "hard" engineering is neither precise nor useful. What makes engineering hard? Is solving problems with distributed systems, even if these systems are for ads, hard? Or do you mean hardware? In that case even Nvidia is not hard enough since they don't fabricate their own chips. Or do you mean designing hardware? Then what makes writing system verilog at a desk hard but writing Python not hard?
TechnicolorByte · 2 years ago
I admit that was a glib comment and unnecessary.

I’m really speaking about Nvidia’s ability to perform well in both hardware and software, at chip-scale and datacenter-scale. Also speaking of their product/business direction that revolutionizes multiple industries (leaders in graphics with ray tracing and AI frame/resolution sacking; leaders in AI infra and datacenter systems, etc.) all resulting in big impacts to their respective industries.

You’re right that many of those software-only companies do very real engineering with distributed systems and such. I should’ve been more precise; I was really complaining about the SV hype of the 2010s focusing on regulation-breaking companies like Airbnb, Uber, WeWork, etc. and on companies like Meta and Google who focus on pushing ads for their revenue.

omniglottal · 2 years ago
I suppose the difference is engineering something deterministic (i.e., physics, electronics, logic) versus something soft and indistinct (SEO, ad impressions, customer conversion rate).
nsteel · 2 years ago
It's hard to get complex systems correct. There's far less margin for error when you get a hardware design wrong. Correcting a Python software mistake is orders of magnitude easier and cheaper to resolve, it doesn't cost multiple billions and take 6 months to iterate. You might consider the hardware design harder in that respect.
systemvoltage · 2 years ago
Yeah. NVidia looked like a docile company; in 2012 they were merely a gaming-oriented hardware shop.

These companies exist today. Which small or ignored companies do you think have a bright future?

epolanski · 2 years ago
Are they so far ahead?

AMD GPUs get comparable results as of late on Stable Diffusion.

Software and hardware from competitors will catch up, crunching 4/8/16 bit width numbers is no rocket science.

david-gpu · 2 years ago
> Software and hardware from competitors will catch up, crunching 4/8/16 bit width numbers is no rocket science.

I used to think like that, until I got a job there and... Oh, boy! I left five years later still amazed at all the ever more mind bending ways you can multiply two damn matrices. It was the most tedious yet also most intellectually challenging work I've ever done. My coworkers there were also the brightest group of engineers I've ever met.

johnvanommen · 2 years ago
> Software and hardware from competitors will catch up, crunching 4/8/16 bit width numbers is no rocket science.

I made the mistake of buying an A770 from Intel, based on the spec sheet. Hardware is comparable to what Nvidia is selling, for 70% of the price.

It's basically a useless paperweight. The AI software crashes constantly, and when it's not crashing, it performs at half the level of Nvidia's cards.

Turns out that drivers and software compatibility are a big deal, and Intel is way way behind in that arena.

smoldesu · 2 years ago
Nvidia has a small lead on the industry in a few places, adding up to super attractive backend hardware options. They aren't invincible, but they profit off the hostility between their competitors. Until those companies gang up to fund an open alternative, it's open season for Nvidia and HPC customers.

The recent Stable Diffusion results are great news, but also don't include comparisons to an Nvidia card using the same optimizations. Nvidia claims that Microsoft Olive doubles performance on their cards too, so it might be a bit of a wash: https://blogs.nvidia.com/blog/2023/05/23/microsoft-build-nvi...

Plus, none of those optimizations were any more open than CUDA (since it used DirectML).

> crunching 4/8/16 bit width numbers is no rocket science.

Of course not. That's why everyone did it: https://onnxruntime.ai/docs/execution-providers

The problem with that "15 competing standards" XKCD is that normally one big proprietary standard wins. Nvidia has the history, the stability, the multi-OS and multi-arch support. The industry can definitely overturn it, but they have to work together to obsolete it.

skocznymroczny · 2 years ago
Perhaps RDNA3 GPUs get comparable results, but RDNA2 GPUs are behind.

I bought an RX 6800XT to do some AI work because of the 16GB VRAM, and while the VRAM allows me to do stuff that my 6GB RTX 2060 wasn't able to, on the performance side it's actually a downgrade in many respects.

But the main issue is software support. To get acceptable performance you need to use ROCm, which is Linux only. There was some Windows release of ROCm a few weeks ago, but I am not sure how usable it is and none of the libraries have picked up on it yet.

Even with Linux installed, most frameworks still assume CUDA and it's an effort to get them to use ROCm. For some tools all it takes is uninstalling PyTorch or Tensorflow and installing a special ROCm-enabled version of those libraries. Sometimes that was enough, sometimes it wasn't. Sometimes the project uses some auxiliary library like bitsandbytes which doesn't have an official ROCm fork, so you have to use unofficial ones (which you have to compile manually, and whose Makefiles quickly get out of date). Which, once again, may or may not work.
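For reference, the PyTorch swap looks roughly like this (the rocm version tag below is just an example; check AMD's docs for the wheel matching your card and driver):

    # Installed beforehand with something like:
    #   pip install torch --index-url https://download.pytorch.org/whl/rocm5.6
    import torch

    # ROCm builds still report through torch.cuda.*, so True means the HIP
    # backend found the GPU; torch.version.hip is None on CUDA-only builds.
    print(torch.cuda.is_available(), torch.version.hip)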

I have things set up for stable diffusion and text generation (oobabooga), and things mostly work, but sometimes they still don't. For example I can train stable diffusion embeddings and dreambooth checkpoints, but for some reason it crashes when I attempt to train a LORA. And I don't have enough expertise to debug it myself.

For things like video encoding, most tools also assume CUDA will be present, so you're stuck with CPU encoding, which takes forever. If you're lucky, some tools may have a DirectML backend, which kinda works under Windows for AMD, but its performance is usually far behind a ROCm implementation.

xwdv · 2 years ago
Hard times make hard companies. Hard companies make good times. Good times make soft companies. Soft companies make hard times.
mikestew · 2 years ago
Whelp, I guess those September NVDA call options I sold are going to get exercised. Who woulda guessed after the crypto fallout that "AI" would come along and bump the price back up.

Record revenues, and a dividend of $0.04 on a $450 stock? That's not even worth the paperwork. For example, if you bought 100 shares, that's $45K. From that, around September $4 will show up in your account, which you have to pay taxes on. So $3 or so net on a $45,000 investment. Sure, there were stock buybacks, but why keep the token dividend around?

thomas8787 · 2 years ago
Jensen is one of the largest shareholders. With over 80 million shares that's an over 3 million dollar dividend for him.
epolanski · 2 years ago
Wait, 80M shares? He's worth ~$40B then. Not bad.
_zoltan_ · 2 years ago
I sold 600C for this Friday an hour or so before earnings. Free money with 168% IV.
euazOn · 2 years ago
It is free money, until it isn’t!
yieldcrv · 2 years ago
collecting pennies in front of a steamroller
theogravity · 2 years ago
I sold the Fri 560C as a covered call. The high IV was free money for little risk.
shpongled · 2 years ago
I sold the 590 :)
oatmeal1 · 2 years ago
It's probably good for the long term share price if they can say in 20 years they've had a dividend for 20 years, even if that dividend was actually measly.
AnotherGoodName · 2 years ago
There's a $25billion buyback this coming quarter. That's how they distribute profits these days.
elbasti · 2 years ago
A stock buyback trades certainty of wealth transfer (you're not sure the price will go up, or by how much!) for flexibility in when the investor takes the gains for purposes of fiscal planning.
kinghajj · 2 years ago
Should have sold a call credit spread instead!

For large shareholders, the dividend would still be worthwhile. From what I could find, Jensen has 1.3 million shares, so he'd receive over $200k in dividends this year. You might think that's chump change, but another source lists his salary at just under $1m; another 20% bump in liquid income is nothing to sneeze at.

haldujai · 2 years ago
> Should have sold a call credit spread instead!

Why?

> From what I could find, Jensen has 1.3 million shares, so he'd receive over $200k in dividends this year. You might think that's chump change, but another source lists his salary at just under $1m; another 20% bump in liquid income is nothing to sneeze at.

Jensen Huang is worth $42 billion and has been a billionaire for probably a decade or so now? Any CEO with that net worth would use stock-secured loans/LOCs for liquidity. 200k is very much chump change.

mikestew · 2 years ago
> Should have sold a call credit spread instead!

I'll get right on that...after I go look up what that means. :-) I'm but a simple options trader who sells calls to unload stock I didn't want anymore anyway, and the premium is the icing on that cake. Left some money on the table this time, but I otherwise would have just sold the shares outright, and I did make some bank regardless.

Gonna be missing that sweet, sweet $0.04 dividend, though.

Vvector · 2 years ago
The stock is up 9% or $45/share after hours. Jensen just made $58 million. $200k doesn't pay his dry cleaning bill.
loeg · 2 years ago
This benefit is basically only to large shareholders who can't sell stock. Which might be insiders like Jensen and... anyone else? Everyone else can just sell, like, 0.0001% of their stock or whatever.
catchnear4321 · 2 years ago
many times what a lot of people make in a year is nothing to sneeze at.

especially when it is awarded for merely having a stack of papers.

gen220 · 2 years ago
How durable do we think their Revenue is?

To remind us all, they're selling capitalized assets, not contracts or services.

Is the marginal demand for GPU chips over the next 3 years enough to sustain (or grow?) current revenues and keep this valuation afloat? To me, it feels like a comparatively fragile situation to find themselves in, to convince the world of 2025 that they need even more chips, unless "everybody needs to train their own LLM" is a secular trend.

I'm not sure if investors fully appreciate the nuances of this boom, or if I'm not fully appreciating how many GPUs "need" to be held by different companies (and sovereign entities, if you've read the headlines in the last couple weeks) to train LLMs in the coming decade.

paulmd · 2 years ago
it will be sticky as long as there's a cambrian explosion of AI innovations happening. NVIDIA built the best swiss-army-knife for handling GPGPU problems in general, spent 15 years building the ecosystem and adoption around it, and then tailored it to AI specifically.

Once the tech settles down a bit, Google and Amazon and others can absolutely snipe the revenue at a lower cost, just like they did with the previous TPUs/gravitons. But then some new innovation comes out that the ASICs (or, ARM+accelerator) don't do, and everyone's back to using NVIDIA because it just works.

AMD potentially has a swiss-army knife too, but, they also have a crap software stack that segfaults just running demos in the supported OS/ROCm configurations, and a runtime with a lot of paper features and feature gaps. And NVIDIA's just works and has a massive ecosystem of libraries and tools available. And moreover they just have a mindshare advantage. Innovation happens on NVIDIA's platform (because NVIDIA spent billions of dollars building the ecosystem to make sure it happens on their platform). And it actually does just work and has a massive codebase etc. Sure it's a cage but it's got golden bars and room service.

https://github.com/RadeonOpenCompute/ROCm/issues/2198

So I guess I'd say it's sticky until the technology settles. Steady-state, I think competitors will capture a lot of that revenue. But during the periods of innovation everyone flocks back to NVIDIA. AMD could maybe break that trend but they'll have to actually do the work first, they have tried the "do nothing and let the community write it" strategy for the last 15 years and it hasn't worked. You gotta get the community to the starting line, at least. Writing a software ecosystem is one thing, writing runtime/drivers is another.

gmm1990 · 2 years ago
It seems pretty similar to Tesla’s valuation with a product that won’t be as sticky as electric cars
redox99 · 2 years ago
They are not similar at all. Tesla P/E is 67, Nvidia's P/E is 244.
xiphias2 · 2 years ago
I think it's more interesting to see how undervalued AMD is instead of focusing on NVIDIA which seems reasonably priced.

AI is not showing any sign of slowing down so far.

HDThoreaun · 2 years ago
Up more than 10% after hours compared to close yesterday. I really thought NVDA had hit its ceiling at $1+ trillion, apparently not. Really does feel like a huge opportunity for Intel to me. They have the fab capacity to pump out at least reasonably competitive GPUs if they can figure out the software side of things.

P/E still above 50 even after the AI craze 9x'd EPS this quarter. Still hard for me to see how that valuation ever makes sense, but what do I know.

UncleOxidant · 2 years ago
Intel doesn't seem to be able to execute. It's not just pumping out GPUs - for AI you need drivers, and the equivalent of CUDA and all the various libraries built on CUDA like cuDNN. They do have oneAPI, but it hasn't caught on like CUDA in that space. It's kind of too bad, since oneAPI is open and CUDA is not.
HDThoreaun · 2 years ago
Right, but the market is saying that a dominant GPU business is worth more than a trillion dollars. Just hard for me to believe that they can't get the business off the ground with that kind of money on the table. Can't they just hire all of nvidia's developers and pay them 5x as much?
highwaylights · 2 years ago
I can really see Intel figuring this out. A lot of people on HN are talking about Intel as an also-ran, just like they spoke about AMD before Zen.

Raptor Lake is at 7nm and incredibly competitive there (~2700 single core on geekbench, taken with a pinch of salt). They’re still planning on being on 1.8nm/18A within 2 years, while at the same time ramping up their GPU efforts (albeit using TSMC for 4nm). Nvidia is very much in the lead, but this is just the beginning.

tldr; I ain’t hear no bell.

Mountain_Skies · 2 years ago
Over the past decade Intel seems to have become more interested in social causes than in technology, maybe with a side of government backrubbing to keep some income flowing.
bemmu · 2 years ago
If you melted the moon to turn it into GPUs, we’d still find use for that compute.

If they can hold the market and not implode for internal reasons, not sure this has any upper limit?

smolder · 2 years ago
I wonder if this is what Nancy Pelosi's husband actually bought all his NVDA in preparation for, way back then. Their prices have been shit for consumers since then, but it's still been good for them.
NickC25 · 2 years ago
Absolutely monster numbers. The aftermarket trading is up over 8% as of right now, roughly $41 USD to approximately $513 a share. Insane.

Anyone who is a lot more versed in company valuation methodology see this as being near peak value, or does Nvidia have a lot more room to run?

gen220 · 2 years ago
The fundamental model of company valuation is that a company's equity is worth the sum of the company's future cash flows, typically discounted by some rate so that a cash flow predicted 5 years out counts for less than the same cash flow today, with the intent of pricing in uncertainty.
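A toy version of that model, with entirely made-up numbers, just to show the mechanics:

    # Toy discounted-cash-flow sketch; the figures are illustrative, not estimates.
    def present_value(cash_flows, discount_rate):
        """Sum each future year's cash flow, discounted back to today."""
        return sum(cf / (1 + discount_rate) ** year
                   for year, cf in enumerate(cash_flows, start=1))

    # Hypothetical: $10B of free cash flow growing 30%/yr for 5 years, 10% discount
    flows = [10e9 * 1.30 ** y for y in range(5)]
    print(f"${present_value(flows, 0.10) / 1e9:.0f}B")   # ~$65B of present value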

Nvidia has had a watershed moment of revenue growth, because they're the only significant player in the space of top-of-the-line GPUs for training LLMs.

Their current valuation bakes in the assumption that not only is this unprecedented level of pricing power durable, but that top line revenue will also grow significantly over the next few years.

Reminder that Nvidia is selling chips, mostly to datacenters, whose purchasing habits are primarily driven by "customer" demand (where customers are, in this case, tech companies wanting to train neural nets).

So a bet that they go up from here is a bet that datacenters will want at least as many chips in the next N consecutive quarters, and will be willing to pay the current premium that Nvidia is charging today. The corollary is that you're betting that "customer demand" for ever-improving GPUs is climbing an incredibly steep, secular (read: permanent, non-cyclical) trajectory, such that it outpaces the assumptions baked into these already-lofty expectations.

I think I betray my own opinion here, but the price is what it is. :)

saberience · 2 years ago
So you think demand for AI driven software and tools is going to stop growing over the next few years? It’s a big call. I think it’s just the beginning personally but time will tell.
mholm · 2 years ago
Nvidia is the pickaxe seller in a gold rush. Their valuation is very much tied to how big AI grows in the next several years, and how quickly competitors can arise. I could easily see them continuing to go up from here, especially if AI keeps on expanding utility instead of leveling off as some fear.
creer · 2 years ago
Very much so. Nvidia was lucky with the perfect sequence of video and compute farms, then cryptocurrency and model training, and now the model training direction is flowering into application (hopes) left and right. But they did great with their luck. And now they are yet again in the position of selling tools to soak up everyone else's capital investment power. They are still now (and yet again) at the perfect spot for a giant new market.

But that's still a high valuation - that is even if that new market grows to the sky, it's not clear that it can justify that new valuation.

Is Nvidia failing anytime soon? No. Is it the best investment you can find? That's harder to tell which is why the complaint of "very high valuation already". It's not in doubt that it's a great business. It's less easy to decide whether it's a great investment to get in right now.

But everything is relative: a P/E around 40-60 is NOT historically crazy high for the verge of a giant new market. And yet it is very high for a trillion-dollar market cap. This is exciting: a trillion-dollar market cap at the verge of a giant new market!

squeaky-clean · 2 years ago
It's pretty overpriced already if you're looking at the fundamentals, and has been for a while. But fundamentals haven't really mattered in tech stocks for a long time.

If you want the responsible advice, it's overpriced. If you want my personal advice, well I bought more yesterday afternoon.

AnotherGoodName · 2 years ago
I don't see it. It would have been overpriced if not for this insane report. 854% YoY earnings growth. A PE that's now below 50 (even taking into account the $500 share price).

It's not overpriced anymore. In fact, if there's anything left in the tank or this is at all sustained, it's cheap.

what_ever · 2 years ago
Can you add a little more color on which fundamentals make it overpriced? Have you looked at their QoQ growth (not even YoY) for the last few quarters? I would say the stock price is just trying to keep up with the numbers they are putting out.
mikeweiss · 2 years ago
In my opinion it's likely mostly pull forward demand. Companies are racing to buy as many chips as possible and hoard them.

I already saw a few posts here on HN from companies that threw down insane amounts of $$ on H100s and are now looking to rent out their excess capacity. I'm guessing we'll be seeing a lot more posts like that soon.

mortehu · 2 years ago
Looking to rent out, or fully booked for the next year and looking to buy more GPUs?
epolanski · 2 years ago
This incredible growth was already priced in at $250.

Now it's just crazy.

tmn · 2 years ago
Valuation fundamentals don't justify current prices. That said it could easily go higher (much higher). Passive investing has created a constant bid that has significantly distorted price discovery compared to pre passive era.

csomar · 2 years ago
I would not predict a peak given that I didn't predict this rise (which most people didn't?). A new crypto that requires GPU mining, a continued AI boom, GPUs being used for something else?, etc. Their price could go infinitely up.
reilly3000 · 2 years ago
It’s basically a meme stock now. I don’t think anyone should be surprised by wide swings and irrational pricing going forward into the next few months.
vsareto · 2 years ago
I don't think the market leader for graphics cards -- a technically complex product compared to a bunch of brick stores selling video games -- is what you can consider a meme stock
pb7 · 2 years ago
What makes it a meme stock? It's printing money from an industry that is only starting. This isn't crypto nonsense.
danielmarkbruce · 2 years ago
Top line growing 100% a year, faster recently... Doesn't take long for $50 billion p.a. to turn into $1 trillion p.a. at that rate...
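Back of the envelope, assuming that 100%/yr growth actually held (a big assumption):

    import math

    # Doublings needed for $50B of annual revenue to become $1T
    print(math.log2(1_000e9 / 50e9))   # ~4.3, i.e. roughly four to five years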
rvz · 2 years ago
> The aftermarket trading is up over 8% as of right now, roughly $41 USD to approximately $513 a share. Insane.

8% is close to nothing in stocks. Biotech stocks go up and down more than that without earnings announcements.

> Anyone who is a lot more versed in company valuation methodology see this as being near peak value, or does Nvidia have a lot more room to run?

As long as fine-tuning, training, or even using these models is inefficient without these GPUs, and there are no efficient alternatives, Nvidia will remain unchallenged.

EDIT: It is true like it or not AI bros. There are too many to list. For example, just yesterday:

Fulcrum Therapeutics, Inc. (FULC) 38% up.

China SXT Pharmaceuticals (CM:SXTC) down 25%.

Regencell Bioscience Holdings (RGC) 28% up.

NanoViricides (NNVC) up 20%.

Armata Pharmaceuticals (ARMP) down 23%.

[0] https://simplywall.st/stocks/us/pharmaceuticals-biotech

pb7 · 2 years ago
Biotechs are lottery tickets, not stocks. You're just gambling on binary results.