theincredulousk · 7 months ago
This has been developing for a while... The big players have basically been competing for allocations of a set production, so NVIDIA negotiated into the allocations that some % of the compute capacity they "sell" them is reserved and exclusively leased back to NVIDIA.

So now NVIDIA has a whole bunch of cloud infrastructure hosted by the usual suspects that they can use for the same type of business the usual suspects do.

well played tbh

xhkkffbf · 7 months ago
I hate to be cynical, but I've seen such leaseback schemes used to inflate sales. Is that the case here? Or is there enough demand or legit usage to justify this kind of arrangement?
kurthr · 7 months ago
It's just another kind of leverage. There's no problem, until there is, and then it's bigger than it would have been. Once all of their sales are leasebacks, you'll know it's about to go boom (of course they won't announce that in their reports).
justahuman74 · 7 months ago
I can't see the cloud providers being happy about this, it whitelabels away their branding and customer experience flows.

It puts nvidia on both the vendor and customer side of the relationship, which seems odd

ketzo · 7 months ago
Well, what are they gonna do about it?

Nvidia has the most desirable chips in the world, and their insane prices reflect that. Every hyperscaler is already massively incentivized to build its own chips and find some way to take Nvidia down a peg in the value chain.

Everyone in the world who can is already coming for Nvidia’s turf. No reason they can’t repay the favor.

And beyond just margin-taking, Nvidia’s true moat is the CUDA ecosystem. Given that, it’s hugely beneficial to them to make it as easy as possible for every developer in the world to build stuff on top of Nvidia chips — so they never even think about looking elsewhere.

Xevion · 7 months ago
While I don't dispute that they're objectively the most desirable at the current moment - I do think your comment implies that they deserve it, or that people WANT Nvidia to be the best.

It almost sounds like you're cheering on Nvidia, framing it as "everyone else trying to reduce the value of Nvidia", when in fact they have a long, long history of closed-source drivers and proprietary, patented, cost-inflated technology that would be identical if not inferior to the alternatives, were it not for their market share and vendor lock-in strategies.

"Well, what are they gonna do about it?" When dealing with a bully, you go find friends. They're going to fund other chip manufacturers and push for diversity, fund better drivers and compatibility. That's the best possible future anyone could hope for.

chii · 7 months ago
> Nvidia’s true moat is the CUDA ecosystem.

it is true, but also not. Nvidia is certainly producing a chip that nobody else can replicate (unless they're the likes of Google, and even they aren't interested in doing so).

The CUDA moat is the same type of moat as Intel's x86 instruction set. Plenty of existing programs and software stacks have been written against it, and the cost to migrate away is high. These LLM pipelines are similar, and even more costly to migrate away from.

But because LLM is still immature right now (it's only been approx. 3 yrs!), there's still room to move the instruction set. And middleware libraries can help (pytorch, for example, has more than just the CUDA backend, even if they're a bit less mature).
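To make the middleware point concrete, here is a hypothetical sketch (all names and backends invented for illustration, not real PyTorch internals) of how a library-level dispatch layer hides vendor-specific kernels behind one API:

```python
# Hypothetical sketch of backend dispatch in a middleware library.
# Application code targets matmul(); the vendor kernel is a lookup.

def cuda_matmul(a, b):
    # Stand-in for a CUDA kernel launch; plain Python here.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def rocm_matmul(a, b):
    # A second backend honoring the same contract.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

BACKENDS = {"cuda": cuda_matmul, "rocm": rocm_matmul}

def matmul(a, b, backend="cuda"):
    # Application code never touches the backend directly.
    return BACKENDS[backend](a, b)
```

Swapping `backend="cuda"` for `backend="rocm"` then becomes a configuration change rather than a rewrite, which is exactly how a middleware layer erodes an instruction-set-style moat.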

The real moat that Nvidia has is its hardware capability; CUDA is the moat in disguise.

rwmj · 7 months ago
[Genuine question!] Does NVidia have patents etc on CUDA that prevent a competitor from reverse engineering and producing a compatible clone, or is it just that competitors are incompetent (hey AMD)? Or is it that the task is enormous and rapidly changing, like you have to be bug-for-bug compatible with a large, ill-documented API (the Microsoft Windows moat)?
imtringued · 7 months ago
Google has already mostly exited the CUDA and Nvidia ecosystem. They only offer it for their customers at this point.
mattlondon · 7 months ago
Ask Google: TPUs.
roenxi · 7 months ago
> Well, what are they gonna do about it?

Ironically, the most effective thing they can do is probably haul up the AMD rep and yell at them.

alexgartrell · 7 months ago
The cloud business model is to use scale and customer ownership to crush hardware margins to dust. They’re also building their own accelerators to try to cut Nvidia out altogether.
cbg0 · 7 months ago
I've always felt that the business model is nickel & diming for things like storage/bandwidth and locking in customers with value-add black box services that you can't easily replace with open source solutions.

Just took a random server: https://instances.vantage.sh/aws/ec2/m5d.8xlarge?duration=mo... - to get a decent price on it you need to commit to three years at $570 per month (no storage or bandwidth included). Over the course of 3 years that's $20,520 for a server that's ~$10K to buy outright, and even with colo costs over the same time frame you'll spend a lot less, so not exactly crushing those margins to dust.
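A quick sanity check of that arithmetic (the figures are the commenter's, not current AWS pricing):

```python
# Commenter's figures: 3-year reserved commitment vs buying the box.
monthly_reserved = 570      # USD/month, 3-year commitment, no storage/bandwidth
months = 36
cloud_total = monthly_reserved * months   # 20,520 USD over the term

buy_outright = 10_000                     # rough purchase price, same hardware
premium = cloud_total / buy_outright      # roughly 2x the purchase price
```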

shrubble · 7 months ago
Cloud is propped up by the tax laws.

Cloud bills can be written off in the month in which they are paid; while buying hardware has to be depreciated over years.
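Illustrative numbers only, since tax treatment is jurisdiction-specific and rules like Section 179 or bonus depreciation can let hardware be expensed much faster. Under plain 5-year straight-line depreciation, the first-year deduction gap looks like this:

```python
spend = 100_000   # hypothetical annual spend, USD

# Cloud bills: deductible in the period in which they're paid.
cloud_year1_deduction = spend

# Purchased hardware: straight-line depreciation over a 5-year useful life.
useful_life_years = 5
hardware_year1_deduction = spend / useful_life_years   # 20,000/year
```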

Hilift · 7 months ago
> DGX Cloud Lepton, is designed to link artificial intelligence developers with Nvidia’s network of cloud providers, which provide access to its graphics processing units, or GPUs. Some of Nvidia’s cloud provider partners include CoreWeave, Lambda and Crusoe.

> "Nvidia DGX Cloud Lepton connects our network of global GPU cloud providers with AI developers," said Jensen Huang, chief executive of Nvidia in a statement. The news was announced at the Computex conference in Taiwan.

Sounds like a preferred developer resource. The target audience isn't the usual cro-mag that wants to run LLMs for food.

xbmcuser · 7 months ago
This is what Nvidia has always done: creeping into the margins of its partners and taking over. All of its GPU board partners will tell the same story.

neximo64 · 7 months ago
Why? If the GPUs are being used and they want more, it makes that easy. Also, it's opt-in.
moralestapia · 7 months ago
Cool, they're also free to start making their own GPUs ...
seydor · 7 months ago
Google makes its own chips too
londons_explore · 7 months ago
TPUs have serious compatibility problems with a good chunk of the ML ecosystem.

That alone means many users will want to use Nvidia hardware even at a decent price premium when the alternative is an extra few months of engineering time in a very fast moving market.

AlotOfReading · 7 months ago
I can see the value of the product, but this seems like an incredibly dangerous offering for smaller clouds. Nvidia has significant leverage to drive prices down to commodity and keep any margin for themselves, while pushing most of the risk onto their partners.
alexgartrell · 7 months ago
I’d imagine that these clouds are probably being incentivized to participate

bgwalter · 7 months ago
Well, Sun Microsystems launched a cloud shortly before being acquired by Oracle:

https://en.wikipedia.org/wiki/Sun_Cloud

Microsoft's Azure is reportedly a loss leader:

https://www.cnbc.com/2022/12/21/google-leaked-doc-microsoft-...

But don't let that stop you from going outside your core competency.

londons_explore · 7 months ago
Isn't that rather stepping on the toes of your biggest clients - Microsoft, AWS, Google Cloud, etc.?
noosphr · 7 months ago
All those customers are also building their own chips.

Having been a partner for Microsoft Research, I've also had them try to patent the stuff we were providing them.

In short with megacorps the only winning move is to fuck them faster than they can fuck you.

mi_lk · 7 months ago
That’s a beautiful conclusion
aranchelk · 7 months ago
Customers of those services have a lot of considerations; as long as Nvidia doesn't undercut prices too much, I think not.

Getting more developers creating more models that can then be run on those services will likely expand business for all of those vendors.

netfortius · 7 months ago
Isn't their CEO the guy who called Trump re-industrialisation policies 'visionary'? [1] Maybe that's where the idea on cloud (as all others, nowadays) is coming from?!? ;->

[1] https://www.reuters.com/business/aerospace-defense/nvidia-ce...

zombiwoof · 7 months ago
Bye AMD
Xevion · 7 months ago
Dumb.

No cloud provider is gonna see further price gouging from the company with the largest market share and think "Yeah, let's disconnect from the only remaining competitor, make sure every nail is in our coffin".

It's probably the opposite. I bet this move will lead to AMD's increased funding towards compatibility and TPU development, in the hopes that they'll become a serious competitor to Nvidia.

chii · 7 months ago
> AMD's increased funding towards compatibility and TPU development

No investor is going to bet on the second-place horse, because they would have had to place that bet _before_ Nvidia became the winning powerhouse it is today!

The fact is, AMD's hardware capability is just insufficient to compete, and they're not getting there fast enough. Unlike the games industry, there aren't a lot of low-budget buyers here.