If each Furiosa RNGD Gen 2 card costs $10k while an RTX 5090 costs $2k, and the RTX 5090 has better performance for LLMs, you have to be mad stupid, have a personal grudge against Nvidia, or just want to burn cash for no good reason to fill your data center racks with Furiosa cards.
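To make the perf-per-dollar point concrete, here's a minimal sketch. The prices are the ones quoted above; the throughput numbers are placeholders I made up, since no actual benchmarks are cited here.

```python
# Rough perf-per-dollar comparison. Prices are from the comment above;
# tokens/sec figures are hypothetical placeholders, not measurements.
cards = {
    "Furiosa RNGD Gen 2": {"price_usd": 10_000, "tokens_per_sec": 1_000},  # assumed
    "RTX 5090":           {"price_usd": 2_000,  "tokens_per_sec": 1_200},  # assumed
}

for name, card in cards.items():
    perf_per_dollar = card["tokens_per_sec"] / card["price_usd"]
    print(f"{name}: {perf_per_dollar:.3f} tokens/sec per dollar")

# At these placeholder numbers the RTX 5090 wins ~6x on perf-per-dollar:
# the 5x price gap alone, widened further if the cheaper card is also faster.
```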
The value of their company is going to diminish, and their next acquisition offer won't go over $1.5 billion. It will actually be less than $800 million, since every year Nvidia, Intel, and a crowd of AI hardware startups introduce better, faster cards.
If Furiosa cards magically became cheaper than Nvidia's comparable hardware, Furiosa might be worth a quarter billion dollars. I highly doubt this will ever happen, because building AI compute on cutting-edge lithography is hella expensive and involves heavy politics.
The story with Intel in comparable periods was usually that AMD or Cyrix or ARM or Apple or someone else would come along with a new architecture that was a clear generational jump past Intel's, and, most importantly, one that seemed to break the thermal and power ceilings of the current Intel generation (at which point Intel would typically fire its chip design group, hire everyone from AMD or whoever, and come out with Core or whatever). Nvidia effectively has no such competition, or hasn't had any: nobody has actually broken the CUDA moat, so neither Intel nor AMD nor anyone else is really competing for the datacenter space, and Nvidia has faced no real competitive pressure against things like multi-kilowatt power draws on the Blackwells.
The reason this matters is that LLMs are incredibly nifty, often useful tools that are not AGI and also seem to be hitting a scaling wall, and the only way to make the economics of, e.g., a Blackwell-powered datacenter work is to assume the entire economy is going to run on it, as opposed to powering some useful tools and some improved interfaces. Otherwise the investment numbers just don't make sense: the gap between the real but limited value LLMs add, as we actually see them used on the ground, and the full cost of providing that service from a brand-new, single-purpose "AI datacenter" is just too great.
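As a back-of-envelope illustration of that gap, here's a sketch of the break-even floor for a single accelerator. Every number in it is an assumption for illustration (card price, depreciation window, power draw, electricity rate), not a real quote.

```python
# Back-of-envelope break-even for one datacenter accelerator.
# All figures below are illustrative assumptions, not real quotes.
gpu_price_usd = 40_000          # assumed card price
useful_life_years = 4           # assumed depreciation window before obsolescence
power_kw = 1.2                  # assumed sustained draw (multi-kilowatt class)
electricity_usd_per_kwh = 0.10  # assumed industrial power rate

hours = useful_life_years * 365 * 24
capex_per_hour = gpu_price_usd / hours
power_per_hour = power_kw * electricity_usd_per_kwh

# Revenue per GPU-hour has to clear this floor before you even count
# cooling, networking, staff, or the building itself.
print(f"capex per GPU-hour: ${capex_per_hour:.2f}")
print(f"power per GPU-hour: ${power_per_hour:.2f}")
print(f"break-even floor:   ${capex_per_hour + power_per_hour:.2f}/GPU-hour")
```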
So this is a press release, but any time I see something that looks like an actual new hardware architecture for inference, and especially one that doesn't require building a new building or solving nuclear fusion, I'll take it as a good sign. I like LLMs, I've gotten a lot of value out of them, but nothing about the industry's finances adds up right now.