lanthissa · 6 months ago
Can someone help me understand how the following can all be true:

1. TPUs are a serious competitor to Nvidia chips.

2. Chip makers with the best chips are valued at 1-3.5T.

3. Google's market cap is 2T.

4. It is correct for Google not to sell TPUs.

I have heard the whole "it's better to rent them" thing, but if they're actually good, selling them is almost as good a business as every other part of the company.

Velorivox · 6 months ago
Wall Street undervalued Google even on day one (the IPO). Bezos has said that some of the times the stock was doing the worst were when the company was doing great.

So, to help you understand how they can be true: market cap is governed by something other than what a business is worth.

As an aside, here's a fun article that embarrasses Wall Street. [0]

[0] https://www.nbcnews.com/id/wbna15536386

YZF · 6 months ago
I remember sitting around the lunch table at a tech company when Google IPO'd, and none of us understood the IPO valuation. I didn't buy any stock. I didn't get "cloud" either. Sometimes a new business is essentially created out of thin air. Google's and Amazon's valuations did not increase only due to their own efforts; they also increased because the broader market shifted.

I guess that means don't take investment advice from me ;) I've done OK buying indices though.

ethbr1 · 6 months ago
The fact that Wall Street sometimes misses revolutions and misvalues stocks does not mean that all perceived misvalued stocks are revolutionary.

Plenty of companies have screwed up execution, and the market has correctly noticed and penalized them for that.

ghoshbinayak · 6 months ago
Reading through this news article is hilarious.

P.S. I did not have access to the internet in 2006, so I guess the skepticism was normal at the time.

mrlongroots · 6 months ago
Hah thanks for sharing this, fun read!
smokel · 6 months ago
Selling them and supporting them in the field requires quite a lot of infrastructure you'd have to build. Why go through all that trouble if you already make higher margins renting them out?

Also, if they are so good, it's best to not level the playing field by sharing that with your competitors.

Also "chip makers with the best chips" == Nvidia, there aren't many others. And Alphabet does more than just produce TPUs.

matt-p · 6 months ago
Does Google Cloud offer them on an "AWS Outposts"-style model? I think that plus cloud access is probably the easiest and 'best' way to offer them. The last thing you need to be dealing with is Supermicro, Gigabyte, etc. building a box for them and so on - I can definitely understand not selling the raw chip.
dismalaf · 6 months ago
Nvidia is selling a ton of chips on hype.

Google is saving a ton of money by making TPUs, which will pay off in the future when AI is better monetized, but so far no one is directly making a massive profit from foundation models. It's a long term play.

Also, I'd argue Nvidia is massively overvalued.

CalChris · 6 months ago
Common in gold rushes, but then they are selling chips. Are they overvalued? Maybe. Are they profitable (something WeWork and Uber aren't)? Yes, quite.
michaelt · 6 months ago
Nvidia, who make AI chips with kinda good software support, and who have sales reflecting that, is worth 3.5T.

Google, who make AI chips with barely-adequate software, is worth 2.0T.

AMD, who also make AI chips with barely-adequate software, is worth 0.2T.

Google made a few decisions with TPUs that might have made business sense at the time, but with hindsight haven't helped adoption. They closely bound TPUs to their 'TensorFlow 1' framework (which was kinda hard to use), then released 'TensorFlow 2', which was incompatible enough that it was just as easy to switch to PyTorch, which has TPU support in theory but not in practice.
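
For reference, the "in theory" path is roughly this, via the torch_xla package (a sketch only; it assumes a TPU host or Colab TPU runtime with torch_xla installed, and is not something I'd call smooth in practice):

    # Rough sketch: PyTorch on TPU via torch_xla (assumes torch_xla is installed on a TPU host)
    import torch
    import torch_xla.core.xla_model as xm

    device = xm.xla_device()              # grabs the attached TPU device
    a = torch.randn(128, 128, device=device)
    b = torch.randn(128, 128, device=device)
    c = a @ b                             # staged lazily, compiled by XLA
    xm.mark_step()                        # flush the pending graph to the TPU
    print(c.cpu())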

They also decided TPUs would be Google Cloud only. Might make sense, if they need water cooling or they have special power requirements. But it turns out the sort of big corporations that have multi-cloud setups and a workload where a 1.5x improvement in performance-per-dollar is worth pursuing aren't big open source contributors. And understandably, the academics and enthusiasts who are giving their time away for free aren't eager to pay Google for the privilege.

Perhaps Google's market cap already reflects the value of being a second-place AI chipmaker?

que-encrypt · 6 months ago
JAX is very much working (and in my view better, aside from the lack of community) software support, especially if you use their images (which they do).

> TensorFlow

They have been using jax/flax/etc. rather than TensorFlow for a while now. They don't really use PyTorch, from what I see on the outside from their research work. For instance, they released siglip/siglip2 with flax linen: https://github.com/google-research/big_vision

TPUs very much have software support, hence why SSI etc use TPUs.

P.S. Google gives access to their TPUs for free at https://sites.research.google/trc/about/, which I've used for the past 6 months now.

jeffbee · 6 months ago
Like other Google internal technologies, the amount of custom junk you'd need to support to use a TPU is pretty extreme, and the utility of the thing without the custom junk is questionable. You might as well ask why they aren't marketing their video compression cards.
Workaccount2 · 6 months ago
Ironically, despite Google ultimately being an advertising company, it is the absolute worst company at advertising itself.
roughly · 6 months ago
Aside from the specifics of Nvidia vs Google, one thing to note regarding company valuations is that not all parts of the company are necessarily additive. As an example (read: a thing I’m making up), consider something like Netflix vs Blockbuster back in the early days - once Blockbuster started to also ship DVDs, you’d think it’d obviously be worth more than Netflix, because they’ve got the entire retail operation as well, but that presumes the retail operation is actually a long-term asset. If Blockbuster has a bunch of financial obligations relating to the retail business (leases, long-term agreements with shippers and suppliers, etc), it can very quickly wind up that the retail business is a substantial drag on Blockbuster’s valuation, as opposed to something that makes it more valuable.
matt-p · 6 months ago
AMD and even people like Huawei also make somewhat acceptable chips, but using them is a bit of a nightmare. Is it a similar thing here? Using TPUs is more difficult, they only exist inside Google Cloud, etc.
radialstub · 6 months ago
I believe Broadcom is also very involved in the making of the TPUs and the networking infrastructure, and they are valued at 1.2T currently. Maybe consider the combined value of Broadcom and Google.
lftl · 6 months ago
Wouldn't you also need to add TSMC to Nvidia's side in that case?
mft_ · 6 months ago
If they think they’ve got a competitive advantage vs. GPUs which benefits one of their core products, it would make sense to retain that competitive advantage for the long term, no?
Uehreka · 6 months ago
No. If they sell the TPUs for “what they’re worth”, they get to reap a portion of the benefit their competitors would get from them. There’s money they could be making that they aren’t.

Or rather, there would be if TPUs were that good in practice. From the other comments it sounds like TPUs are difficult to use for a lot of workloads, which probably leads to the real explanation: No one wants to use them as much as Google does, so selling them for a premium price as I mentioned above won’t get them many buyers.

epolanski · 6 months ago
> can someone help me understand how the following can be true

You're conflating price with intrinsic value with market analysis. All different things.

foota · 6 months ago
5. The efficient market hypothesis is true :-)
santaboom · 6 months ago
Good questions; below I attempt to respond to each point and then wrap it up. TLDR: even if the TPU is good (and it is good for Google), it wouldn't be "almost as good a business as every other part of their company", because the value add isn't FROM Google in the form of a good chip design (the TPU). Instead the value add is TO Google, in the form of specific compute that is cheap and fast, FROM relatively simple ASICs (TPU chips) stitched together into massively complex systems (TPU superpods).

If interested in further details:

1) TPUs are a serious competitor to Nvidia chips for Google's needs; per the article, they are not nearly as flexible as a GPU (dependence on precompiled workloads, high utilization of PEs in the systolic array). Thus, for broad ML market usage, they may not be competitive with Nvidia GPUs/racks/clusters.

2) Chip makers with the best chips are not valued at 1-3.5T; per other comments to the OC, only Nvidia and Broadcom are worth this much. These are not just "chip makers", they are (the best) "system makers", driving designs for the chips and interconnect required to go from a diced piece of silicon to a data center consuming MWs. This part is much harder; this is why Google (who design the TPU) still has to work with Broadcom to integrate their solution. Indeed, every hyperscaler is designing chips and software for their needs, but every hyperscaler works with companies like Broadcom or Marvell to actually create a complete, competitive system. Side note: Marvell has deals with Amazon, Microsoft, and Meta to mostly design these systems, yet they are worth "only" 66B. So, you can't just design chips to be valuable, you have to design systems. The complete systems have to be the best, wanted by everyone (Nvidia, Broadcom), in order to be in the trillions; otherwise you're in the billions (Marvell).

4) I see two problems with selling TPUs: customers and margins. If you want to sell someone a product, it needs to match their use case; currently it only matches Google's needs, so who are the customers? Maybe you want to capture the hyperscalers / big AI labs, whose use case is likely similar to Google's. If so, margins would have to be thin; otherwise they just work directly with Broadcom/Marvell (and they all do). If Google wants everyone currently using CUDA/Nvidia as a customer, then you massively change the purpose of the TPU and even of Google.

To wrap up: even if the TPU is good (and it is good for Google), it wouldn't be "almost as good a business as every other part of their company", because the value add isn't FROM Google in the form of a good chip design (the TPU). Instead the value add is TO Google, in the form of specific compute that is cheap and fast, FROM relatively simple ASICs (TPU chips) stitched together into massively complex systems (TPU superpods).

Sorry that got a bit long-winded, hope it's helpful!

throwaway31131 · 6 months ago
This also all assumes that there is excess foundry capacity in the world for Google to expand into, which is not obvious. One would need exceptionally good operations to compete here and that has never been Google's forte.

https://www.tomshardware.com/tech-industry/artificial-intell...

"Nvidia to consume 77% of wafers used for AI processors in 2025: Report...AWS, AMD, and Google lose wafer share."

rwmj · 6 months ago
Aren't Google's TPUs a bit like a research project with practical applications as a nice side effect?
silentsea90 · 6 months ago
All of Google's ML runs on TPUs, tied to billions of dollars in revenue. You make it sound like TPUs are a Google X startup that's going to get killed tomorrow.
hackernudes · 6 months ago
Why do you say that? They are on their seventh iteration of hardware, and even from the beginning (according to the article) they were designed to serve Google's AI needs.

My take is "sell access to TPUs on Google cloud" is the nice side effect.

wkat4242 · 6 months ago
I think you're thinking of the Coral ones, the only ones they've sold directly to the public.
surajrmal · 6 months ago
On what basis do you make that claim? It's incredibly misleading and wrong.
almostgotcaught · 6 months ago
> In essence, caches allow hardware to be flexible and adapt to a wide range of applications. This is a large reason why GPUs are very flexible hardware (note: compared to TPUs).

This is correct but mis-stated - it's not the caches themselves that cost energy, but the MMUs that automatically load/fetch/store to cache on "page faults". TPUs don't have MMUs, and furthermore they are a push architecture (as opposed to pull).

Neywiny · 6 months ago
What's not mentioned is a comparison vs FPGAs. You can have a systolic, fully pipelined system for any data processing, not just vectorized SIMD. Every primitive is able to work independently of everything else. For example, if you have 240 DSP slices (which is far from outrageous at low scale), a perfect design could use those as 240 cores at 100% throughput. No memory, caching, decoding, etc. overhead.
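
Back of the envelope, with made-up but plausible numbers (the clock rate and MACs-per-slice below are my assumptions, not vendor figures):

    # Peak throughput for a fully pipelined design: 240 DSP slices, all busy every cycle.
    dsp_slices = 240
    clock_hz = 400e6        # assumed clock rate
    macs_per_cycle = 1      # assume one multiply-accumulate per slice per cycle

    peak = dsp_slices * clock_hz * macs_per_cycle
    print(f"{peak / 1e9:.0f} GMAC/s at 100% utilization")   # ~96 GMAC/s under these assumptions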
adrian_b · 6 months ago
True, but FPGAs are suitable only for things that will not be produced in great numbers, because their cost and power consumption are many times higher than those of an ASIC.

For a company of the size of Google, the development costs for a custom TPU are quickly recovered.

Comparing a Google TPU with an FPGA is like comparing an injection-moulded part with a 3D-printed part.

Unfortunately, the difference in performance between FPGAs and ASICs has greatly increased in recent years, because FPGAs have remained stuck on relatively ancient CMOS manufacturing processes, which are much less efficient than the state-of-the-art CMOS manufacturing processes.

cpgxiii · 6 months ago
> True, but FPGAs are suitable only for things that will not be produced in great numbers, because their cost and power consumption are many times higher than those of an ASIC.

While common folk wisdom, this really isn't true. A surprising number of products ship with FPGAs inside, including ones designed to be "cheap". A great example of this is that Blackmagic, a brand known for being a "cheap" option in cinema/video gear, bases everything on Xilinx/AMD FPGAs (for some "software heavy" products they use the Xilinx/AMD Zynq line, which combines hard ARM cores with an FPGA). Pretty much every single machine vision camera on the market uses an FPGA for image processing as well. These aren't "one in every pocket" level products, but they are widely produced.

> Unfortunately, the difference in performance between FPGAs and ASICs has greatly increased in recent years, because the FPGAs have remain stuck on relatively ancient CMOS manufacturing processes

This isn't true either. At the high end, FPGAs are made on whatever the best process available is. Particularly in the top-end models that combine programmable fabric with hard elements, it would be insane not to produce them on the best process available. The big hindrance with FPGAs is that, almost by definition, the cell structures needed to provide programmability are inherently more complex and less efficient than the dedicated circuits of an ASIC. That often means a big hit to maximum clock rate, with resulting consequences for any serial computation being performed.

Neywiny · 6 months ago
When you can do an ASIC, yes, do an ASIC. But my point was that there was a lot of GPU comparison, and GPUs are also not ASICs relative to AI.
c-c-c-c-c · 6 months ago
FPGAs are not expensive when ordered in bulk; the volume prices you see on Mouser are way higher than the going rates.
RossBencina · 6 months ago
Can you suggest a good reference for understanding which algorithms map well onto the regular grid systolic arrays used by TPUs? The fine article says dense matmul and convolution are good, but is there anything else? Eigendecomposition? SVD? Matrix exponential? Solving Ax = b or AX = B? Cholesky?
musebox35 · 6 months ago
I think https://jax-ml.github.io/scaling-book/ is one of the best references to go through. It details how single device and distributed computations map to TPU hardware features. The emphasis is on mapping the transformer computations, both forwards and backwards, so requires some familiarity with how transformer networks are structured.
cdavid · 6 months ago
SVD/eigendecomposition will often boil down to many matmuls (e.g. when using Krylov-based methods such as Arnoldi, Krylov-Schur, etc.), so I would expect TPUs to work well there. GMRES, one method to solve Ax = b, is also based on the Arnoldi decomposition.
Straw · 6 months ago
You can do all of these in terms of matmul to some extent:

Solving AX=B can be done with Newton's method to invert A, which boils down to matmuls.
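
Something like this (a quick sketch of the Newton-Schulz iteration; the initialization and iteration count are just illustrative, and you'd want to watch conditioning):

    # Solve A X = B using only matmuls: Newton-Schulz iteration for A^-1.
    import jax.numpy as jnp
    from jax import random

    def newton_schulz_inverse(A, iters=30):
        n = A.shape[0]
        # Classic convergent initialization: X0 = A^T / (||A||_1 * ||A||_inf)
        X = A.T / (jnp.linalg.norm(A, 1) * jnp.linalg.norm(A, jnp.inf))
        I = jnp.eye(n, dtype=A.dtype)
        for _ in range(iters):
            X = X @ (2.0 * I - A @ X)     # two matmuls per step, quadratic convergence
        return X

    A = random.normal(random.PRNGKey(0), (128, 128)) + 128.0 * jnp.eye(128)  # well-conditioned
    B = random.normal(random.PRNGKey(1), (128, 16))
    X = newton_schulz_inverse(A) @ B
    print(jnp.max(jnp.abs(A @ X - B)))    # residual should be tiny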

Matrix exponential is normally done with matmuls - the scale-down, Taylor/Padé, and square-up approach.
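
e.g. a bare-bones version (Taylor instead of Padé, no error control, parameters picked for illustration - just to show it's all matmuls):

    # Matrix exponential by scaling down, a short Taylor series, then repeated squaring.
    import jax.numpy as jnp

    def expm_taylor(A, terms=12):
        # Scale A so ||A / 2^k|| is small enough for the truncated Taylor series.
        k = max(0, int(jnp.ceil(jnp.log2(jnp.linalg.norm(A, jnp.inf)))) + 1)
        A_small = A / (2.0 ** k)

        term = jnp.eye(A.shape[0], dtype=A.dtype)
        result = jnp.eye(A.shape[0], dtype=A.dtype)
        for i in range(1, terms):
            term = term @ A_small / i     # next Taylor term, one matmul each
            result = result + term

        for _ in range(k):                # square back up: exp(A) = exp(A / 2^k)^(2^k)
            result = result @ result
        return result

    A = jnp.array([[0.0, 1.0], [-1.0, 0.0]])   # expm(A) should be a rotation by 1 radian
    print(expm_taylor(A))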

Why do you need Cholesky? It's typically a means to an end, and when matmul is your primitive, you reach for it much less often.

Eigendecomposition is hard. If we limit ourselves to symmetric matrices, we could use a blocked Jacobi algorithm, where we run a non-matmul Jacobi on 128x128 off-diagonal blocks and then use the matmul unit to apply the rotations to the whole matrix - for large enough matrices, still bottlenecked on matmul.

SVD we can get from the polar decomposition, which has purely-matmul iterations, plus a symmetric eigendecomposition.

One does have to watch out for numerical stability and precision very carefully when doing all these!

RossBencina · 6 months ago
Cholesky for generating so-called sigma points in the Unscented Transformation.
WithinReason · 6 months ago
Anything that you can express as 128x128 (but ideally much larger) dense matrix multiplication and nothing else
serf · 6 months ago
Does that cooling channel have a NEMA stepper on it as a pump or metering valve? [0]

If so, wild. That seems like overkill.

[0]: https://henryhmko.github.io/posts/tpu/images/tpu_tray.png

fellowmartian · 6 months ago
definitely closed-loop, might even be a servo
mdaniel · 6 months ago
Related: OpenTPU: Open-Source Reimplementation of Google Tensor Processing Unit (TPU) - https://news.ycombinator.com/item?id=44111452 - May, 2025 (23 comments)
wkat4242 · 6 months ago
Haha, I thought this was about 3D printing with thermoplastic polyurethane. It's one of the harder materials to print, and it also took me some time to get my head around it.
frays · 6 months ago
How can someone have this level of knowledge about TPUs without working at Google?
ipsum2 · 6 months ago
Everything that's in the blog post is basically well known already. Google publishes papers and gives talks about their TPUs. Many details are lacking, though, and require some assumptions/best guesses. JAX and XLA are (partially) open source and give clues about how TPUs work under the hood as well.

https://arxiv.org/abs/2304.01433

https://jax-ml.github.io/scaling-book/
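
For instance, you can dump the XLA program JAX stages out for a function and read it directly (rough sketch; the exact textual format depends on the JAX version):

    # Peek at the HLO/StableHLO that JAX lowers a simple matmul to.
    import jax
    import jax.numpy as jnp

    def f(a, b):
        return jnp.dot(a, b)

    a = jnp.ones((128, 128))
    b = jnp.ones((128, 128))

    print(jax.jit(f).lower(a, b).as_text())   # the lowered program XLA will compile for the backend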

musebox35 · 6 months ago
From the acknowledgment at the end, I guess the author has access to TPUs through https://sites.research.google/trc/about/

This is not the only way though. TPUs are available to companies operating on GCP as an alternative to GPUs with a different price/performance point. That is another way to get hands-on experience with TPUs.

erwincoumans · 6 months ago
A quick free way to access TPUs is through https://colab.research.google.com, Runtime / Change Runtime Type / v2-8 TPU
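
Once the TPU runtime is attached, a quick sanity check (JAX comes preinstalled on Colab) is something like:

    # Confirm the Colab TPU runtime is visible to JAX.
    import jax
    print(jax.devices())        # should list TPU cores
    print(jax.device_count())   # 8 on a v2-8 runtime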
antognini · 6 months ago
There's a pretty detailed description of TPUs in the latest edition of Hennessy and Patterson's Computer Architecture. (Patterson was also involved with the design of the early TPU architectures, if I recall right.)