This is interesting. Groq (chip co, not twitter’s ‘Grok’ LLM) has a similar silicon scale, I’m not sure about architecture, though. One very interesting thing about Groq that I failed to appreciate when they were originally raising is that the architecture is deterministic.
Why is determinism good for inference? If you are clever, you can run computations distributed without waiting for sync. I can’t tell from their marketing materials, but it’s also possible they went for the gold ring and built something latch-free on the silicon side.
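To make that concrete, here's a toy Python sketch (not Groq's actual scheduler, just an illustration with made-up op names and latencies): if every operation takes a known, fixed number of cycles, a compiler can pre-compute exactly which cycle each result lands on, so the consumer reads at that cycle instead of waiting on a runtime handshake or barrier.

    # Toy static scheduler: deterministic, fixed latencies mean the "wait"
    # is resolved at compile time, not with runtime synchronization.
    FIXED_LATENCY = {"recv": 2, "matmul": 4, "add": 1}  # hypothetical cycle counts

    def schedule(ops):
        """ops: list of (name, kind, deps), topologically sorted.
        Returns {name: (start_cycle, done_cycle)}."""
        timeline = {}
        for name, kind, deps in ops:
            # Start as soon as the last producer is *scheduled* to finish;
            # no polling is needed at runtime because latencies never vary.
            start = max((timeline[d][1] for d in deps), default=0)
            timeline[name] = (start, start + FIXED_LATENCY[kind])
        return timeline

    program = [
        ("weights", "recv",   []),            # data arriving from another chip
        ("acts",    "recv",   []),
        ("mm",      "matmul", ["weights", "acts"]),
        ("out",     "add",    ["mm"]),
    ]

    for op, (start, done) in schedule(program).items():
        print(f"{op}: cycles {start}-{done}")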
Groq seems to have been able to use their architecture to deliver some insanely high token/s numbers; groqchat is by far the fastest inference API I’ve seen.
All this to say that I’m curious what a Dojo architecture designed around training could do. Presuming training was a key use case in the arch design. Knowing the long game thinking at Tesla, I imagine it was.
TSMC got so far out in front of everyone that their competitors had to get creative and solve other issues.
Why is this on 7nm? Because I don't think you could do this on 3nm. It's my understanding that everything down at that scale is double patterned (multiple exposures) to get the right-sized features, and with that comes a higher defect rate.
Look at what Intel is doing, holding out for single-exposure processes. Their push for double-sided chips (power delivery on one side, data on the other) would be much harder with 3nm double patterning (I can't see flipping the die being a good way to keep alignment reliable across four exposures).
I suspect we're getting to the end of size (shrink) scaling and we're going to get into process and design scaling. It will be interesting to see what happens to cost and capacity if we're at that point. Process flexibility would be the new king!
Not necessarily. It's not like an image sensor, where you throw it out if there's a single dead pixel. To increase yield on CPUs, designers build in parallel redundancy and deactivate bad cores or memory chunks, selling the part as a lower-tier model. These AI chips are far more parallel than a CPU. They had decades of statistics on wafer flaw distributions before they began the design process, so they would have designed in just enough redundancy to hit the desired yield for the process. I wouldn't be surprised if each processor has hundreds of things disabled (ALUs, memory units, whatever they're using) and thousands to spare.
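As a rough back-of-envelope (made-up numbers, simple Poisson defect model), here's why a little spare capacity changes the picture so much:

    from math import comb, exp

    D0 = 0.1          # assumed defects per cm^2
    UNIT_AREA = 0.02  # assumed cm^2 per compute unit

    # Poisson model: probability a unit of this area has zero defects
    p_good = exp(-D0 * UNIT_AREA)

    def block_yield(n_units, n_required, p):
        # Probability that at least n_required of n_units are defect-free
        return sum(comb(n_units, k) * p**k * (1 - p)**(n_units - k)
                   for k in range(n_required, n_units + 1))

    print(f"single-unit yield:          {p_good:.4f}")
    print(f"100 units, all must work:   {p_good**100:.4f}")
    print(f"110 units, any 100 usable:  {block_yield(110, 100, p_good):.6f}")

With those invented numbers, requiring all 100 units to be perfect yields around 82%, while carrying ten spares pushes the block yield to essentially 100%.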
Traditionally, that has been the issue with wafer-scale processors. Cerebras is supposedly selling these things to production customers at obscene prices, probably to offset the low yields. Who knows whether it's actually profitable, though, or still burning cash.
We don't know if this wafer is a good investment or not. It really depends on how its performance and cost compare to other solutions.
My guess is that if they did this, they expected it to be a huge improvement over buying off-the-shelf hardware. The reason I say this is that it's expensive to design a wafer-scale computer and get it manufactured.
The chip operates without a traditional motherboard; it's basically permanently and directly attached to power delivery and a massive heatsink. It's a very ambitious design, and I never thought we'd see it so soon. The technical overview for this reads like porn for HW engineers.
The actual processors sound like they’re being made on other wafers in a standard way and then cut out and attached to the interconnect wafer.
So it’s not like the entire part was built only on one wafer and a single mistake would cause the whole thing to have to be thrown out.
At least that's how I understood the article, plus some educated guessing.
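If that's right, the yield math is a lot friendlier than for a truly monolithic wafer. A hedged sketch with the same toy Poisson assumptions as above (all numbers invented, and ignoring bonding and interconnect-wafer losses):

    from math import exp

    D0 = 0.1        # assumed defects per cm^2
    DIE_AREA = 6.0  # assumed cm^2 per compute die
    N_DIES = 25     # assumed dies mounted per interconnect wafer

    die_yield = exp(-D0 * DIE_AREA)   # chance a single die is defect-free

    # Monolithic wafer-scale: every compute site must come out clean at once.
    monolithic_yield = die_yield ** N_DIES

    # Chip-on-wafer: dies are tested first and bad ones are never mounted,
    # so a defective die costs one die, not the whole assembled wafer.
    print(f"per-die yield:          {die_yield:.3f}")
    print(f"monolithic wafer yield: {monolithic_yield:.2e}")

With those invented numbers, each die yields around 55%, but demanding 25 clean sites on a single monolithic wafer would yield roughly 3 in 10 million, which is why screening dies before attaching them to the interconnect wafer matters so much.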
This feels a little like a “you were so preoccupied with whether or not you could” thing.
Wow.
I can’t imagine how much one of those wafers must cost. I’d love to know.
Hope it accomplishes what they want because they’ve certainly had to spend a fortune to get to this point.
Because it doesn't seem to make much sense.