> Slashing orders to a contract manufacturer is not trivial since fabless chip designers are obliged to procure a fixed number of wafers in certain quarters. Nevertheless, TSMC is reportedly willing to accept compensation (as it will hold wafers with chips from AMD, Intel, Nvidia, etc., before they are ready to buy them) and even renegotiate deals on long-term supply contracts (i.e., increase the number of wafers that a company is committed to buying in the future) in exchange.
Sounds like it might not be such a big deal for TSMC, at least in the short term.
They are forecasting lower demand in the near term. I'd guess they ran the numbers and producing fewer high-cost items is more cost-effective for them.
Unfortunately, this likely means more shortages and prices staying high for the consumer.
The fact that the market can still do this (reduce supply to operate in the high margin quadrant) is a temporary blip due to the x86 duopoly. Soon anyone will be able to supply cycles/$ at a better rate than the entrenched players.
This is all pretty accurate. The only real surprise is that 40nm tech and above is still running full steam, and even the 20+ year old tech has only dropped a few percent for next year.
This is entirely unsurprising if you actually look at things other than big CPU/GPU/IO.
Power regulators don't need small transistors; they want nice big fat transistors. You'd get at least one of them in every device and a bunch in any bigger device. Same with any power device, LEDs, and really anything that passes some current.
Any microcontroller below the "runs full-fledged Linux" class often cares about cost of production first and foremost; hell, the STM32 only recently [1] got onto a 90nm node!
Interestingly enough, a bigger node might get you worse uA/MHz but often lower idle current (the bigger the transistors, the lower the leakage), so it might even be desirable. And the chip might need to source a few mA on each pin (so it doesn't require external drivers to control stuff), which again means a certain transistor size in your I/O ports is fixed and can't be made smaller, so your savings from going to a smaller process are not linear.
For the vast majority of non-CPU/GPU chips it comes down to cost of manufacture, and those 20+ year old lines have not only paid for themselves multiple times, they also have great yields.
* [1] https://blog.st.com/stm32g0-mainstream-90-nm-mcu/
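To make that idle-vs-active tradeoff a bit more concrete, here is a rough back-of-the-envelope sketch; every figure in it is a made-up placeholder for two hypothetical nodes, not a real datasheet value:

    # Toy comparison of average current for a duty-cycled MCU on two hypothetical nodes.
    # All numbers below are invented placeholders, not datasheet figures.

    def average_current_ua(active_ua_per_mhz, mhz, idle_ua, duty_cycle):
        """Time-weighted average current for firmware that mostly sleeps."""
        active_ua = active_ua_per_mhz * mhz
        return duty_cycle * active_ua + (1.0 - duty_cycle) * idle_ua

    nodes = {
        "older/bigger node":  {"active_ua_per_mhz": 200.0, "idle_ua": 0.3},  # worse uA/MHz, tiny leakage
        "newer/smaller node": {"active_ua_per_mhz": 100.0, "idle_ua": 2.5},  # better uA/MHz, more leakage
    }

    # A chip running at 16 MHz: compare a busy workload with a mostly-asleep sensor node.
    for duty in (0.05, 0.0005):
        for name, n in nodes.items():
            avg = average_current_ua(n["active_ua_per_mhz"], 16, n["idle_ua"], duty)
            print(f"duty {duty:.2%}  {name}: {avg:.1f} uA average")

With these made-up numbers the newer node wins while the chip is busy, but the older node wins once the device spends almost all of its time asleep, which is exactly the leakage tradeoff described above.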
> Power regulators don't need small transistors; they want nice big fat transistors.
Is it possible to make bigger transistors using a smaller process? Just because you are using, say, a 7nm process, can't you still make a transistor (if you needed one) that is the same size as it was on 90nm? I think of newer processes as merely being more precise, so you could still do the old designs on them; you are just unnecessarily tying up the newer machines... but conceivably, in the future, you could just start using a 5nm process to keep producing 30nm and 90nm chips, just to consolidate production lines, non? I know nothing about this, so I'm just asking.
Just to add color to the discussion above, what distinguishes a microcontroller from other chips is that the volatile memory is on the same piece of silicon as the processing cores.
This allows for power saving because you don’t have to condition signals as they jump from silicon piece to silicon piece (oversimplification).
It is somewhat harder to make the above on smaller nodes than it is to make just processing cores or just memory.
That fact (in addition to a number of other cost and technical factors) is why microcontrollers lag behind on process nodes.
3D NAND, which basically underpins all of our SSD tech, is hard enough to make at 40nm. They essentially have to fill a tiny hole really precisely with layers of five different materials to create a vertical stack that works. It's really hard to go down in node size, and on top of that, once you hit 28nm the price per transistor stops decreasing and starts increasing at 22nm and smaller.
This idea that the price per transistor gets higher after 28nm needs to die. Right now the cheapest node for logic dominated chips is N6 (an optical shrink of N7). And even then, N5 is still cheaper than N10.
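For anyone who wants to sanity-check that kind of claim, the underlying arithmetic is just wafer price divided by good transistors per wafer. A quick sketch, with every price, density, and yield number being a purely hypothetical placeholder rather than actual TSMC figures:

    # Back-of-the-envelope cost per transistor: wafer price / usable transistors.
    # Every number below is a hypothetical placeholder, NOT real pricing or density data.

    WAFER_AREA_MM2 = 70_000  # roughly the usable area of a 300 mm wafer

    nodes = [
        # (name, wafer price in $, logic density in Mtransistors/mm^2, yield fraction)
        ("mature 28nm", 3_000,  15, 0.95),
        ("N7",          9_000,  90, 0.90),
        ("N5",         16_000, 170, 0.85),
    ]

    for name, wafer_price, mtr_per_mm2, yield_frac in nodes:
        good_transistors = WAFER_AREA_MM2 * mtr_per_mm2 * 1e6 * yield_frac
        usd_per_billion = wafer_price / (good_transistors / 1e9)
        print(f"{name}: ~${usd_per_billion:.2f} per billion transistors")

Whether cost per transistor keeps falling depends entirely on how the wafer price scales relative to the density gain, which is why the answer flips around depending on whose wafer-price and yield numbers you plug in.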
None of that is really relevant, because TSMC isn't in the business of making flash memory. The fab processes for logic and for flash memory are so different that there's no point to making comparisons between their respective nodes.
> E.g. what is preventing a RISC-V company from leap-frogging the competition?
They would need to have already taped out on N7 and be ready to start mass production now. That doesn't happen overnight. Even if you have taped out a ready-to-fab design, you usually want to start small and run a few rounds of QA on small batches of chips before you start ordering enough wafers to make a dent in TSMC's schedule.
N7 is a rough node to target. You have the high capital costs for NRE since it's an EUV node, but N5 is supposed to be cheaper to manufacture chips on pretty much as soon as N3 is in full swing.
So those companies would probably be targeting something in the range of 14nm to 10nm in order to take advantage of the fire sale on late DUV tooling to prove that they can make a working chip to investors, and then leapfrog to N5 after the giants have jumped to N3.
Is this because of the economic slowdown as the article suggests, or simply because the 2020-2021 covid lockdowns led to a big surge in buying of computer hardware, and therefore a big drop now, given that modern computers last a lot longer and everyone is equipped?
> 2020-2021 covid lockdowns led to a big surge in buying of computer hardware, and therefore a big drop now given that modern computers last a lot longer and everyone is equipped
The problem is that the higher-end GPUs are actually the same chip as the lower-end GPUs. The lower-end GPUs are the ones where the manufacturing process had an error, so a core gets fused off. All that to say that they can't manufacture a high-end GPU without also manufacturing a low-end one (or multiple, depending on the manufacturing error rate). So, if they can't sell the low-end one, it may not make sense to sell the high-end one even if there is a buyer.
>The problem is that the higher-end GPUs are actually the same chip as the lower-end GPUs.
Don't know where you got that info but that's definitely false.
If you look at Nvidia's lineup, each GPU has a die that's unique to that product in both size and markings. Sure, there have been/are a few products that use the same die with defects fused off depending on yields, but those are usually in the same performance class (the 1070, 1070 Ti, and 1080 had the same die), not the case you're talking about of entry-level GPUs being high-end GPU dies with defects fused off. That's never the case. An RTX 3050 die is a completely different part from a 3080/3090 die.
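For what it's worth, the yield/binning math behind the "same die, units fused off" products is easy to sketch. A toy Poisson defect model, with the defect density, die size, and salvage rules all invented for illustration:

    import math

    # Toy binning model: one large die design; dies with a couple of defects get the
    # affected blocks fused off and sell as a cut-down SKU. All numbers are invented.

    DEFECT_DENSITY_PER_CM2 = 0.1   # placeholder defect density
    DIE_AREA_CM2 = 6.0             # a big GPU-class die, ~600 mm^2
    DIES_PER_WAFER = 90            # placeholder

    lam = DEFECT_DENSITY_PER_CM2 * DIE_AREA_CM2   # expected defects per die

    def p_defects(k):
        """Poisson probability of exactly k defects on one die."""
        return math.exp(-lam) * lam**k / math.factorial(k)

    p_full = p_defects(0)                # sellable as the full SKU
    p_cut = p_defects(1) + p_defects(2)  # salvageable as the cut-down SKU
    p_scrap = 1.0 - p_full - p_cut       # too damaged to sell

    print(f"per wafer: ~{DIES_PER_WAFER * p_full:.0f} full dies, "
          f"~{DIES_PER_WAFER * p_cut:.0f} cut-down dies, "
          f"~{DIES_PER_WAFER * p_scrap:.0f} scrapped")

The point is only that when two SKUs really do share a die (as with the 1070/1070 Ti/1080), every wafer yields a mix of both bins, so the supply of the two is coupled.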
Maybe this isn't strictly about GPUs. Or maybe it's just FUD designed to signal that no, video cards aren't getting cheaper and they'd rather cut supply than lower their margins. Who knows with this industry.
If I were Intel, I would see this as an opportunity to break into the GPU market by releasing their GPUs at very competitive prices and ensuring enough supply.
It’s not just about the 4080 (though you are right). AMD’s new cards are much better value for gamers, but they are in this list too.
I think this is less about consumer GPUs and more about cost cutting in big corps slowing expansions, and the reduction in crypto demand.
Think about every tech company that says “oops, we overbuilt for covid, welp, let’s lay everyone off”: they could be buying fewer servers too if they over-anticipated their needs in that department.
TPUs have been all over the shelf, and adoption was snubbed. More graphics cards than ever before! Real-time raytracing available on your cell phone, laptop, and all 5 smart TVs in your home and backyard porch!!
You were unable to convince your executives to adopt huggingface and IoT, and instead they opted for phone calls and Excel spreadsheets.
You, consumer, failed to accept Stadia into your heart,... and now you shall pay for your snide indifference!!
Does a 4xxx card make sense for any pedestrian gamer? Clocking in at >$1200, it is a hard sell to say that the extra FPS is worth it. Unless the price comes down massively, the high sticker price (plus the insane power draw) really leaves me wondering who is purchasing these things for anything but machine learning.
It's worse: TSMC is the single biggest semiconductor manufacturer for general logic chips. Samsung has its own fabs for RAM and SSDs, specialised for density. All the lithography equipment they use comes from a single company called ASML, which has had complete market domination for the past few generations of lithography machines and is the first and only producer of EUV lithography tools.
Semiconductors are so complex and capital-intensive that even a duopoly couldn't balance the expense.
STM32G0 is value-oriented.
The latest-and-greatest power efficiency STM32 is the STM32U5: https://www.st.com/content/ccc/resource/sales_and_marketing/...
Which seems to be 40nm.
I'm not 100% sure if the STM32U5 is the smallest node they're using. But 40nm (or better) seems like a safe bet these days.
But they are not optimized for low idle current; they're for motor control and LED driving.
https://www.tomshardware.com/news/tsmcs-wafer-prices-reveale...
Will that capacity be utilized by other companies? E.g. what is preventing a RISC-V company from leap-frogging the competition?
In other words, the slowdown?
It doesn't make much sense for Nvidia to increase production of the 4xxx series when there's so much 3xxx-series stock sitting around in warehouse inventory.