Comments about the marketing-driven nm measurements aside, this still looks like another solid advance for TSMC. They are already significantly ahead of Samsung and Intel on transistor density: TSMC is at 197 MTr/mm² while Samsung is at 150 MTr/mm² and Intel is at 123 MTr/mm². This 1.6nm process will put them around 230 MTr/mm² by 2026. When viewed by this metric, Intel is really falling behind.
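(A quick scale check on those figures; the densities are the ones quoted above, and the 100 mm² die is an arbitrary illustration, not any real product.)

```python
# Rough scale check: transistors on a hypothetical 100 mm^2 die at the
# logic densities quoted above. Density figures are from the comment;
# the die area is an arbitrary illustration.
densities_mtr_per_mm2 = {
    "TSMC (current)": 197,
    "Samsung (current)": 150,
    "Intel (current)": 123,
    "TSMC A16 (projected, ~2026)": 230,
}

die_area_mm2 = 100  # illustrative die size

for name, density in densities_mtr_per_mm2.items():
    billions = density * die_area_mm2 / 1000  # MTr -> billions of transistors
    print(f"{name:30s} ~{billions:.1f} billion transistors")
```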
Intel has a 1.4nm process in the pipeline for ~2027. They just took delivery of their first high-NA EUV machine in order to start working on it.
Their gamble, however, is that they need to figure out DSA (directed self-assembly), a long-storied technology that uses self-assembling polymers to sharply pattern features smaller than the exposure light alone could resolve.
If they figure out DSA, they will likely be ahead of TSMC. If not, it will just be more very expensive languishing.
The nomenclature for microchip manufacturing left reality a couple of generations ago. Intel's 14A process is not a true 14A half-pitch. It's kind of like how they started naming CPUs off "performance equivalents" instead of using raw clock speed. And this isn't just Intel. TSMC, Samsung, everyone is doing half-pitch-equivalent naming nowadays.
Certain feature sizes have hit a point of diminishing returns, so they are finding new ways to increase performance. Each generation is better than the last but we have moved beyond simple shrinkage.
Comparing Intel's 14A label to TSMC's A16 is meaningless without performance benchmarks. They are both just marketing terms. Like the Intel/AMD CPU wars: you can't say one is better because the label says it's faster. There's so much other stuff to consider.
Do you know the name of the company that produces the EUV machines? Is it ASML?
It is my understanding that only ASML has cracked EUV lithography, but if there's another company out there, that would be an interesting development to watch.
Well, "extreme UV" these days. 13.5nm, larger than the "feature size". And even that required heroic effort in development of light sources and optics.
Stupid beginner question: is MTr/mm² really the right thing to be looking at? Shouldn't it be more like mm²/MTr ? This feels kind of like these weird "miles per gallon" units, when "gallons per mile" is much more useful...
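For what it's worth, the reciprocal view is easy to compute. A minimal sketch, reusing the density figures quoted upthread (the shape of the comparison, not the exact figures, is the point):

```python
# Same three densities, expressed both ways. The reciprocal view
# (area per transistor) is the "gallons per mile" analogue.
for name, density in [("TSMC", 197), ("Samsung", 150), ("Intel", 123)]:
    area_nm2 = 1e12 / (density * 1e6)  # 1 mm^2 = 1e12 nm^2
    print(f"{name:8s} {density:4d} MTr/mm^2  <->  {area_nm2:6.0f} nm^2/transistor")

# Output:
#   TSMC      197 MTr/mm^2  <->    5076 nm^2/transistor
#   Samsung   150 MTr/mm^2  <->    6667 nm^2/transistor
#   Intel     123 MTr/mm^2  <->    8130 nm^2/transistor
# Equal steps in MTr/mm^2 are not equal steps in silicon area saved,
# which is exactly the mpg-vs-gpm distortion.
```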
Not understanding chip design - but is it possible to get more computational bang with fewer transistors - are there some optimizations to be had? Better design that could compensate for bigger nodes?
Yes, generally one of the trends has been movement toward specialized coprocessors/accelerators. This was happening before the recent AI push and has picked up steam.
If you think of an SoC (the chip in your phone), more and more of the real estate is being dedicated to specialized compute (AI accelerators, GPUs, etc.) vs. general-purpose compute (the CPU).
At the enterprise scale, one of the big arguments NVIDIA has been making, beyond its value in the AI market, is the value of moving massive, resource-intensive workloads from CPUs to more specialized GPU acceleration. In return for the investment to move their workload, customers can get a massive increase in performance per watt/dollar.
There are some other factors at play in that example, and it may not always be true that the transistors/mm² is lower, but I think it illustrates the overall point.
The design optimization software for modern semiconductors is arguably the most advanced design software on earth, with likely tens if not hundreds of millions of man-years put into it. It takes into account not only the complex physics that apply at the nanoscale but also the interplay of the various manufacturing steps, and it optimizes trillions of features. Every process change brings about new potential optimizations, so rather than compensating for bigger nodes it actually widens the gap further. By analogy, the jump from hatchet to scalpel in the hands of a layman is far less than the jump from hatchet to scalpel for a skilled surgeon.
In general, yes, but all the chip design companies have already invested a whole lot of time and engineering resources into squeezing as much as possible from each transistor. But those kinds of optimizations are certainly part of why we sometimes see new CPU generations released on the same node.
Everyone generally does that before sending the design to the fab.
Not to say that improvements and doing more with less are impossible (they probably aren't), but it's going to require significant per-design human effort to do that.
Some yeah, but many of these optimizations aren't across-the-board performance improvements, but rather specializations that favor specific kinds of workloads.
There are the really obvious ones like on-board GPUs and AI accelerators, but even within the CPU you have optimizations that apply to specific kinds of workloads like specialized instructions for video encode/decode.
The main "issue", such as it is, is that this setup advantages vertically integrated players - the ones who can release software quickly to use these optimizations, or even going as far as to build specific new features on top of these optimizations.
For more open platforms you have a chicken-and-egg problem. Chip designers have little incentive to dedicate valuable and finite transistors to specialized computations if the software market in general hasn't shown an interest. Even after these optimizations/specialized hardware have been released, software makers often are slow in adopting them, resulting in consumers not seeing the benefit for a long time.
See for example the many years it took for Microsoft to even accelerate the rendering of Windows' core UI with the GPU.
BSPDN is not about putting devices on the front and back; the logic layer is still mostly 2D. It's about moving the power interconnect to the back of the chip so there's less interference with logic and larger wiring can be used.
I'd say they haven't been very competitive with AMD in performance per watt/dollar in the Ryzen era, specifically due to AMD's process advantage. (On CPU dies especially, with less of an advantage for AMD's I/O dies.) I'd agree they have done a good job advancing other aspects of their designs to close the gap, though.
The metric means what it has always meant: lower numbers mean higher transistor density on some area (the digital part) of the chip.
What more do you want? It’s as meaningful as a single number ever could be.
If you're a consumer - if you don't design chips - a lower number means there's some improvement somewhere. That's all you need to know. It's like horsepower on a car: the number doesn't tell you everything about performance under all conditions, but it gives a rough idea, comparatively speaking.
If you're a chip designer, then you never cared about the number they used in naming the process anyway. You would dig into the specification and design rules from the fab to understand the process. The "nm" number might show up in the design rules somewhere, but that's never been relevant to anyone, ever.
I really don’t understand why so many commenters feel the need to point out that “nm” doesn’t refer to a physical dimension. Who cares? It doesn’t have any impact on anyone. It’s very mildly interesting at best. It’s a near meaningless comment.
That's just patently wrong.
The measurement used to refer to feature size [0]. It used to be a physical measurement of a thing in nanometers, hence "nm".
Now it's a free-for-all: the marketing team names it whatever the fuck they want. That's why you get stuff like Intel 10nm being equivalent to TSMC's 7nm (the node AMD uses) [1]. It's not real.
>Who cares?
You... else we wouldn't be having this discussion.
[0] https://en.wikipedia.org/wiki/Semiconductor_device_fabricati...
[1] https://www.pcgamer.com/chipmaking-process-node-naming-lmc-p...
What is a rough rule of thumb for each 1nm reduction? What does that increase in transistor density look like each year? If the jump is bigger (5nm->3nm), does the rate of change increase too?
I'm trying to understand the economic impact of these announcements, as I don't understand this topic well enough.
Nope, that's what it meant a long time ago. Nowadays, the nm number represents the smallest possible element on the chip, typically the gate length, which is smaller than the size of a transistor.
This means that when different manufacturers use a different transistor design, their 'nm' process could be the same but their transistor density different.
Because it lost meaning somewhere between a micron and 100nm.
From roughly the 1960s through the end of the 1990s, the number meant printed gate lengths or half-pitch (which were identical).
At some point, companies started using "equivalences" which became increasingly detached from reality. If my 50nm node had better performance than your 30nm node because I have FinFETs or SOI or whatever, shouldn't I call mine 30nm? But at that point, if you have something less than 30nm somewhere in your process, shouldn't you call it 20nm? And so the numbers detached from reality.
So now when you see a "1.6nm" process, think "1.6nm class" rather than a number corresponding to any specific feature size, and furthermore, understand that companies invent their class-number exaggerations differently. For example, Intel 10nm roughly corresponds to Samsung/TSMC 7nm (and all of them would probably be around 15-30nm before equivalences).
That should give you enough for a web search if you want all the dirty details.
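Since the labels are class names rather than dimensions, there's no exact per-nm rule of thumb, but for rough intuition here is the classical ideal-scaling arithmetic. A sketch only: it's an upper bound on what a name change could ever imply, and real modern nodes fall short of it.

```python
# Classical ideal scaling: if the label tracked linear feature size,
# shrinking old -> new would multiply density by (old/new)^2.
# Modern labels don't track physical size, so treat this as a ceiling.
def ideal_density_gain(old_nm: float, new_nm: float) -> float:
    return (old_nm / new_nm) ** 2

for old, new in [(5, 3), (3, 2), (2, 1.6)]:
    print(f"{old}nm -> {new}nm: at most ~{ideal_density_gain(old, new):.2f}x density")
# 5nm -> 3nm: at most ~2.78x density
# 3nm -> 2nm: at most ~2.25x density
# 2nm -> 1.6nm: at most ~1.56x density
```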
It stopped being an actual measure of size long ago. The "nm" isn't nanometers of anything; it's just a vague marketing label attempting some sort of measure of generations.
process/node "size" has, for some time now, been divorced from any actual physical feature on or of the chip/transistor itself. See the second paragraph of: https://en.wikipedia.org/wiki/2_nm_process for additional details
Intel is the one trying to catch up to TSMC, not vice versa! [0]
[0] https://www.tomshardware.com/tech-industry/manufacturing/int...
The link you give doesn't have any details of Intel's 18A process, including no indication of it innovating in any way, as opposed to TSMC with their "backside power delivery" which is going to be critical for power-hungry SOTA AI chips.
While you are correct that it is Intel trying to catch TSMC, you are wrong about the origin of backside power delivery. The idea originated at Intel some time ago, but it would be very ironic if TSMC implements it before Intel...
OP never said Intel wasn't trying to catch up. As for backside power delivery, this is literally what Intel has been working on. It is called PowerVia: https://www.intel.com/content/www/us/en/newsroom/news/powerv...
Except for backside power. Intel published and had a timeline to ship at least one generation ahead of TSMC. I haven’t been tracking what happened since, but Intel was ahead on this one process improvement. And it’s not a small one, but it doesn’t cancel out their other missteps. Not by half.
This isn't a comparison of shipping processes though, it's just roadmap products. And in fact until this announcement Intel was "ahead" of TSMC on the 2026 roadmap.
The "Hollywood accounting" transistor density aside, I think a new metric needs to become mainstream: "wafers per machine per day" and "machines per year manufactured".
Getting these machines built and online is more important than what any one machine can do, given that maybe fewer than 6 of them are manufactured per year.
The information, I'm sure, is buried in their papers, but I want to know what node processes are in products available now.
ICs are manufactured on silicon discs called wafers. Discs have two sides, and traditionally, everything was done on top. We can now do power on the bottom. This makes things go faster and use less power:
* Power wires are big (and can be a bit crude). The bigger the better. Signal wires are small and precise. Smaller is generally better.
* Big wires, if near signal wires, can interfere with them working optimally (through an effect called "capacitance").
* Capacitance can slow down signals on the fine signal wires.
* Capacitance also increases power usage when ones become zeros and vice-versa on signal wires.
* Big wires also take up a lot of space.
* Putting them on the back of the wafer means that things can go faster and use less power, since you don't have big power wires near your fine signal wires.
* Putting them in back leaves a lot more space for things in front.
* Capacitance between power wires (which don't carry signal) actually helps deliver cleaner power too, which is a free bonus!
This is hard to do, since it means somehow routing power through the wafer. That's why we didn't do this before. You need very tiny wires through very tiny holes in locations very precisely aligned on both sides. Aligning things on the scale of nanometers is very, very hard.
How did I do?
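To put toy numbers on the capacitance bullets above, here's a parallel-plate sketch in which every dimension (wire heights, spacing, dielectric) is an illustrative assumption, not a real process value:

```python
# Toy parallel-plate model of the coupling described above. All
# geometry here is made up for illustration.
EPS_OXIDE = 3.9 * 8.854e-12  # F/m, SiO2-like dielectric

def coupling_cap_farads(length_um: float, facing_height_nm: float,
                        spacing_nm: float) -> float:
    """Sidewall-to-sidewall capacitance of two parallel wires."""
    facing_area_m2 = (length_um * 1e-6) * (facing_height_nm * 1e-9)
    return EPS_OXIDE * facing_area_m2 / (spacing_nm * 1e-9)

# A fine signal wire next to a same-size neighbor vs. next to a tall
# power rail (hypothetical 50 nm vs 300 nm facing heights, 30 nm gap):
c_next_to_signal = coupling_cap_farads(10, 50, 30)
c_next_to_power  = coupling_cap_farads(10, 300, 30)

print(f"next to signal wire: {c_next_to_signal * 1e15:.2f} fF")  # ~0.58 fF
print(f"next to power rail:  {c_next_to_power * 1e15:.2f} fF")   # ~3.45 fF
# RC delay and switching energy (~C*V^2) both scale with C, so moving
# the fat rail to the wafer backside removes that extra capacitance.
```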
There's still the question though of why they didn't do this decades ago - seems very obvious that this layout is better. What changed that made it possible only now and not earlier?
> You need very tiny wires through very tiny holes in locations very precisely aligned on both sides. Aligning things on the scale of nanometers is very, very hard.
Do you need to align that precisely? Can't the power side have very large landing pads for the wires from the signal side to make it much easier?
frontside: https://www.imec-int.com/_next/image?url=%2Fsites%2Fdefault%...
backside: https://www.imec-int.com/_next/image?url=%2Fsites%2Fdefault%...
Another graphic Samsung used: https://www.thelec.net/news/photo/202210/4238_4577_1152.jpg
From https://www.tomshardware.com/pc-components/cpus/samsung-to-i...
> Typically, backside power delivery enables thicker, lower-resistance wires, which can deliver more power to enable higher performance and save power. Samsung's paper noted a 9.2% reduction in wiring length, enhancing performance
Instead of signals and the power path going through the same side (the frontside), causing all sorts of issues and inefficiency, they're decoupling where power comes from (the other side, i.e. the backside).
More importantly, Intel saw it as one of two key technologies for moving into the angstrom era, and was touting that it would be the first to bring it to life (not sure they did), so this seems to be more of a business power move.
More on all of it from AnandTech: https://www.anandtech.com/show/18894/intel-details-powervia-...
It looks like the topology for backside power moves the transistors to the middle so "signal wires and power wires are decoupled and optimized separately" instead of "compet[ing] for the same resources at every metal layer": https://spectrum.ieee.org/backside-power-delivery
So, if I understood that correctly, "frontside" actually is the top side, and "backside" is actually the bottom side. They're delivering power from the bottom (substrate) side, and connecting the data on the top side.
Half your wires deliver power, half deliver signal. So if you do both on the same side, you need twice the density of wires. If you split the delivery into two parts, you get double the density without needing to make things smaller.
This isn't quite right. Big wires (ideally entire planes) deliver power. Small wires deliver signal. Half and half isn't the right split, and you don't want to make power wires smaller.
The very different requirements of the two is where a lot of the gains come in.
I am not sure I understand backside in this instance, and the illustration in the article didn't entirely help.
In general, at least in older times, one side of the CPU has all the nice pins on it, and the motherboard has a pincushion that the pins match nicely. At the top of the CPU you put a HUGE heatsink on it and off you go.
In this configuration the power delivery must be via the pincushion, through some of the pins. Intuitively that sounds to me like the power is coming in the backside? But given that it is a big deal, I am missing something.
Is the power fed from the "top" of the CPU where the heatsink would sit?
A16 in 2027 vs Intel's 18A in full swing by 2026 feels like a miss on TSMC's part. This looks like an open door for fabless companies to try Intel's foundry service.
This is the industry roadmap from 2022: https://irds.ieee.org/images/files/pdf/2022/2022IRDS_Litho.p... If you look at page 6 there is a nice table that kind of explains it.
And, to think, it's all done with light!
We live in interesting times!
230 MTr/mm² translates to a 33nm "half-pitch".
Of course, transistors aren't square and aren't so densely packed, but these numbers are more real IMO.
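For anyone who wants to reproduce that arithmetic, a minimal sketch using the same square-tiling simplification the comment acknowledges:

```python
import math

# Reproduce the 33nm figure: assume 230 million transistors tile
# 1 mm^2 as identical squares.
density = 230e6            # transistors per mm^2
area_nm2 = 1e12 / density  # 1 mm^2 = 1e12 nm^2  ->  ~4348 nm^2 each
pitch_nm = math.sqrt(area_nm2)
print(f"pitch ~{pitch_nm:.0f} nm, half-pitch ~{pitch_nm / 2:.0f} nm")
# -> pitch ~66 nm, half-pitch ~33 nm
```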
Gallons per mile only makes sense when you are talking about dragsters.
Would it be that x2 (for front & back)?
E.g., 230 on the front side and another 230 on the back side = 460 MTr/mm² total.
Gotta love how we now have fractions of a near meaningless metric.
As a result, transistors are sandwiched between two wiring stacks, which lowers noise but makes heat removal a bit harder AFAICS.
See the image at: https://www.custompc.com/wp-content/sites/custompc/2023/06/I...
Are they sparse, like wires? Or solid, like the ground plane of a PCB? Are there "burried vias"?