futureshock · 2 years ago
Comments about the marketing-driven nm measurements aside, this still looks like another solid advance for TSMC. They are already significantly ahead of Samsung and Intel on transistor density: TSMC is at 197 MTr/mm2, while Samsung is at 150 MTr/mm2 and Intel is at 123 MTr/mm2. This 1.6nm process will put them around 230 MTr/mm2 by 2026. When viewed by this metric, Intel is really falling behind.
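A quick back-of-envelope in Python, using just the figures quoted above, shows the relative gap (a sketch, nothing more):

    densities = {"TSMC": 197, "Samsung": 150, "Intel": 123}  # MTr/mm^2

    for fab, mtr in densities.items():
        print(f"{fab}: {mtr / densities['Intel']:.2f}x Intel's density")
    # TSMC: 1.60x, Samsung: 1.22x, Intel: 1.00x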
Workaccount2 · 2 years ago
Intel has a 1.4nm process in the pipeline for ~2027. They just took delivery of their first high-NA EUV machine in order to start working on it.

Their gamble, however, is that they need to figure out DSA, a long-storied technology that uses self-assembling polymers to sharply define features smaller than the light exposure alone could print.

If they figure out DSA, they will likely be ahead of TSMC. If not, it will just be more very expensive languishing.

Sysreq2 · 2 years ago
The nomenclature for microchip manufacturing left reality a couple of generations ago. Intel’s 14A process is not a true 14A half-pitch. It’s kind of like how they started naming CPUs off “performance equivalents” instead of using raw clock speed. And this isn’t just Intel. TSMC, Samsung, everyone is doing half-pitch-equivalent naming nowadays.

This is the industry roadmap from 2022: https://irds.ieee.org/images/files/pdf/2022/2022IRDS_Litho.p... If you look at page 6 there is a nice table that kind of explains it.

Certain feature sizes have hit a point of diminishing returns, so they are finding new ways to increase performance. Each generation is better than the last but we have moved beyond simple shrinkage.

Comparing Intel’s 14A label to TSMC’s A16 is meaningless without performance benchmarks. They are both just marketing terms. Like the Intel/AMD CPU wars: you can’t say one is better because the label says it’s faster. There’s so much other stuff to consider.

saganus · 2 years ago
Do you know the name of the company that produces the EUV machine? Is it ASML?

It is my understanding that only ASML has cracked EUV lithography, but if there's another company out there, that would be an interesting development to watch.

hackernudes · 2 years ago
DSA = Directed Self-Assembly
dev1ycan · 2 years ago
Intel says a lot of things but until they put them out in the field I don't believe a word they say
kumarvvr · 2 years ago
It's so hard to even fathom 200+ million transistors in 1 square millimeter!

And, to think, it's all done with light !

We live in interesting times !

pjc50 · 2 years ago
Well, "extreme UV" these days. 13.5nm, larger than the "feature size". And even that required heroic effort in development of light sources and optics.
seastarer · 2 years ago
Translating 197 M/mm2 into the dimensions of a square per transistor, we get a side of 71nm. If we compute the "half-pitch" from that, it's 35.5nm.

230 M/mm2 translates to 33nm "half-pitch".

Of course, transistors aren't square and aren't so densely packed, but these numbers are more real IMO.
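For anyone who wants to reproduce that arithmetic, here it is as a small Python sketch (it treats every transistor as an idealized square which, as noted above, they aren't):

    import math

    def implied_half_pitch_nm(mtr_per_mm2):
        """Half the side of the square each transistor would occupy."""
        area_nm2 = 1e12 / (mtr_per_mm2 * 1e6)  # 1 mm^2 = 1e12 nm^2
        return math.sqrt(area_nm2) / 2

    print(implied_half_pitch_nm(197))  # ~35.6 nm (TSMC today)
    print(implied_half_pitch_nm(230))  # ~33.0 nm (projected ~1.6nm node)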

black_puppydog · 2 years ago
Stupid beginner question: is MTr/mm² really the right thing to be looking at? Shouldn't it be more like mm²/MTr ? This feels kind of like these weird "miles per gallon" units, when "gallons per mile" is much more useful...
simpsond · 2 years ago
200 million transistors per square millimeter.

Gallons per mile only makes sense when you are talking about dragsters.

bee_rider · 2 years ago
What’s the argument for either? They are, of course, equivalent… might as well pick the one where bigger=better.
ReptileMan · 2 years ago
I don't understand chip design, but is it possible to get more computational bang with fewer transistors? Are there some optimizations to be had - better designs that could compensate for bigger nodes?
7thpower · 2 years ago
Yes, generally one of the trends has been movement toward specialized coprocessors/accelerators. This was happening before the recent AI push and has picked up steam.

If you think of an SoC, the chip in your phone, more and more of the real estate is being dedicated to specialized compute (AI accelerators, GPUs, etc.) vs. general-purpose compute (CPU).

At the enterprise scale, one of the big arguments NVIDIA has been making, beyond their value in the AI market, has been the value of moving massive, resource intense workloads from CPU to more specialized GPU acceleration. In return for the investment to move their workload, customers can get a massive increase in performance per watt/dollar.

There are some other factors at play in that example, and it may not always be true that the transistors/mm^2 are lower, but I think it illustrates the overall point.

jjk166 · 2 years ago
The design optimization software for modern semiconductors is arguably the most advanced design software on earth, with likely tens if not hundreds of millions of man-years put into it. It takes into account not only the complex physics that apply at the nano-scale but also the interplay of the various manufacturing steps, and it optimizes trillions of features. Every process change brings about new potential optimizations, so rather than compensating for bigger nodes it actually widens the gap further. By analogy, the jump from hatchet to scalpel in the hands of a layman is far less than the jump from hatchet to scalpel for a skilled surgeon.
mort96 · 2 years ago
In general, yes, but all the chip design companies have already invested a whole lot of time and engineering resources into squeezing as much as possible from each transistor. But those kinds of optimizations are certainly part of why we sometimes see new CPU generations released on the same node.
pjc50 · 2 years ago
Everyone generally does that before sending the design to the fab.

Not to say that improvements and doing more with less are impossible - they probably aren't - but it's going to require significant per-design human effort to do that.

pixelpoet · 2 years ago
Such optimisation would apply equally to the more dense processes, though.
potatolicious · 2 years ago
Some yeah, but many of these optimizations aren't across-the-board performance improvements, but rather specializations that favor specific kinds of workloads.

There are the really obvious ones like on-board GPUs and AI accelerators, but even within the CPU you have optimizations that apply to specific kinds of workloads like specialized instructions for video encode/decode.

The main "issue", such as it is, is that this setup advantages vertically integrated players - the ones who can release software quickly to use these optimizations, or even going as far as to build specific new features on top of these optimizations.

For more open platforms you have a chicken-and-egg problem. Chip designers have little incentive to dedicate valuable and finite transistors to specialized computations if the software market in general hasn't shown an interest. Even after these optimizations/specialized hardware have been released, software makers often are slow in adopting them, resulting in consumers not seeing the benefit for a long time.

See for example the many years it took for Microsoft to even accelerate the rendering of Windows' core UI with the GPU.

alberth · 2 years ago
> This 1.6nm process will put them around 230 MTr/mm2

Would it be that x2 (for front & back)?

E.g., 230 on front side and another 230 on back side = 460 MTr/mm2 TOTAL

zeusk · 2 years ago
BSPDN is not about putting devices on the front and back; the logic layer is still mostly 2D. It's about moving the power interconnects to the back of the chip, so there's less interference with logic and larger wiring can be used.
rowanG077 · 2 years ago
Is Intel really just the best chip designer on earth? How else can they compete with AMD at such a density disadvantage?
smolder · 2 years ago
I'd say they haven't been very competitive with AMD in performance per watt/dollar in the Ryzen era, specifically due to AMD's process advantage. (On CPU dies especially, with less of an advantage for AMD's I/O dies.) I'd agree they have done a good job advancing other aspects of their designs to close the gap, though.
Havoc · 2 years ago
>1.6nm

Gotta love how we now have fractions of a near meaningless metric.

audunw · 2 years ago
The metric means what it has always meant: lower numbers mean higher transistor density on some area (the digital part) of the chip.

What more do you want? It’s as meaningful as a single number ever could be.

If you’re a consumer - if you don’t design chips - a lower number means there’s some improvement somewhere. That’s all you need to know. It’s like horsepower on a car: the number doesn’t tell you everything about performance under all conditions, but it gives a rough comparative idea.

If you’re a chip designer, then you never cared about the number they used in naming the process anyway. You would dig into the specification and design rules from the fab to understand the process. The “nm” number might show up in the design rules somewhere, but that’s never been relevant to anyone, ever.

I really don’t understand why so many commenters feel the need to point out that “nm” doesn’t refer to a physical dimension. Who cares? It doesn’t have any impact on anyone. It’s very mildly interesting at best. It’s a near meaningless comment.

Havoc · 2 years ago
>The metric means what it has always meant

That's just patently wrong.

The measurement used to refer to feature size [0]. It used to be a physical measurement of a thing in nanometers, hence "nm".

Now it's a free-for-all; the marketing team names it whatever the fuck they want. That's why you get stuff like Intel 10nm being equivalent to AMD 7nm [1]. It's not real.

>Who cares?

You...else we wouldn't be having this discussion

[0] https://en.wikipedia.org/wiki/Semiconductor_device_fabricati...

[1] https://www.pcgamer.com/chipmaking-process-node-naming-lmc-p...

spxneo · 2 years ago
What is a rough rule of thumb for each 1nm reduction? What does that increase in transistor density look like each year? If the jump is bigger (5nm -> 3nm), does the rate of change increase too?

I'm trying to understand the economic impact of these announcements, as I don't understand this topic well enough.

blackoil · 2 years ago
The number represents transistor density: 2nm has ~twice the density of 4nm. If you ignore nm as a unit of distance, it makes sense.
misja111 · 2 years ago
Nope, that's what it meant a long time ago. Nowadays, the nm number represents the smallest possible element on the chip, typically the gate length, which is smaller than the size of a transistor.

This means that when different manufacturers use a different transistor design, their 'nm' process could be the same but their transistor density different.

_heimdall · 2 years ago
They're using the wrong units if we need to consider the nanometer as something other than a measure of distance.
2OEH8eoCRo0 · 2 years ago
Wouldn't 2nm have 4x the density of 4nm?
phonon · 2 years ago
Dimensions go by 1/sqrt(2) to double density... so about 0.7x per generation.
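A minimal sketch of that scaling rule in Python (pure geometry; it assumes node names track real dimensions, which they no longer do):

    import math

    shrink = 1 / math.sqrt(2)        # ~0.707x linear per generation
    density_gain = 1 / shrink ** 2   # exactly 2.0: density ~ 1/length^2

    # If names tracked real dimensions, "4nm" -> "2nm" (halving the
    # number) would mean 4x the density; one generation only doubles it.
    print(round(shrink, 3), density_gain)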
__alexs · 2 years ago
TSMC N5 is indeed about 30% more transistors/mm2 than N7, as you might expect, so it seems reasonably meaningful?
thsksbd · 2 years ago
1.6 nm is 16 Å. If they continue the BS for much longer, the "feature size" will be smaller than the lattice constant of Si.
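For scale, silicon's lattice constant is roughly 0.543 nm, so a one-liner shows how few unit cells "1.6nm" would span if it were a real dimension:

    SI_LATTICE_NM = 0.543            # silicon lattice constant, in nm

    print(1.6 / SI_LATTICE_NM)       # ~2.9 -- barely three unit cells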
s_dev · 2 years ago
Why is it meaningless?
blagie · 2 years ago
Because it lost meaning somewhere between a micron and 100nm.

From roughly the 1960s through the end of the 1990s, the number meant printed gate lengths or half-pitch (which were identical).

At some point, companies started using "equivalences" which became increasingly detached from reality. If my 50nm node had better performance than your 30nm node because I have FinFETs or SOI or whatever, shouldn't I call mine 30nm? But at that point, if you have something less than 30nm somewhere in your process, shouldn't you call it 20nm? And so the numbers detached from reality.

So now when you see a 1.6nm process, think "1.6nm class" rather than any specific feature size, and furthermore, understand that companies invent class-number exaggerations differently. For example, Intel 10nm roughly corresponds to Samsung / TSMC 7nm (and all would probably be around 15-30nm before equivalences).

That should give you enough for a web search if you want all the dirty details.

jl6 · 2 years ago
It’s not meaningless, it’s near meaningless. That’s what the nm stands for, right?
chmod775 · 2 years ago
Because it's not measuring anything, except the tech generation. It conveys about as much information as "iPhone 14".
Havoc · 2 years ago
It stopped being an actual measure of size long ago. The nm isn't nanometers of anything; it's just a vague marketing thing attempting some sort of measure of generations.
andrewSC · 2 years ago
process/node "size" has, for some time now, been divorced from any actual physical feature on or of the chip/transistor itself. See the second paragraph of: https://en.wikipedia.org/wiki/2_nm_process for additional details
helsinkiandrew · 2 years ago
Wonder how long until it's back to round numbers? - 800pm
mazurnification · 2 years ago
It will be 18A first.
highwaylights · 2 years ago
I hear it also has many megahertz.

654wak654 · 2 years ago
Sounds like a response to Intel's 18A process [0], which is also coming in 2026.

[0] https://www.tomshardware.com/tech-industry/manufacturing/int...

HarHarVeryFunny · 2 years ago
Intel is the one trying to catch up to TSMC, not vice versa!

The link you gave doesn't have any details of Intel's 18A process, and no indication of it innovating in any way, as opposed to TSMC with their "backside power delivery", which is going to be critical for power-hungry SOTA AI chips.

shookness · 2 years ago
While you are correct that it is Intel trying to catch TSMC, you are wrong about the origin of backside power delivery. The idea originated at Intel some time ago, but it would be very ironic if TSMC implements it before Intel...
s3p · 2 years ago
OP never said Intel wasn't trying to catch up. As for backside power delivery, this is literally what Intel has been working on; it's called PowerVia.

https://www.intel.com/content/www/us/en/newsroom/news/powerv...

hinkley · 2 years ago
Except for backside power. Intel published and had a timeline to ship at least one generation ahead of TSMC. I haven’t been tracking what happened since, but Intel was ahead on this one process improvement. And it’s not a small one, but it doesn’t cancel out their other missteps. Not by half.
ajross · 2 years ago
This isn't a comparison of shipping processes though, it's just roadmap products. And in fact until this announcement Intel was "ahead" of TSMC on the 2026 roadmap.
Alifatisk · 2 years ago
This is a response to something that’s coming in 2 years? Incredible how far ahead they are
tgtweak · 2 years ago
Fabs have a 3+ year lead time from prototyping the node to having it produce production wafers.
blackoil · 2 years ago
This will also come in 2026, so not that far ahead.
sylware · 2 years ago
It seems the actually impressive thing here is the 15-20% lower power consumption at the same complexity/speed as N2.
gosub100 · 2 years ago
The "Hollywood accounting" transistor density aside, I think a new metric needs to become mainstream: "wafers per machine per day" and "machines per year manufactured".

Getting these machines built and online is more important than what one machine (of which fewer than 6 might be made per year) can do.

The information, I'm sure, is buried in their papers, but I want to know what node processes are in products available now.

tvbv · 2 years ago
Could someone ELI5 backside power delivery, please?
blagie · 2 years ago
ELI5:

ICs are manufactured on silicon disks called wafers. Disks have two sides, and traditionally, everything was done on top. We can now do power on the bottom. This makes things go faster and use less power:

* Power wires are big (and can be a bit crude). The bigger the better. Signal wires are small and precise. Smaller is generally better.

* Big wires, if near signal wires, can interfere with them working optimally (called "capacitance").

* Capacitance can slow down signals on the fine signal wires.

* Capacitance also increases power usage when ones become zeros and vice-versa on signal wires.

* Big wires also take up a lot of space.

* Putting them on the back of the wafer means that things can go faster and use less power, since you don't have big power wires near your fine signal wires.

* Putting them in back leaves a lot more space for things in front.

* Capacitance between power wires (which don't carry signal) actually helps deliver cleaner power too, which is a free bonus!

This is hard to do, since it means somehow routing power through the wafer. That's why we didn't do this before. You need very tiny wires through very tiny holes in locations very precisely aligned on both sides. Aligning things on the scale of nanometers is very, very hard.

How did I do?
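To put a toy number on the capacitance point, here is a parallel-plate sketch in Python; every dimension below is invented purely for illustration, not a real process value:

    EPS0 = 8.854e-12   # F/m, vacuum permittivity
    EPS_R = 3.9        # relative permittivity of a SiO2-like dielectric

    def coupling_cap(length_m, height_m, gap_m):
        """Parallel-plate capacitance between two facing wire sidewalls."""
        return EPS0 * EPS_R * (length_m * height_m) / gap_m

    # Hypothetical: 10 um signal run, 50 nm tall, 30 nm from a power rail.
    c = coupling_cap(10e-6, 50e-9, 30e-9)
    r = 5_000          # ohms, an invented driver + wire resistance
    print(f"C ~ {c * 1e15:.2f} fF, RC delay ~ {r * c * 1e12:.2f} ps")

Move the power rail to the other side of the wafer and that coupling term mostly drops out of the signal wire's RC budget.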

AdamN · 2 years ago
There's still the question though of why they didn't do this decades ago - seems very obvious that this layout is better. What changed that made it possible only now and not earlier?
pedrocr · 2 years ago
> You need very tiny wires through very tiny holes in locations very precisely aligned on both sides. Aligning things on the scale of nanometers is very, very hard.

Do you need to align that precisely? Can't the power side have very large landing pads for the wires from the signal side to make it much easier?

pmx · 2 years ago
Thanks this was a very useful summary for me!
1oooqooq · 2 years ago
So the wafer is a huge ground plane? I still can't see how one side is separate from the other if it's the same block.
vintendo64 · 2 years ago
Thank you for the explanation! It really helped.

bayindirh · 2 years ago
Signal wires and power wires are not routed together, but separately.

As a result, transistors are sandwiched between two wiring stacks, resulting in lower noise but making heat removal a bit harder, AFAICS.

See the image at: https://www.custompc.com/wp-content/sites/custompc/2023/06/I...

Tempest1981 · 2 years ago
How many power layers are there? Or how do the power "wires" cross over each other?

Are they sparse, like wires? Or solid, like the ground plane of a PCB? Are there "buried vias"?

__s · 2 years ago
https://www.imec-int.com/en/articles/how-power-chips-backsid... has some explanation that's a bit beyond ELI5, but they have 2 pictures:

frontside: https://www.imec-int.com/_next/image?url=%2Fsites%2Fdefault%...

backside: https://www.imec-int.com/_next/image?url=%2Fsites%2Fdefault%...

Another graphic samsung used: https://www.thelec.net/news/photo/202210/4238_4577_1152.jpg

From https://www.tomshardware.com/pc-components/cpus/samsung-to-i...

> Typically, backside power delivery enables thicker, lower-resistance wires, which can deliver more power to enable higher performance and save power. Samsung's paper noted a 9.2% reduction in wiring length, enhancing performance

barfard · 2 years ago
The pics go to 404.
Keyframe · 2 years ago
Instead of the signal and power paths going through the same side (frontside), causing all sorts of issues and inefficiencies, they're decoupling where power comes from (the other, backside, err, side).

More importantly, Intel saw it as one of the two key technologies for its move into the angstrom era, and touted that it would be the first to bring it to life (not sure they did)... so this seems to be more of a business power move.

more on all of it from anandtech: https://www.anandtech.com/show/18894/intel-details-powervia-...

brohee · 2 years ago
Not ELI5, but I found this 20-minute video pretty informative: https://www.youtube.com/watch?v=hyZlQY2xmWQ
wwfn · 2 years ago
I'm also clueless. First search brings up https://www.tomshardware.com/news/intel-details-powervia-bac... with a nice diagram

It looks like the topology for backside power moves the transistors to the middle, so "signal wires and power wires are decoupled and optimized separately" instead of "compet[ing] for the same resources at every metal layer".

gostsamo · 2 years ago
An article on the topic, though about Intel:

https://spectrum.ieee.org/backside-power-delivery

AnimalMuppet · 2 years ago
So, if I understood that correctly, "frontside" actually is the top side, and "backside" is actually the bottom side. They're delivering power from the bottom (substrate) side, and connecting the data on the top side.
vollbrecht · 2 years ago
There is a good YouTube video that explains it: https://www.youtube.com/watch?v=hyZlQY2xmWQ
brennanpeterson · 2 years ago
Half your wires deliver power, half deliver signal. So if you do both on the same side, you need twice the density of wires. If you split the delivery into two parts, you get double the density without needing to make things smaller.
blagie · 2 years ago
This isn't quite right. Big wires (ideally entire planes) deliver power. Small wires deliver signal. Half and half isn't the right split, and you don't want to make power wires smaller.

The very different requirements of the two is where a lot of the gains come in.

ThinkBeat · 2 years ago
I am not sure I understand backside in this instance and the illustration in the article didn't entirely help.

In general, at least in older times, one side of the CPU has all the nice pins on it, and the motherboard has a pincushion that the pins match nicely. On top of the CPU you put a HUGE heatsink, and off you go.

In this configuration the power delivery must be via the pincushion, through some of the pins.

Intuitively, that sounds to me like the power is already coming in the backside? But given that this is a big deal, I must be missing something.

Is the power fed from the "top" of the CPU, where the heatsink would sit?

sgalloway · 2 years ago
Here's a very good YouTube video on the subject of backside power delivery that I watched a few days ago: https://youtu.be/hyZlQY2xmWQ?si=JuXtzRy0U0onEzoe
tgtweak · 2 years ago
A16 in 2027 vs Intel's 18A in full swing by 2026 feels like a miss on TSMC's part. This looks like an open door for fabless companies to try Intel's foundry service.
akmittal · 2 years ago
Intel has made lots of big promises; they have yet to catch up to TSMC.
SilverBirch · 2 years ago
There are definitely going to be people taking a bet on the Intel foundry, but Intel has tried this before and it has worked badly.
onepointsixC · 2 years ago
Sounds like an easy short for you then.
crowcroft · 2 years ago
Intel hasn't got a great track record in recent history. Really hoping they can hit their timeline though.