What exactly is the reason the road to 2nm may not be worth it? I don't see the article giving any answer to that.
Qualcomm is comparing 10nm and 7nm, and those figures are not as high as a full generational node jump should give, which is perfectly fine because TSMC's 10nm is a half node. They should be comparing 14/12nm with 7nm, and there are multiple iterations of 7nm. Does it scale as perfectly as before? No.
It isn't about the complexity and design rules; it is all about cost. That only means chips are going to get more expensive going forward and only a few companies will be able to afford them, but there is nothing on the roadmap suggesting 2nm isn't worth it.
This article reminds me of the old days, 10 to 15 years ago, when people kept proclaiming the end of Moore's law.
What's different now is that everything is running out of steam, not just one component of the process. Lithography is getting exponentially more complicated: the cost of a mask set for a 7nm-class process is obscene. Materials are getting exotic: the serious discussion around use of ruthenium, which is notoriously difficult to work with at nanoscale, tells you how desperate people are getting. Even the fundamental structures are getting bizarre: FinFETs got us to ~5nm, but nanosheet or gate-all-around structures are going to be necessary to go further. And all of this exoticism is expensive, so chips just aren't getting cheaper to make like they used to. Plus reliability is going to hell and costs are out of control....
Basically, the stars are aligning to spell the end of traditional CMOS scaling. 7nm nodes are in early production now; 5nm is on the way; 3nm is likely to happen; 2nm may or may not depending on the economics; 1nm is unlikely but you never know; and sub-nm CMOS nodes are probably just not going to happen. (Don't confuse actual nodes with what happens when marketing gets involved. Some twat sticking a "2nm" label on a 20nm-class process doesn't actually make it any better!) This isn't to say that semiconductors will stop improving, just that it sure won't be classic CMOS any more.
One under-explored area I think we're going to see a lot more of is backfill of older processes. Very few things actually need the latest and greatest digital process: main processors (CPUs, APUs, GPUs) are about it. I think there's plenty of room for improving cost on older processes to make them more widely deployable. To some extent we're seeing that now with SOI processes like 22FDX, but I think that trend will continue. (Unfortunately, there's a lot of stuff that just doesn't shrink, period: some analog stuff, all HV I/O cells, and just about all power management.) So the potential upsides aren't universal, but I think there's a big market out there for stuff that can benefit from smaller processes but can't afford the obscene cost of a multipatterning mask set.
>What's different now is that everything is running out of steam.
At least not until 2nm. The TSMC roadmap, with its Grand Alliance as Morris Chang likes to call it, has everything mapped out to 3nm. That is roughly equivalent to, or slightly better than, Intel's 7nm. We could do 450mm wafers when the time comes that they make economic sense. 3nm is likely 2021 / 2022; it is a little hard to tell what roadblocks we will hit five years down the road.
I think we are more in agreement than disagreement. I just don't like how the article's headline and handpicked statements try to convince whoever is reading it that we can't do 2nm. We can, and we absolutely will. I would be very surprised if 1.3 billion smartphones sold per year, plus all the money in AI / deep learning as well as GPUs, could not absorb the increase in cost. AWS is still expanding at an unbelievable pace, and China and India still have lots of room. Bitcoin mining chips have already helped recoup some of the 16 / 7nm R&D cost (so bitcoin is not all useless after all, as some said). We have cars that will need lots more transistors for autonomous driving, and I doubt a few thousand dollars' increase in BOM would be a problem for those car buyers. As long as we can keep increasing the total addressable market, 3nm or even 2nm cost should not be a problem.
>One under-explored area I think we're going to see a lot more of is backfill of older processes.
We already are. Demand for 28nm capacity has outstripped supply for a while now. TSMC is working on further simplifying and lowering the cost of 12nm, so it should be ready to replace 28nm by 2020 or later.
Within the semiconductor industry, I am much more worried about DRAM cost not coming down.
> This isn't to say that semiconductors will stop improving
The funny thing is they might stop being semiconductors actually.
> Digital logic integrated circuits (ICs) implemented with complementary metal-oxide-semiconductor (CMOS) transistors have a fundamental lower limit in energy efficiency because transistors are imperfect electronic switches, having non-zero OFF-state current (IOFF) and finite sub-threshold slope. In contrast, electro-mechanical switches (relays) can achieve zero IOFF and perfectly abrupt switching characteristics.
https://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-...
And there has been a massive amount of work recently on optical technologies to use in place of transistors.
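To put a number on the "finite sub-threshold slope" part: the best any conventional MOSFET can do is set by thermodynamics, roughly 60 mV of gate swing per decade of current at room temperature. A quick sketch (Python, standard physical constants; the five-decade OFF-current target in the comment below is just an illustrative assumption):

    import math

    K_B = 1.380649e-23     # Boltzmann constant, J/K
    Q   = 1.602176634e-19  # elementary charge, C

    def subthreshold_swing_mv(temp_k):
        """Ideal (best-case) MOSFET subthreshold swing: the gate-voltage
        step needed to change drain current by one decade."""
        return (K_B * temp_k / Q) * math.log(10) * 1e3  # mV/decade

    print(f"{subthreshold_swing_mv(300):.1f} mV/decade at 300 K")  # ~59.6
    # Suppressing OFF-state current by five decades therefore needs
    # ~0.3 V of swing no matter how clever the process, which is why
    # supply voltage (and switching energy) stopped scaling. A relay
    # sidesteps this: its OFF current is genuinely zero.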
> “The abstractions we worked so hard to maintain are becoming porous as secondary and tertiary effects bubble up through the whole flow … from node to node, they intensify or change character,” he said, expressing optimism that “all of these effects will get solved.”
Might you be able to provide a little more information about which effects are becoming problematic?
This stuff is fascinating, and possibly most wonderful in that it's somebody else's problem, but I'm curious what "obscenely expensive" means in this context.
From one of the interview subjects right in the article: "Area still scales in strong double digits..." That ought to disprove the headline all on its own.
The starting point in the article seems sensible, but unremarkable: 2nm is not inherently valuable. It needs to pay out in smaller chips, lower power usage, faster per-transistor speeds, or lower cost per FLOP. But the article is written like a shrink is only valuable if it does all of those things, when in reality it's still worthwhile to get any of them.
Assuming the article is accurate, per-transistor speeds may stall and cost per FLOP may even move backwards (that's not new though - just a question of how fast cost savings catch up with new processes). Power savings might slow, but look substantial at 7nm and 5nm and there's no pitch for why 2nm would differ.
Meanwhile, area scaling is still a major improvement, just as you'd expect. Perhaps that won't be reason enough for desktops and servers, but that's been an open question before now. Lowering minimum chip size creates qualitative change in smartphones, IoT hardware, sensors, etc. It's in that "world market for five computers" vein of only looking at the use of a development in already-extant devices.
As a last note, it's downright sleazy for the article to write "'...It’s not clear what will remain at 5 nm,' said Penzes, suggesting that 5-nm nodes may only be extensions of 7 nm." From the first half of his quote, Penzes is clearly saying that it's not clear which advances will persist through 5nm, not that it's unclear whether any will persist.
Being smaller isn't inherently worth anything for a typical CPU. It's actually a negative for cooling. Even in a phone CPU/GPU, power use is an order of magnitude more important than die size. A more efficient chip means that even if you can't make your battery 2% bigger you still come out far ahead.
You're more optimistic than the article about power use, but if you agree it makes an okay case about speed and cost, and the only strong improvements are on area, then we're not in a great spot.
One of the main problems with reduced feature size is that the interconnect layers become thicker relative to their width, which causes crosstalk and increased capacitance in the interconnect, which in turn raises impedance and limits the maximum frequency.
In the olden days, polysilicon and metal wires could be thought of as having a square cross section. If a wire needed to carry a certain current, it had to have a certain cross-sectional area, so when one dimension (the width) shrank, the other dimension (the height) had to grow to keep carrying that current.
Around the time chips were getting into the GHz range, wires had a small footprint but were growing tall relative to it. Instead of square wires, they ended up looking like skyscrapers. All that parallel sidewall area facing the neighboring wires makes the wires act like capacitors, and that sets a lower limit on feature size.
Edit: so if the ratio of height to base doubles, the capacitance almost doubles, which decreases the bandwidth by almost half:
https://en.wikipedia.org/wiki/Electrical_impedance#Capacitiv...
https://www.electronics-tutorials.ws/accircuits/ac-capacitan...
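A toy parallel-plate model makes that concrete. Sketch below in Python; the dimensions and the 100-ohm driver resistance are made-up illustrative values, not any real process, and real wires have fringing fields, which is why the capacitance only "almost" doubles in practice:

    import math

    EPS_0 = 8.854e-12  # vacuum permittivity, F/m
    K     = 3.9        # assumed relative permittivity (SiO2-class dielectric)

    def coupling_cap_per_mm(height_nm, spacing_nm):
        """Sidewall coupling capacitance between two adjacent wires,
        treated as a parallel-plate capacitor, per mm of wire length."""
        c_per_m = EPS_0 * K * (height_nm * 1e-9) / (spacing_nm * 1e-9)
        return c_per_m * 1e-3  # F per mm

    def rc_cutoff_hz(r_ohm, c_farad):
        """-3 dB bandwidth of a simple RC low-pass."""
        return 1.0 / (2.0 * math.pi * r_ohm * c_farad)

    c_square = coupling_cap_per_mm(height_nm=100, spacing_nm=100)  # square wire
    c_tall   = coupling_cap_per_mm(height_nm=200, spacing_nm=100)  # 2x height:base

    print(f"square wire: {c_square * 1e15:.1f} fF/mm")
    print(f"tall wire  : {c_tall * 1e15:.1f} fF/mm")  # doubles with aspect ratio
    print(f"bandwidth drops by {1 - rc_cutoff_hz(100, c_tall) / rc_cutoff_hz(100, c_square):.0%}")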
We've gotten complacent. Software keeps getting faster despite bad coding, just because of the improvements in hardware.
Engineers have known for a long time that we aren't as efficient as we could be.
[0] Not attempting to beat up on either of those companies.
We've also just optimized for features, time to market, and programming approaches that don't require the most highly skilled and trained developers. You can change all that, of course (and we do for some applications/situations), but you don't get significantly more efficient code for free. It probably takes longer to write, and a lot of current developers probably can't do it.
I'm not a low level developer (whatever that means today). I've written a very minimal amount of production C. Everything else has been higher level languages. So I apologize in advance for my ignorance.
Isn't the problem really memory bandwidth? Aren't most CPU cycles "wasted" on speculative execution while waiting on fetches from memory slower than CPU caches?
Wouldn't a hardware model where all of the RAM (SRAM?) was on the CPU die (fabrication and DRAM clock speed aside) solve this problem and make everything an order of magnitude faster? What am I missing? Is this really the programmers fault?
The CPU literally spends most of its time waiting on my instructions, not choking on them, so why is this my fault for being lazy?
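A rough way to see the memory wall the parent is asking about, from any language (Python, assuming numpy is installed): the exact same summation over the exact same bytes, differing only in access order, so any slowdown is purely cache misses:

    import time
    import numpy as np

    N = 10_000_000
    data = np.arange(N, dtype=np.int64)

    patterns = {
        "sequential": np.arange(N),             # prefetcher-friendly
        "random":     np.random.permutation(N), # cache-hostile
    }

    for name, idx in patterns.items():
        t0 = time.perf_counter()
        total = int(data[idx].sum())            # identical arithmetic either way
        print(f"{name:10s}: {time.perf_counter() - t0:.3f} s (sum={total})")
    # The random walk is typically several times slower: the ALUs sit
    # idle waiting on DRAM. On-die SRAM big enough to hold everything
    # would fix it, but SRAM costs ~6 transistors per bit, so gigabytes
    # of it are wildly uneconomical in die area.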
Disagree. Much of it keeps getting slower.
I don't know, Windows 10 seems to be faster even on older hardware. Upgrading from 7 to 10 makes the machine feel MUCH more responsive. It seems to bring life back to the old machines around here.
Have you tried the latest fad? Serverless functions: you take a virtual machine with maybe 128 to 512 MB of memory and have it run a function that is maybe a few hundred lines of code. I double dare you to find something less efficient. Even better, the speed between machines isn't going to get much faster, since some jerk has declared the speed of light to be a constant, so say hello to some fun latency issues. To top it off, after you invest massive amounts in serverless functions, with all their lock-in, you have no control over what Amazon, Microsoft or xxx are going to be charging you in 5 or 10 years. Particularly if the fad falls out of favor and these companies decide to wind the operations down.
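The speed-of-light jab is easy to put numbers on. A sketch of round-trip-time floors through fiber (the distances and fiber index are rough illustrative assumptions):

    C_VACUUM = 299_792_458  # m/s
    N_FIBER  = 1.47         # assumed refractive index of silica fiber

    def rtt_floor_ms(distance_km):
        """Lower bound on round-trip time through fiber: physics only,
        ignoring routing, queuing, and serialization delays."""
        one_way_s = distance_km * 1e3 / (C_VACUUM / N_FIBER)
        return 2 * one_way_s * 1e3

    for name, km in [("same metro", 50),
                     ("cross-continent", 4_000),
                     ("antipodal", 20_000)]:
        print(f"{name:15s} ({km:>6} km): >= {rtt_floor_ms(km):6.1f} ms RTT")
    # No amount of money paid to a cloud vendor gets you under these.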
Apple doing an optimization focused update with iOS 12 is a breath of fresh air.
My iPad Pro 9.7" (one version old, came before the 10.5") drops frames in iOS 11's full screen blur transition animations. Before iOS 11 it was fine, but either the iOS team didn't test animations on anything but their brand new hardware, or they tested and didn't care if it ran smoothly.
Having one of the OS's most frequent animations choke on anything but the brand newest GPU in the $650+ model? Not a great look, especially with Apple's reputation for making devices that last a while.
Fingers crossed that iOS 12 fixes it.
iOS 12 is indeed faster, but it seems to come at the cost of increased battery usage. From my understanding, the CPU is ramped up to maximum performance when needed, like when launching applications and other demanding tasks, then dialed back down when not needed. It could just be the normal beta battery-usage issues, but mine gets hit hard.
> All the more reason to go distributed and parallel, and embrace simpler architectures that can support this.
Gimme a coprocessor that's just 1000 Pentium I's on the same die.
It's very easy to stamp out more ALUs on a chip than we can reasonably fill with data (and, arguably, we've already hit that point with consumer hardware). Improving parallelization nearly always comes at a cost to serial performance, and if you have code that requires a latency-sensitive, unparallelizable critical loop... well, you're basically saying "screw you."
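That last point is Amdahl's law: no matter how many cores you add, the serial fraction caps the speedup. A quick sketch:

    def amdahl_speedup(parallel_fraction, cores):
        """Amdahl's law: overall speedup when only part of the work
        can be spread across cores."""
        serial = 1.0 - parallel_fraction
        return 1.0 / (serial + parallel_fraction / cores)

    for p in (0.50, 0.90, 0.99):
        row = ", ".join(f"{n:>4} cores: {amdahl_speedup(p, n):6.1f}x"
                        for n in (4, 64, 1024))
        print(f"{p:.0%} parallel -> {row}")
    # Even 99%-parallel code tops out near 100x however many cores you
    # add; a serial critical loop that is half the runtime caps you at 2x.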
https://millcomputing.com/
They're working on a new processor architecture, which is very interesting in many, many ways.
And they're working on ways to parallelize existing code, in ways that can't really be done with conventional superscalar architectures.
http://www.gotw.ca/publications/concurrency-ddj.htm
When it became clear around that time that processor frequency was tapped out and that multi-core was the future, there was a lot of hand-wringing about parallel programming. There weren't the tools. There weren't the skills. Etc. For a variety of reasons, the issues turned out to be less of a problem than feared overall.
But you were still getting more transistors at the time. You just had to use them in different ways.
There are a lot of other levers for improving price/performance, such as specialized hardware architectures. However, CMOS process scaling has been a singularly powerful approach; I've read it has delivered something like 3500x over the course of its lifetime. It's reasonable to ask whether any combination of other techniques can come close to that, and what the effect is if price/performance stagnates.
You can always just add more computers in "the cloud" I suppose. But, at some level, increased functionality depends on better price performance.
The march of faster and faster transistors seems to be headed for an interregnum until some new technology can replace CMOS. Price per compute might still continue to fall though. I hope that we'll see an era of more architectural diversity coming up as the end of the march of nodes mean that a design can take longer to pay off and still be economical.
We're already seeing this in machine learning. I used to follow the processor space fairly closely and I'd see various companies come out with designs that were specialized for some workload or other. The problem was that you could instead wait for a couple years and x86 would be just as fast.
You also have some other things going on like open source software, centralized computing in clouds, etc. that make having hardware that's optimized for some specific workload much more tenable.
>The problem was that you could instead wait for a couple years and x86 would be just as fast.
This still holds for the majority of users. Only tier-1 dotcoms can run stuff on GPUs economically. For everybody else, just buying many times, possibly a few orders of magnitude, more commodity x86 machine time still makes sense.
I always remember the saying of my ops unit head when I worked at an advertising network. It went something like: "a new coder like you who can find a fancy algo to do things A and B faster pays off in two years, but 10 new servers pay off in just 2 months."
And I am still convinced that fabs will continue mercilessly squeezing CMOS till the last drop of blood for at least a decade. There are tons and tons of avenues to make cells smaller, cooler, and more performant that don't involve further process down scaling or switching to a next gen semiconductor.
> centralized computing in clouds [could] make having hardware that's optimized for some specific workload much more tenable.
I think this is the key. GPUs are pretty much the only consumer-level specialized co-processors, and those have been selling like gang busters. Plot twist, it's the data centers that are buying them up... to the point that the manufacturers tried to prevent you from buying more than one at a time.
Of course specialized processors need to have proper networking and storage proximity to really deliver value. May be more challenging to offer a specialized service like this than you'd hope. Perhaps the big cloud vendors will dominate again, so chip manufacturers need to just hope for a few big contracts to Amazon / Microsoft / Google.
IMO this will be the final forcing function for silicon photonics. Ultimately, if you cannot scale on-die you have to scale off-die. Plus, this takes away any further scaling concerns about on/off-board components. Lastly, this will have major implications for processor architecture, because memory access latency is huge and execution is often stalled waiting for memory. If you have any doubts about that, just look at the amount of die area dedicated to caches.
AFAIK Intel and Luxtera are the two main leaders in silicon photonics, so I wouldn't count Intel out yet.
Off-topic: are 3D chips still being researched? I mean not two dies glued together, but true multilevel semiconductors. What are the biggest challenges (I guess heat is one, and the semiconductor lattice another), and how big would the advantages be?
Or a tube-like cylindrical thing?
It is cheaper to make a chip four times the size than to double the number of process layers.
The trick is to add 3D features that don't involve making an entirely new device on top of another. This is the case for RAM, MRAM, and flash memory: devices are shared within the stack, and the only parts scaled vertically are the charge/spin-carrying parts, not the amplifiers, backend, data lanes, or other devices.
https://en.wikipedia.org/wiki/3D_XPoint
https://en.wikipedia.org/wiki/Flash_memory#Vertical_NAND
There was a lot of talk about making 3D standard cells (the stuff normal, non-memory devices are made of). The amount of work is immense. Every year there are a dozen cookie-cutter PhD projects like "a 3D NAND/XOR/INVERT device that is N percent smaller than before," but it will take years to cover and unify the whole cornucopia of devices in cell libraries. Only once that's done will major fabs think of switching, and no fab will add many more litho layers just to reduce the footprint of a single device or macrocell.
(2) How the heck would you cool it?
2) I seem to remember IBM was looking at microfluidics for both power and heat distribution, like our blood.
I just learned about microfluidics[1] the other day, so now I'm seeing possible applications everywhere. I wonder if this is one such possible application.
[1] https://en.wikipedia.org/wiki/Microfluidics
Well, there has to be an end to exponential growth somewhere... but I remember being told for years that Moore's law is ending. I recently saw old slides from a professor at my university, from the 2000s, full of "the end of exponential growth is near!"-style messages.
Moore's law did stop being true right around ~2006. And if you look at overall CPU and memory performance, rather than transistor density, the story is worse. It's been a long time since you could count on a computer being twice as fast any time soon.
Sort of. Moore's law has meant a lot of things that were all tied together with transistor scaling, back when voltage didn't need to be scaled down with each new node to fight leakage. Now that leakage is a major concern, a lot of what would have been the old gains from shrinking a transistor go away. There are still gains, but they're a lot smaller than they used to be. But the oldest sense of Moore's law, doubling the number of transistors you can economically fit on a piece of silicon, is very much with us. The doubling period has gone back to the original 2 years, rather than the 18 months we were able to achieve for a while, but it's still happening.
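Those two cadences diverge more than you'd guess, which is part of why it feels like Moore's law "stopped" even while counts keep doubling. A quick sketch of what the doubling period alone does (pure arithmetic, no process data):

    def growth_factor(years, doubling_period_years):
        """Cumulative transistor-count growth after steady doubling."""
        return 2.0 ** (years / doubling_period_years)

    for years in (5, 10):
        fast = growth_factor(years, 1.5)  # the old 18-month cadence
        slow = growth_factor(years, 2.0)  # the current ~2-year cadence
        print(f"after {years:2d} years: {fast:6.1f}x at 18 months vs {slow:5.1f}x at 2 years")
    # A decade at 18 months gives ~100x; at 2 years, ~32x. The cadence
    # alone costs 3x, before counting the per-transistor speed and power
    # gains that no longer come along for free.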
So I am not sure what to believe.
What's nice (at least I like it) is that once performance slowed down, so did software requirements. Developers try a bit harder to optimize their code.
When do we start using optical interconnects?
Is the speed of light for light-in-some-substrate sufficiently faster than the speed of light for electrical charge? Is there any speed penalty for converting between the two?
The speed of light is always the speed of light; you have to observe at a sufficiently small scale to see this. Light in a medium travels between obstacles in a less-than-linear trajectory and thus has a lower "apparent" average velocity. Conversion from photonic to electronic can be very fast, such as photon emission during valence-electron demotion, but in practice a junction has latency and fuzziness due to thermal noise and quantum uncertainty... that is where the penalties will occur.
Conceptually, there are two ends of the stick: does light actually move, or does light stand still while spacetime unfolds at "the speed of light"?