CydeWeys · 10 years ago
I remember reading articles like this a decade ago saying that transistors couldn't get any smaller because they were already approaching the wavelength of visible light. We've since switched to UV light per this article, and there are still much shorter wavelengths available. The size of the atoms that make up the wafer itself could be a problem too, but there are other materials we can switch to that are less susceptible to quantum tunneling. And that's not even touching true 3D chip design (right now it's a pile of layers that is essentially 2D).

Point being, there are many billions of dollars of revenue at stake here, and chip companies are doing their damnedest to solve these problems. They've solved every challenge so far, and there's no real reason to think these latest challenges are fundamentally unconquerable in ways the previous ones were not. An article highlighting the current challenges is useful, but one positing that they can't be overcome is sensationalist.

kragen · 10 years ago
On the contrary, this article omits mentioning most of the limits we're running into, probably because the reporter couldn't understand them. Dennard scaling ended almost a decade ago because of leakage. You can still put more transistors on a chip, but the benefit of doing so is going away: it allows you to put more hardware in there, but you can't afford to run it all the time because it overheats (the "dark silicon" problem), so instead of giving you a parallel speedup, you're limited to providing a wider range of alternative hardware resources. Reversible computing was an attempt to solve the power-dissipation problem, but it doesn't help with leakage currents, and it's also essentially unachievable as a practical reduction in dissipation with current technology.
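
To put rough numbers on the end of Dennard scaling (a schematic sketch, not real process data): dynamic power per transistor goes as C·V²·f, and under classical Dennard scaling the voltage reduction exactly cancels the density increase; once voltage stops scaling, power density climbs every generation, which is where dark silicon comes from.

```python
# Rough illustration of why the end of Dennard scaling causes "dark silicon".
# Dynamic power per transistor ~ C * V^2 * f; transistors per area ~ 1/s^2,
# where s < 1 is the linear scale factor. All numbers are schematic.

def power_density(scale, voltage_scales=True):
    """Relative power per unit area after shrinking dimensions by `scale`."""
    c = scale                              # capacitance shrinks with dimensions
    f = 1.0 / scale                        # gates get faster, frequency rises
    v = scale if voltage_scales else 1.0   # Dennard: V scales too; post-~2005: it can't
    per_transistor = c * v**2 * f
    density = 1.0 / scale**2               # more transistors per unit area
    return per_transistor * density

s = 0.7  # one process generation (~0.7x linear shrink)
print(power_density(s, voltage_scales=True))   # ~1.0: power density stays constant
print(power_density(s, voltage_scales=False))  # >1: the chip runs hotter each shrink
```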

(I think it's a little strange to call light with a wavelength of under 10nm "ultraviolet". It's 40 times more energetic than the single octave we call visible light; that's five or six octaves away. I think it's usually called "X-rays".)

True 3D chip design is older than 2D chip design. Bardeen and Brattain's transistor was a 3D chip. That's why it was impractical for 12 years until the planar fabrication process in 1959. Planar fabrication is not only technically important, but also necessary for our current approaches to cooling, which is getting gradually more crucial.

I agree that we're not even close to the ultimate limits of computation. If your lambda is 10 nm and your wires are 20 nm across, that's still on the order of 200 atoms across, and it's been demonstrated that you could just use one: that's almost 16 Moore doublings, or 32 years. And certainly when we figure out how to do reversible computation and molecular nanotechnology, we can reach that limit. But CMOS and X-ray lithography aren't on a path to do that.
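
The arithmetic above can be checked directly (a sketch, assuming ~0.1 nm atomic spacing, so ~200 atoms across a 20 nm wire): each Moore doubling halves area, i.e. shrinks linear dimensions by √2.

```python
import math

wire_width_atoms = 200  # ~20 nm wire / ~0.1 nm per atom (assumed spacing)

# Each Moore doubling halves area, shrinking linear dimensions by sqrt(2),
# so linear halvings must be doubled to get area (transistor-count) doublings.
linear_halvings = math.log2(wire_width_atoms)  # ~7.6x halvings of linear size
moore_doublings = 2 * linear_halvings          # ~15.3 area doublings
years = moore_doublings * 2                    # at one doubling per ~2 years

print(round(moore_doublings), round(years))    # roughly 15 doublings, ~31 years
```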

CMOS had a great run, totally dominating electronics from about 1982 to about 2017, pushing every alternative process to the margins with its low cost and the miracle of Dennard scaling: ECL, Josephson junctions, CCDs, DNA computing, vacuum tubes, core, chalcogenide glasses. But now that's over.

But don't forget that electronics started in 1904. Vacuum tubes, planar bipolar transistors, planar bipolar ICs, and planar CMOS ICs have each had their time in the sun. Whatever comes next will be something totally different, maybe as different as vacuum tubes are from planar transistors, and there's no reason to expect Intel or TSMC to be the one to pioneer it, just as Polaroid, Kodak, and Nikon weren't the companies that pioneered semiconductor imaging sensors or flash memory.

I'd be surprised if it happened in the US or Europe. Israel, China, Korea, or Japan, possibly.

ChuckMcM · 10 years ago
Excellent summary. Note that deep trench transistors (aka FinFETs) helped significantly with the leakage problem, but heat will always be an issue.

I've speculated that if we could figure out how to manufacture it, we could get electron-scale computation out of an artfully constructed graphene matrix. Basically a tiny Pachinko machine with electrons for balls. But at that scale you're going to need redundancy to get around quantum effects. We know you can build transistors at 7nm [1], but to what end?

[1] http://arstechnica.com/gadgets/2015/07/ibm-unveils-industrys...

interdrift · 10 years ago
Great talk!
Quanticles · 10 years ago
I don't think anyone doubts that we'll continue to make improvements in process technologies until the sun explodes, the main questions are pace and benefits.

Yes, we can create shorter channels using more/different materials. Yes, we can get stronger gate fields using more/different materials. Yes, we can start building vertically and cool the chip using microfluidics. But, all of these things take more time than previous scaling efforts, and are going to have quantum scaling benefits instead of the classical scaling benefits that we had before.

Even now, companies have a difficult choice when switching to 16nm - is it worth the cost and effort? The design rules also get more complex each new generation, making chips harder to physically design on top of the less-manageable transistors. We now have a bad combination of high supply and low demand - new processes double supply due to the shrink, but reduce demand due to design cost and difficulty.

edit: Newtonian => classical

weland · 10 years ago
> Even now, companies have a difficult choice when switching to 16nm - is it worth the cost and effort?

The alternative is to spread processing power over multiple cores and have programmers write code that takes advantage of that. We have a thirty-year-long history of failures in that department. It has gone from "this is how we'll be writing code tomorrow" in the '80s to "this will FINALLY be the nail in the coffin of Moore's law, just wait and see" since the late 1990s.

The cost is worth it because there are a lot of high-profile industries depending on it, and there is no credible alternative yet. Until that alternative -- which will be hardware in nature, not software -- comes up, the current trend is all we have.

jessriedel · 10 years ago
The opposite of 'quantum' is 'classical', not 'Newtonian'.
lsc · 10 years ago
Now, I'm not even close to being able to weigh in on the physics side here... but my observation, as someone who has spent several hundred grand on servers in the last decade is that Intel is run by profit-maximizing professionals.

Look at how much better/faster/cheaper the intel stuff got after AMD's release of the hyper transport opteron.

and look at how much Intel tried to collect rent from their position of market dominance (say, by making us buy rambus and then FBDIMMs) before that.

AMD needs to step up their game.

There's another market force at work, too, and that is demand. On the consumer side of things, at least, there isn't any need for faster x86 computers, because Microsoft isn't doing its job. It used to be that every three years, Microsoft would release another office suite that everyone needed, and that ran like a dog if your computer was more than a year or two old.

I just upgraded my games box, a decade-old core2duo, to windows 10. Works fine (modulo spyware bullshit) - Microsoft has been focusing on the "mobile" market and has been putting a lot of effort into slimming down.

That, and (I'm shocked to say this) it turns out that the people crowing about 'mobile is the future' were mostly right. When I was in high school, even my poor friends had desktops. Now I know a lot of people, even technical people, who only have laptops... and a lot of the people I went to high school with got rid of their desktops and now only have cellphones and tablets; nothing with a full keyboard and Intel CPU at all.

If that trend continues, I would predict that we'll eventually switch our servers to whatever architecture the mobile devices use. AMD and Cavium and a handful of other companies have been talking about doing that with ARM, but so far there's nothing I can buy except at engineering sample prices.

achamayou · 10 years ago
You can buy HP Moonshots with ARM cartridges for relatively realistic prices. PayPal is supposedly running some of their stuff on them.
lsc · 10 years ago
I do not have the energy to go through sales... what is 'realistic' here?

And can they compete in CPU power per watt with the Intel Xeon D boards I can get from my local Supermicro distributor? Those look pretty sweet; the only real problem is the four-DIMM limit. I'm seriously considering those with 32GiB modules right now, just because they have such nice watts per FLOP.

Long-term, for it to be a realistic solution, I'll need to be able to buy compatible parts from different companies.

transfire · 10 years ago
10 years? It has already happened. Consumer chips haven't gotten much better in a decade. Most gains have been applied to power efficiency for mobile use. An unfortunate side effect is that chip makers are learning they can milk money out of smaller improvements. I doubt we will ever return to the old sub-2-year Moore's law cadence.

On the upside, we might get lucky and some new innovation will come along and give us a quick bump: tech like optical interconnects, 3D chip manufacturing, memristors, etc.

CydeWeys · 10 years ago
Huh? Moore's Law is about transistor density, not clock speed. It absolutely hasn't paused over the past decade. See here: http://education.mrsec.wisc.edu/SlideShow/images/computer/Mo...

Processors have also gotten significantly more capable over the past decade, and if you don't believe me, go use an Athlon 64 X2 for a bit (the cream of the crop from 2005) and see how that compares to today's processors. Poorly.

CydeWeys · 10 years ago
And here's a representative comparison if you want to see actual benchmarks that measure real work that processors can perform:

1. THEN: AMD Athlon 64 X2 3800+ http://www.cpubenchmark.net/cpu.php?cpu=AMD+Athlon+64+X2+Dua...

2. NOW: Intel Core i7-5820K http://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7-5820K+...

Note that the modern one is 13 times faster, and came out about nine years later. Sounds like Moore's Law is alive and well to me.

scholia · 10 years ago
This is true. However, if you look at the chips in average consumer laptops, I don't think they have increased in performance in line with Moore's Law.

Today's consumers are being sold cheap laptops that often have Celeron and Pentium chips, which are branded versions of the Atom design. Most of them are slower than an Intel Core 2 Duo E4400 from 2007. The big difference is that the E4400 had a TDP of 65W, where a modern Atom is more like 8W.

That dramatic reduction in TDP has been driven by ultra-thin laptop designs, which has also affected faster Core designs, especially the 4.5W Core M.

So yes, a lot of consumers are getting laptops that haven't increased dramatically in performance, and may have declined (as rated by Passmark benchmarks (1)). But they are getting thinner laptops with better battery life, and they are paying a lot less for them (maybe $200).

(1) Real-life performance is also affected by things like built-in H264 decoding, hardware acceleration in operating systems etc.

phaemon · 10 years ago
CydeWeys is correct. To quote a previous post of mine:

----------------------

The fastest CPU in 2005 was (I think) the mighty AMD Athlon 64 FX-57, which PassMark rates at: 731.

Meanwhile, in 2015, we have the Intel Core i7-5930K which scores: 13,638.

--------------------

I really have no idea where this crazy idea that 2005 was the pinnacle of consumer CPU power comes from...

tormeh · 10 years ago
When people say things like that, they usually bemoan the radical slowdown in single-thread performance improvements. For some problems that's all that matters, and the people that have such problems are understandably disappointed.

Moore's law is alive and well (at least for now) for low-power and multicore, but for single-thread desktop-class chips things have slowed down a lot.

edmccard · 10 years ago
>I really have no idea where this crazy idea that 2005 was the pinnacle of consumer CPU power comes from...

As 'tormeh said, single-threaded performance has not improved as much since 2005. For example, the Passmark single-thread rating for the Athlon 64 FX-57[1] is 832 and for the i7-5930K[2] is "only" 2081.

[1] http://www.cpubenchmark.net/cpu.php?cpu=AMD+Athlon+64+FX-57

[2] https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7-5930K...
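
To make the gap concrete, here are the ratios implied by the Passmark figures quoted in this subthread (the numbers themselves come from the comments above):

```python
# Passmark figures quoted above: AMD Athlon 64 FX-57 (2005) vs Intel i7-5930K (2015).
single_2005, single_2015 = 832, 2081      # single-thread ratings
overall_2005, overall_2015 = 731, 13638   # overall ratings

print(f"single-thread gain: {single_2015 / single_2005:.1f}x")  # ~2.5x in a decade
print(f"overall gain:       {overall_2015 / overall_2005:.1f}x")  # ~18.7x in a decade
```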

knowaveragejoe · 10 years ago
I have little experience with hardware but I've often wondered about this. Chip manufacturers seem to be trying to fit more and more into an amount of space that continues to shrink. Why not stay at, say, 1x1" and fill that space out? I'm sure there's a good reason why not, heat or density of the connections or somesuch. I'd love a more thorough explanation however, if anyone knows.
akiselev · 10 years ago
The smaller your transistors get, the more errors you get per square inch of silicon wafer, so as manufacturing gets more and more expensive you have to do a lot of tricks to make ends meet.

Tiny errors that are within a single processor can usually be fixed by disabling some cores or features and downgrading the SKU (i7 sold as i5). However, big errors are extremely costly and a few can wipe out the yield and profitability on a wafer.

If you have, for example, two 1x1 in processors on a wafer and a scratch stretching across two adjacent corners of those processors, you lose both. If the scratch is diagonal and at the boundary between three or four processors, you lose them all. If, however, those 1x1 in units are split into 0.5x0.5 in ones, even your worst errors are unlikely to destroy the entire 2 sq in surface.

The difference in size isn't as drastic as in my example, but many other factors impact yield, and chip size is one of the most impactful (everything else being equal).
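
This effect is usually captured with the classic Poisson yield model, Y = exp(-A·D). A sketch with a made-up defect density, comparing the two die sizes from the example:

```python
import math

def poisson_yield(die_area_cm2, defects_per_cm2):
    """Fraction of good dies: Y = exp(-A * D) under the Poisson defect model."""
    return math.exp(-die_area_cm2 * defects_per_cm2)

d = 0.5  # defects per cm^2 (illustrative, not a real process number)
big = poisson_yield(6.25, d)        # one 1x1 in die ~ 6.25 cm^2
small = poisson_yield(6.25 / 4, d)  # the same silicon as four 0.5x0.5 in dies

print(f"1x1 in die yield:     {big:.0%}")    # a few percent of dies survive
print(f"0.5x0.5 in die yield: {small:.0%}")  # far more of the smaller dies survive
```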

pkaye · 10 years ago
Die size influences yield, which immensely impacts how much money you make, so you make things as small as feasible. A smaller die means more chips per wafer, and a smaller die means less chance of some contamination making a particular die defective.
yk · 10 years ago
Because they want to fit as many dies as possible onto a wafer. The cost of lithography is largely per wafer, so if they can cut more chips from a single wafer - that is, if their chips are smaller - they can increase their profit per wafer.
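
A rough sketch of that economics, using the standard dies-per-wafer approximation and an illustrative wafer cost (not a real fab quote). Note that smaller dies more than quadruple the count when the side is halved, because less area is wasted at the wafer edge:

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_side_mm):
    """Standard approximation: gross area term minus an edge-loss correction."""
    r = wafer_diameter_mm / 2
    die_area = die_side_mm ** 2
    return int(math.pi * r**2 / die_area
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area))

cost_per_wafer = 5000  # illustrative wafer cost, not a real fab quote
for side in (20, 10):
    n = dies_per_wafer(300, side)  # 300 mm wafer
    print(f"{side}mm die: {n} dies, ~${cost_per_wafer / n:.2f} each")
```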
mpweiher · 10 years ago
That, and smaller geometries tend(ed) to get faster (the electrons have to travel shorter distances) and consume less power to boot. Win win win.

This is why Intel used to beat out much better processor architectures simply by having enough money to always be at least 1 fab generation ahead.

sliverstorm · 10 years ago
Defect density is relatively fixed. If you have very large die, defects will take out all your chips on the wafer and leave you with zero yield. If you have very small die, you will only lose as many die as you have defects.
kurthr · 10 years ago
If you want actual industry discussion, I recommend SemiWiki articles 1-5 by Scotten Jones. https://www.google.com/url?q=https://www.semiwiki.com/forum/... (my apologies on the link, but the direct one I tried didn't work without login) The issue isn't Moore's law (either speed or #transistors), the issue is cost per transistor. If that doesn't scale, we aren't going to pay double the price to get 2x more cores. Power/cooling and the increasing cost of fabs have also put a dent in things.
max-a · 10 years ago
Do you know more sources where I can read about IC manufacturing? Gwern's article [0] spurred my curiosity about the topic. EE senior here.

[0]http://www.gwern.net/Slowing%20Moore%27s%20Law

hga · 10 years ago
While it's dated (e.g. just prior to what people are saying are the heydays of CMOS, which matches my vague memory), I found this 1981 book invaluable back when I read it in the early '80s: http://www.amazon.com/Microelectronics-Revolution-Tom-Forest...

It covers in at least a little detail all the generations of semiconductors; e.g. there was a great table showing which companies were big in each. As I recall, back then TI was the only survivor, and, surprise, TI is still pretty strong as I understand it. Its discussions and illustrations of yield, what akiselev discusses here https://news.ycombinator.com/item?id=10286735 were particularly useful.

I wouldn't recommend it today (and probably didn't return to it after the '80s) except that it's pretty cheap used and will cover lots of stuff that's not so generally well known now.

modeless · 10 years ago
Although it requires watching, I'm a fan of https://youtu.be/NGFhc8R_uO4

DannoHung · 10 years ago
Are gallium based chips completely non-viable or something? I thought they were the next big thing in chip materials that would let us get past the Moore's law restrictions in silicon?
kragen · 10 years ago
According to the USGS, gallium arsenide use is higher than ever, but it remains niche, because it's just a minor tweak to the silicon bandgap structure, especially for bipolar circuits, where people have been fabricating high-performance CPUs in it since the 1980s. It doesn't even work adequately for CMOS, but even if it did, it wouldn't solve the problems with Dennard scaling.

If you're looking for exotic materials to bail us out, it might be a little less hopeless to look to diamond or to high-temperature semiconductors.

vjoshi · 10 years ago
I thought (and forgive me if I'm wrong, not a big area for me)... but Dennard scaling fell apart circa 2004? Ah, unless that's what you mean, about not being able to scale oxide thickness / voltage etc.
sliverstorm · 10 years ago
They are really, really expensive to make compared to silicon, and I want to say they run hotter (but my memory is fuzzy).

Silicon has never really been the most ideal material for transistors, but it has always been so much cheaper that it's the most cost-effective.

The joke about GaAs is that it will always be the material of tomorrow. (They've been using it in lab settings for decades)

vjoshi · 10 years ago
Agree, Lidow's championing of gallium (along with many others) is far from going unnoticed. Some people are still however caught up on the cost aspect and feel the economies of scale silicon based chips have are too high to break. Frankly, this is just not true anymore...
CydeWeys · 10 years ago
I don't think the author of this article knows about them, or they didn't fit into the narrative.
CmonDev · 10 years ago
Searched for 'parallel' and 'core' on the page - no matches found.