So far Gelsinger's ambitious roadmap has worked out. Goodbye MBA mentality and back to Grovian execution and an engineering-centric culture.
Intel made a huge error when they decided to delay the DUV -> EUV transition. Now Intel is the first to order ASML’s EXE:5200 and push High-NA. PowerVia and RibbonFET are what Intel is going to use. Meanwhile, Intel's EUV 3nm chips are coming out this year.
I realize the whole MBA bad idea is popular right now on HN but it's worth remembering that Intel's struggles with 10nm were the result of too much engineering ambition rather than too little. The original 10nm was very VERY ambitious and if it had actually worked and had hit volume production on anything resembling the original timeline Intel would have essentially had a half decade worth of a process advantage over its competitors. Unfortunately, letting engineering go nuts can sometimes screw you just as much as letting the out of touch bean counters rule. Engineering based businesses have to manage both the engineering and the business side. Failing to do that means disaster.
You can't accurately describe Intel's 10nm disaster without mentioning that they were making a huge bet that EUV wasn't going to be ready anytime soon so they were trying everything they could to keep up with Moore's Law except using EUV. But some of the things Intel planned for 10nm turned out to be harder to get working correctly than EUV.
It wasn't simply the engineers going nuts trying to make a huge jump all at once. They were taking a bunch of unique risks in order to follow a different path from the rest of the industry. If Intel had planned to follow a similar EUV timeline to the rest of the industry, they would have been subject to the same risks as everyone else regarding EUV and probably could have maintained a moderate lead throughout that transition, with a worst-case outcome being that they would be part of an industry-wide failure to keep up with Moore's Law if EUV didn't work out. Instead, they ended up years behind.
The MBA idea is correct in this case. Intel had a board and leadership that led it to pursue an ambitious process without enough urgency or resources, or the ability to course-correct quickly enough.
The most important questions must be discussed at the C-suite level. Intel didn't have enough people there to make decisions.
> Engineering based businesses have to manage both the engineering and the business side. Failing to do that means disaster.
Top engineers can learn to manage business at the highest levels. Business leaders can't learn enough engineering to manage engineering companies.
A business where one unit thinks it can make decisions in a vacuum is going to suffer regardless of the particular field of training that the leadership comes from.
If Engineering and Business work _together_ it doesn't matter who's in the lead position.
> Goodbye MBA mentality and back to Grovian execution and engineering centric culture.
> Intel made a huge error when they decided to delay the DUV -> EUV transition.
Just as an FYI, that error was made when the CEO was an engineer, not an MBA.
And I find it amusing for folks here to cheer Grovian culture. Andy Grove's management style had all of what people criticize about Amazon's culture, on steroids. Indeed, I believe Jeff Bezos took some of the 14 leadership principles from Grove (who was CEO at that time).
> And I find it amusing for folks here to cheer Grovian culture. Andy Grove's management style had all of what people criticize about Amazon's culture, on steroids.
People tend to forgive a leader's flaws, including really terrible flaws, if the leader seems to be producing results that people like.
> Just as an FYI, that error was made when the CEO was an engineer, not an MBA.
Hm... no? Most of the EUV roadmap decisions were likely finalized c. 2009-2010 when they bet on double patterning for what would become the 14nm node. That was under Paul Otellini, who was an MBA who climbed the ladder in Intel via the sales and marketing organization.
The leadership principles are nonsensical because (no joke) they occur in pairs labelling extrema of various dimensions, the point being that _every_ activity can be described as lying on some point in 7-dimensional bullshit space, and that point can be either characterized as close to or far from some leadership principle.
I don't know how else to describe it, but it's just a justification system for a deep hierarchy to belittle the workers.
I say this having worked years in the inner engineering sanctum of Apple where none of this bullshit existed (both during and after the Steve Jobs era).
The Intel roadmap is a nice work of optimistic investor-targeted marketing, but I have no idea how to interpret it.
I can see in the roadmap slide that 10A "arrives" in late 2027.
However, the Intel roadmap also shows both intel4/3 and intel 20A/18A present from the start of 2023. The article mentions that 18A/20A nodes have been in "some form of production since 2023". Meanwhile, current Intel chips are still partially outsourced to TSMC, and Intel has promised zetta-scale systems by 2027.
Why aren't these companies investing more into 3-dimensional chips instead of trying to squeeze more on the same 2-dimensional die when we're so close to hard atom-size limits?
A 3cm x 3cm x 3cm cube could fit a hell of a lot of transistors and gates even if it is 20nm.
Cooling and power supply. Plus it's really hard to have more than one transistor level: you can try to put transistors on both sides of a die or you can take separate dies, grind some of them really thin, and then put them on top of each other.
I’m not the one who downvoted you, but while stacking in 3D might work for things like ram and storage, it doesn’t help the same way with cpu. They aren’t trying to get the overall die size smaller per se, but rather the node size. Generally speaking, smaller transistors = lower energy consumption + higher performance for the same number of transistors.
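To put rough numbers on the cooling objection: a back-of-envelope sketch of what a solid 3cm logic cube would dissipate if every stacked layer ran at planar-CPU power density. All figures here (100 W/cm², ~100 µm per layer) are illustrative assumptions, not measurements.

```python
# Back-of-envelope: why a solid 3 cm cube of logic is a cooling nightmare.
# Assumed numbers: ~100 W/cm^2 planar power density, ~100 um per die layer.

power_density_per_layer_w_cm2 = 100.0   # assumed planar power density
layer_thickness_cm = 0.01               # ~100 um per stacked layer (assumed)
cube_edge_cm = 3.0

layers = cube_edge_cm / layer_thickness_cm          # 300 layers
layer_area_cm2 = cube_edge_cm ** 2                  # 9 cm^2 each
total_power_w = layers * layer_area_cm2 * power_density_per_layer_w_cm2

surface_area_cm2 = 6 * cube_edge_cm ** 2            # only the outside sheds heat
flux_w_cm2 = total_power_w / surface_area_cm2

print(f"{layers:.0f} layers, {total_power_w / 1000:.0f} kW total")
print(f"{flux_w_cm2:.0f} W/cm^2 through the surface")
```

Even with these conservative assumptions the cube would need to shed hundreds of kilowatts through a few tens of cm² of surface, orders of magnitude beyond what any known cooling can handle.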
Intel's five-nodes-in-four-years plan is extremely aggressive and ambitious. They are trying to pull off something that I believe is not feasible. Lots of moving parts in parallel.
I worked in a Fab for a year and the complexity is mind blowing. I don't see how they can execute to build those nodes and get the yields under control in such a short timeframe.
The step from 7 to 5nm shrunk gate pitch to 85% instead of the suggested 71%, a mere 19% of marketing hype.
But the step from 2 to 1nm shrunk gate pitch to 93.3% instead of the suggested 50%, a large 87% marketing hype.
Or viewed as a shrink by only 6.7%, it's a whopping 600+% hype.
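This hype arithmetic can be reproduced directly from the gate-pitch figures quoted in this thread (a small sketch; the pitch numbers are taken as given):

```python
# Reproducing the "marketing hype" arithmetic from the gate pitches quoted
# in this thread. Hype = actual linear shrink / shrink the node name implies.

gate_pitch_nm = {"7nm": 60, "5nm": 51, "3nm": 48, "2nm": 45, "1nm": 42}

def hype(frm, to):
    actual = gate_pitch_nm[to] / gate_pitch_nm[frm]  # e.g. 51/60 = 0.85
    implied = float(frm[:-2]) and float(to[:-2]) / float(frm[:-2])  # e.g. 5/7
    return actual, implied, actual / implied - 1     # excess over implied shrink

for a, b in [("7nm", "5nm"), ("2nm", "1nm")]:
    actual, implied, h = hype(a, b)
    print(f"{a} -> {b}: pitch shrinks to {actual:.1%}, "
          f"name implies {implied:.1%}, hype {h:.0%}")
```

This recovers the 19% and 87% figures above.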
However note that the feature size barely shrinks between 2 nm and 1nm. I'm not sure what size Intel will go with, but offhand this might be close to the limits of silicon as we know it. I'm not sure where things will get pushed in the quest for further progression.
Also of note, IIRC some of AMD's presentations around when Ryzen launched mentioned that cache/memory (and maybe some other features?) on CPUs weren't scaling down effectively past 7 ~ 5 nm, which is one of the reasons GPUs had the cores and then external memory controllers. I know automakers and a bunch of other consumers would like bulk (cheap) products made on modern wafers (300mm) with a semi-modern process that's super inexpensive - particularly power ICs, which would prefer lower leakage even if it means a slower response.
Well what is 1nm measuring? The diameter of the sphere in which the combined gray brain matter of the marketing department could be squeezed into? Lithography wavelength?
In principle 7nm has 57, 64 or 76nm gate pitch. Routing is quite inefficient in 57nm gate pitch, so 64nm is used most often. As far as I can see in the design rule manual there's no possibility for 60nm pitch. 40nm metal pitch for colored metals is correct.
Standard cell height pitch has a similar story: 270nm height is possible but means high parasitics-induced RC delay. For good performance, 300nm or higher height pitches are used.
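As a rough illustration of how those pitches turn into cell area and density (all numbers here are illustrative assumptions, not from any real PDK; the 3-pitch NAND2 width and 70% utilization in particular are guesses for the sketch):

```python
# Rough standard-cell area math with the 7nm-class numbers discussed above:
# cell width is (transistor columns) * (gate pitch), cell height is the
# height pitch. Illustrative only, not taken from any design rule manual.

gate_pitch_nm = 64      # the pitch most often used for routability, per above
cell_height_nm = 300    # taller cell trades area for lower RC delay

def cell_area_um2(poly_columns):
    """Area of a standard cell spanning `poly_columns` gate pitches."""
    width_nm = poly_columns * gate_pitch_nm
    return width_nm * cell_height_nm / 1e6  # nm^2 -> um^2

# Assume a 2-input NAND spans ~3 gate pitches (2 active + spacing).
print(f"NAND2 area ~ {cell_area_um2(3):.3f} um^2")

# Density estimate: NAND2 has 4 transistors; assume 70% placement utilization.
nand2_per_mm2 = 0.7 * 1e6 / cell_area_um2(3)
print(f"~{4 * nand2_per_mm2 / 1e6:.0f} M transistors/mm^2 at 70% utilization")
```

The result lands in the tens of millions of transistors per mm², the right order of magnitude for a 7nm-class logic process, which is about all a sketch like this can show.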
These tables miss a lot of context though. TSMC N7 drawn gate length is more like 20nm, not 60nm. The pitch includes diffusion (source/drain) and contacts. The biggest limit is interconnect (vias are impossible to shrink as we make them now).
This is such a ridiculous way to do, well anything. Especially something that’s supposed to be based on technology and science. It’s how you would sell infomercial products or gas station virility pills.
Why do you care? Are you designing a standard cell library for it?
Seriously. I don’t understand why so many people care that the “nm” number is a marketing term. The chip manufacturing processes are now often improving in ways that measuring the minimum pitch doesn’t really capture. So they bump down the number whenever it improves in some other way. It’s that simple.
I have designed a standard cell library. I don’t care what number they use in marketing. If you actually want to know the performance/characteristics/dimensions of the process there are so many other numbers you need to know and you’d refer to the actual documentation.
As an end-user customer, all I need to know is that “3nm” is slightly better in performance/density/power than “4nm”, and the label they assign achieves that goal.
Once again, the "process size" is a marketing number. There's no actual feature measuring 1nm.
The actual transistors are around 40nm across [1]. They change the geometry of the features, either the shape of the transistor gate or even the method of power delivery. All really incredible features and worthy of awe. Just not actually making transistors with finger-countable number of atoms.
I have often heard that these are “just marketing.” But humor me - where does the “1nm” even come from? What is the calculation that ends up spitting out 1nm, even if its invalid?
If you looked at the chip sideways we would be shooting way beyond femtometers, but if you happen to look at it from above then it's only 1 nm. Don't you think it's strange that the manufacturer is using metrics that make them look bad?
I said this yesterday in a different thread, but Intel needs to jettison its design business and just be a foundry. Pulling a reverse AMD. I don’t think they’ll be able to meaningfully acquire customers for their foundry business unless they split the company. I also think their foundry business is the one thing that could cause the stock to soar, and was historically what gave them their edge. I think there is a lot of demand for another possible fab outside of TSMC[1] because of the risk China poses to Taiwan but only if there is a process advantage. Right now TSMC has proven itself to be stable and continues to deliver for its customers. Intel is playing catch up, and sort of needs to prove it’s dedicated to innovating its fab business. I don’t think competing with its potential customers is a way of doing that.
[1] I know Samsung has foundry services (among others) but I don’t think they have the leading node capabilities that really compete with TSMC.
Story time: I worked on Google Fiber. I believed in the project and it did a lot of good work but here, ultimately, was the problem: leadership couldn't decide if the future of Internet delivery was wired or wireless. If it was wireless then an investment of billions of dollars might be made valueless. If it was wired and the company pursued wireless, then this would also lose.
But here's the thing: if you decide to do neither then you definitely lose. But, more importantly, no executive would lose their head from making a wrong decision. It's one of these situations where doing anything, even the wrong thing, is better than doing nothing because doing nothing will definitely lose.
Intel's 10nm process seemed like a similar kind of inflection point. Back in the mid-2010s it wasn't clear what the future of lithography would be. Was it EUV? Was it X-ray lithography? Something else? Intel seemed unable to commit. I bet no executive wanted to put their ass on the line and be wrong. So Intel loses to ASML and TSMC but it's OK because all the executives kept getting paid.
I forget the exact timelines but Intel's 10nm transition was first predicted in 2014 (?) and it got delayed at least 5 years. Prior to this, Intel's process improvements were its secret weapon; it constantly stayed ahead of the competition. There were hiccups though, most notably the Pentium 4 transition in the gigahertz race (only saved by the Pentium 3 -> Centrino -> Core architecture transition) and pushing EPIC/Itanium, where they got killed by Athlon 64 and its x86_64 architecture.
I see the same problems at Boeing: engineering-driven companies get taken over by finance leeches. This usually follows an actual or virtual monopoly, just as Steve Jobs described [1].
I used to work with an old Sun Microsystems dude, he was an executive at the company I was at. We used to have these meetings every week and he ended up attending one. We had been trying to come to a conclusion for weeks on a specific piece of tech. He stopped the meeting dead in its tracks and said we're going to make a decision right now, if it's wrong, we'll learn from it, if it's right, awesome. Not making this decision is more costly than making the wrong decision.
I just remember thinking, finally, someone with some authority is getting this ball moving.
Reminds me of a story that happened at my company.
We needed and had purchased a rather expensive database software license; however, we didn't have the hardware yet to run that database. The guys doing hardware spent MONTHS debating which $10k piece of hardware they'd pick to run the DB. The DB license cost? Something like $0.5 million.
As one engineer said to me "I don't care what hardware you guys get, purchase them all! We are wasting god knows how much money on a license we can't use because we don't have the hardware to install it on!"
When I was a hardware product manager, there were a ton of decisions people asked for and a heck of a lot of them basically didn't matter. If we needed to course correct, we'd course correct. The main thing was not sitting around twiddling our thumbs for weeks or longer.
A crap boss is one that doesn't make choices, a good boss is one that does, and a great boss is one that makes sure the best of the possible choices is made given the data at hand.
Having careers and heads on the line for making the wrong choice just pushes people toward paralysis.
The Google Fiber project was always meant to push the carriers into competition. Google knew that if they didn't launch Google Fiber, none of their other ventures, or the internet as a whole, could be as successful. Google paid big money for YouTube and the plan was always to turn it into the service it is today. At the time, there were also worries whether the carriers would restrict services (aka net neutrality) or if they would charge by GB. Launching Google Fiber made it such that the carriers had to start competing and upgrade their infrastructure.
If it wasn't for Google Fiber, I'm certain that we'd be stuck with 20mbps speeds, the cable/DSL monopoly, and we wouldn't have the likes of the OTT services and the choices that we have today. Or at least it would have been delayed by quite a bit.
I worked for a company that was an equipment vendor for Google Fiber and other service providers.
Plenty of countries have better (faster and/or cheaper) broadband options than most of the US, without having any Google involvement. Competition (or government enforced requirements and price caps) are what's needed, Google Fiber had a bit more of an incentive than most for aiming to undercut their competitors but ultimately I think you're overstating their importance.
That seems a very US-centric way of seeing the internet evolution.
The rest of the world moved to higher speeds and didn't count GBs (except on mobile) decades ago, and I mean decades.
In 2004 in Italy I had a 20 Mbit/s fiber connection, and I had 100 Mbit/s a few years later. I still remember pinging 4, literally 4 ms, on Counter-Strike 1.6.
And Google Fiber started much later, in 2010. So I don't see any impact from Google Fiber on the internet as a whole; maybe it pushed US carriers to not do worse (internet in the US is not really that amazing in terms of speeds and latency).
One thing that I noticed is that while speeds increased in the decades since then, latency became worse. Even with the fastest connection I can use, I rarely if ever ping below 30 ms on the very same Counter-Strike 1.6 or newer versions.
> If it wasn't for Google Fiber, I'm certain that we'd be stuck with 20mbps speeds
Are you trying to say that Google Fiber influenced the behaviour of incumbent telcos in different regions? If same region, sure, but the size of area served by Google Fiber is/was tiny.
> I see the same problems at Boeing: once engineering-driven people companies get taken over by finance leeches. This usually follows an actual or virtual monopoly, just as Steve Jobs described [1].
There is a nice mental framework for viewing such things. It has a bit of a religious origin, but it effectively explains and describes what you're seeing (I'm viewing it through an atheistic lens). I mean, the egregore.
This is the natural life cycle of an egregore! Which is explained by having two groups: those that serve the purpose the egregore was created for (engineers, people that provide value), and those that serve the egregore itself (financials, people that extract value). Both these groups need to exist for a healthy entity. But the balance (seemingly) always tips - the egregore eventually chooses the group that serves the egregore to lead - and when that happens, the original vision is often lost, and the company loses customer trust by altering the relationship the customer has with the egregore (how much value the customer extracts from the egregore vs. how much value the egregore extracts from the customer).
> leadership couldn't decide if the future of Internet delivery was wired or wireless.
Weird way to think of this problem (IMO). I'd think there would always be a mixed wired and wireless world.
Even if customers don't end up using wired connections to their homes, you'd still need a wired connection to the antennas servicing a home, neighborhood, or apartment building. That's where a lot of telcos today are making their money: not to the customer, but to T-Mobile or AT&T as they put in a fiber line directly to the antenna towers.
And even if Google wanted to be the end-to-end ISP for someone, they'd benefit from a vast fiber network even if they later decided wireless was best, because they'd already have fiber wherever they'd need their wireless antennas.
The last mile is expensive. Even hooking up the customer to the line running outside their house is expensive. I've seen different customers estimate this at anywhere between $2000 and $5000 per premises. This assumes ~40% customer take-up rate so with more competitors, the cost to each goes up. It's one reason why overbuilds make no sense and municipal broadband is the best model for last mile Internet delivery.
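The take-rate effect on overbuilds can be sketched in a few lines (the $3500 midpoint of the range above and the assumption that two overbuilders split take-up evenly are illustrative):

```python
# Sketch of the last-mile economics above: per-subscriber build cost rises
# as take-rate falls, and overbuilders split the same pool of subscribers.
# $2000-$5000 per premises and ~40% take-rate come from the comment above.

def cost_per_subscriber(cost_per_premises, take_rate):
    # You pass every home, but only take_rate of them actually subscribe.
    return cost_per_premises / take_rate

base = cost_per_subscriber(3500, 0.40)           # single provider
duopoly = cost_per_subscriber(3500, 0.40 / 2)    # two overbuilders split take-up

print(f"one provider : ${base:,.0f} per subscriber")
print(f"two providers: ${duopoly:,.0f} per subscriber")
```

Doubling the number of overbuilders doubles the effective cost per subscriber, which is the argument against overbuilds in a nutshell.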
Wireless bandwidth keeps going up. Wireless is already >1Gbps inside a building. What if, instead of spending $5000 per house, you could use tightbeam wireless or a dense cellular network with >1Gbps bandwidth? You may have spent billions on a network that would take decades to amortize and have it be made worthless by wireless last-mile delivery.
> Intel seemed primed to dominate the chip industry as it transitioned into the era of Extreme Ultraviolet Lithography (EUV). The company had played a pivotal role in the development of EUV technology, with Andy Grove’s early investment of $200 million in the 1990s being a crucial factor.
I know nothing, but it felt like intel paid the price of being the first. They picked something hard and pricey.. and it didn't pan out in time, allowing other competitors to catch up and adapt to markets (mobile) nicely.
> I believed in the project and it did a lot of good work but here, ultimately, was the problem: leadership couldn't decide if the future of Internet delivery was wired or wireless. If it was wireless then an investment of billions of dollars might be made valueless. If it was wired and the company pursued wireless, then this would also lose.
This one is particularly amusing because the difference is primarily a business distinction and not a technical one.
Here's how your tablet gets internet via fiber: There is a strand of fiber that comes near your house and then you attach an 802.11 wireless access point to it. Every few years the latter has to be replaced as new standards are created.
Here's how your tablet gets internet via 5G: There is a strand of fiber that comes near your house and then the telco attaches a cellular wireless access point to it. Every few years the latter has to be replaced as new standards are created.
They should have just built the fiber network and put cell sites on some of the poles. Then you sell fiber to anybody who buys it and cellular to anybody who buys it and you don't have to care which one wins.
You start with product people, but if you do well enough, making your product better doesn't really move the needle any more, so organizations tend to promote...
marketing / sales / operations people. These people are usually pretty good at understanding what the customer wants and so have a decent feel for the product. Perhaps innovation goes down, but the customer is getting what they want. But then, once you saturate the market, sales and marketing are no longer going to move the needle, so you promote...
finance people. They usually don't have a great feel for the product, nor even for what the customer wants, but they understand how to increase revenue and decrease costs, and at this point in the company lifecycle that is what matters most. The risk is that you are in a competitive space where competitors are willing to jump on any product stumble. Often companies get stuck at this stage and stagnate, but usually they are so large and entrenched they keep doing just fine anyway.
One difference I'd point to is that Intel was doing "fine" not committing to future lithography. I put that in quotes because clearly it wasn't a fine plan over the long run, but not spending money on the future is a fine plan in the short/medium term. AMD had been struggling for years and Intel continued to handily beat them. ARM processors weren't a threat at the time either. Intel certainly had the better part of a decade where they weren't committing to future lithography and doing fine.
Before someone says, "but they lost mobile to ARM during that period," lithography isn't why they lost mobile to ARM. Apple was using TSMC's 16nm process in their September 2016 iPhone while Intel started shipping 14nm processors 2 years earlier. Mobile chose ARM when Intel wasn't behind on lithography.
With Google Fiber, not choosing had immediate repercussions. With Intel, the repercussions took the better part of a decade to manifest. Google just decided it didn't really care about the home internet business. No one at Google could say "yea, we're not rolling out wired or wireless home internet and the business is booming." Intel didn't decide that they were exiting the processor business, but their processor business was doing "fine" without this decision being made. Intel could say, "we aren't investing in future lithography and the business is booming anyway. Maybe future lithography is just a big waste of money."
You're correct that not choosing means you lose. However, sometimes it isn't obvious for a while. Google Fiber's lack of decision had obvious, immediate results and you couldn't delude yourself otherwise. Intel could delude itself. Execs could write reports about how they were still ahead of the competition (they were) and how they weren't wasting money on unproven technology. Fast forward a decade and they're not fine, but it took a while for that to manifest.
Plus, if Apple hadn't helped push TSMC forward so much, would Intel be in quite as bad a situation? Qualcomm has been happy to just package together ARM reference designs with their modems, and it's really their poor performance compared to Apple that's pushed them forward. While Android users on HN might be buying Snapdragon 8 series processors, the vast majority of Android devices aren't using high-end ARM cores. The vast majority of the market for high-end ARM cores is Apple. If Apple hadn't made a long-term commitment to TSMC for 2016-2021, would TSMC have pushed as hard on EUV? It's a lot easier to invest when you have a guaranteed customer like TSMC had in Apple.
If Apple hadn't pushed performance so strongly, would we have seen as much EUV investment as quickly? It's unlikely it would be pushed by the Android ecosystem where most processors are low-end. TSMC serving Apple meant EUV investment. Once Apple was shipping extremely fast processors, Qualcomm and others wanted to be able to get to at least 50-70% of what Apple was offering (so there were more buyers). Once it was available, AMD could use it to push hard against Intel. Once there were more buyers, Samsung wanted to make sure that its fabrication business was at least in the ballpark.
But if Apple hadn't been focused on taking a strong performance lead, it might have been another 5+ years before Intel's lack of decision came back to haunt it. If it had taken 12-17 years instead of 7-9 years for others to put the screws to Intel, they would have basked in its profits for a long time as its execs were touted as having amazing insight. Of course: you're right. Eventually, Intel would have gotten its comeuppance. But Intel could have pretended it didn't need to invest in the future for a long time. By contrast, when Google didn't make a decision on wireless or wired, that was just the end of expanding that business.
> Top engineers can learn to manage business at the highest levels. Business leaders can't learn enough engineering to manage engineering companies.
In both cases, the loudest voices tend to come from people who haven't experienced either side and see only the tip of the iceberg.
> Just as an FYI, that error was made when the CEO was an engineer, not an MBA.
And not just any engineer, an engineer who specifically came from the fab side of things and not the chip design side of things.
> Meanwhile, current Intel chips are still partially outsourced to TSMC
Isn’t that just their GPUs?
Did so much harm to the industry and the quality of software.
> I don't see how they can execute to build those nodes and get the yields under control in such a short timeframe.
Best of luck to them.
Really only 2. Because Intel 7 was so late that it just happened to be close to ready when they announced 5 in 4 years.
So it’s 2 in 4 years. In reality it’s more like 2 in 5 years, because 20A won’t have widespread products so soon.
Marketing, Gate pitch, Metal Pitch, Year
7nm, 60nm, 40nm, 2018
5nm, 51nm, 30nm, 2020
3nm, 48nm, 24nm, 2022
2nm, 45nm, 20nm, 2024
1nm, 42nm, 16nm, 2026
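One way to read this table is to compare actual area scaling (gate pitch × metal pitch) against the 2x per full node that the names imply - a quick sketch using only the numbers in the table:

```python
# Comparing actual area scaling (gate pitch x metal pitch) against the ~2x
# per node that the marketing names imply, using the table above.

table = [  # (marketing name in "nm", gate pitch nm, metal pitch nm)
    (7, 60, 40), (5, 51, 30), (3, 48, 24), (2, 45, 20), (1, 42, 16),
]

for (n0, g0, m0), (n1, g1, m1) in zip(table, table[1:]):
    actual = (g0 * m0) / (g1 * m1)      # actual area shrink factor
    implied = (n0 / n1) ** 2            # what the names suggest
    print(f"{n0}nm -> {n1}nm: actual {actual:.2f}x, name implies {implied:.2f}x")
```

Each step delivers roughly 1.3-1.6x in area from the pitches alone, while the names imply 2-4x; the gap is made up (if at all) by cell-architecture changes that pitch numbers don't capture.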
A table on https://en.wikipedia.org/wiki/2_nm_process suggests:
'20 angstrom (Intel)' ~= 2 nanometer (industry)
> I don’t understand why so many people care that the “nm” number is a marketing term.
Tbh I used to not care until they started lying about power consumption too.
* iPhones count up
* Ubuntu LTS counts YY.MM, every 24 months in April
* Debian counts toy story characters
* CPU nodes count down
Scroll down a bit for a nice overview.
https://read.nxtbook.com/ieee/spectrum/spectrum_na_august_20...
"lattice parameter of 0.543 nm ... nearest neighbor distance is 0.235 nm"
The actual transistors are around 40nm across [1]. They change the geometry of the features, either the shape of the transistor gate or even the method of power delivery. All really incredible features and worthy of awe. Just not actually making transistors with finger-countable number of atoms.
[1] https://www.wikiwand.com/en/2_nm_process
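Taking the nearest-neighbor distance quoted above at face value, a quick back-of-the-envelope check shows how far from "finger-countable" the atom count actually is:

```python
# How many silicon atoms span a ~40nm transistor, using the
# nearest-neighbor distance quoted above (0.235 nm).
transistor_nm = 40.0
nn_distance_nm = 0.235

atoms_across = transistor_nm / nn_distance_nm
print(round(atoms_across))  # -> 170
```

Roughly 170 atoms across, so hundreds of atoms per dimension even on a "2nm" process.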
Considering that the order of magnitude of an atom’s radius is 1 Å (or 100 pm, or 100 000 fm), I really doubt any chip is thinner than 1 fm.
[1] I know Samsung has foundry services (among others) but I don’t think they have the leading node capabilities that really compete with TSMC.
https://finance.yahoo.com/news/intel-splits-itself-two-aid-1...
But here's the thing: if you decide to do neither, then you definitely lose. More importantly, though, no executive loses their head for doing nothing, the way they would for making the wrong call. It's one of those situations where doing anything, even the wrong thing, is better than doing nothing, because doing nothing will definitely lose.
Intel's 10nm process seemed like a similar kind of inflection point. Back in the mid-2010s it wasn't clear what the future of lithography would be. Was it EUV? Was it X-ray lithography? Something else? Intel seemed unable to commit. I bet no executive wanted to put their ass on the line and be wrong. So Intel loses to ASML and TSMC, but it's OK because all the executives kept getting paid.
I forget the exact timelines, but Intel's 10nm transition was first predicted in 2014 (?) and it got delayed at least 5 years. Prior to this, Intel's process improvements were its secret weapon. It constantly stayed ahead of the competition. There were hiccups though, most notably the Pentium 4 transition in the Gigahertz race (only saved by the Pentium 3 -> Centrino -> Core architecture transition) and pushing EPIC/Itanium, where they got killed by Athlon 64 and its x86-64 architecture.
I see the same problem at Boeing: once-engineering-driven companies get taken over by finance leeches. This usually follows an actual or virtual monopoly, just as Steve Jobs described [1].
[1]: https://www.youtube.com/watch?v=tGKsbt5wii0
I just remember thinking, finally, someone with some authority is getting this ball moving.
We needed and had purchased a rather expensive database software license; however, we didn't have the hardware yet to run that database. The guys doing hardware spent MONTHS debating which $10k piece of hardware they'd pick to run the DB. The DB license cost? Something like $0.5 mill.
As one engineer said to me "I don't care what hardware you guys get, purchase them all! We are wasting god knows how much money on a license we can't use because we don't have the hardware to install it on!"
A crap boss is one that doesn't make choices, a good boss is one that does, and a great boss is one that makes sure the best of the possible choices is made given the data at hand.
Having careers and heads depend on not making the wrong choice just pushes toward paralysis.
If it wasn't for Google Fiber, I'm certain that we'd be stuck with 20mbps speeds, the cable/DSL monopoly, and we wouldn't have the likes of the OTT services and the choices that we have today. Or at least it would have been delayed by quite a bit.
I worked for a company that was an equipment vendor for Google Fiber and other service providers.
The rest of the world moved to higher speeds and stopped counting GBs (except on mobile) decades ago, and I mean decades.
In 2004 in Italy I had a 20 Mbit/s fiber connection, and 100 Mbit a few years later. I still remember pinging 4, literally 4 ms, on Counter-Strike 1.6.
And Google Fiber started way later, in 2010. So I don't see any impact by Google Fiber on the internet as a whole; maybe it pushed US carriers to not do worse (internet in the US is not really that amazing in terms of speeds and latency).
One thing I noticed is that while speeds increased in the decades since then, latency became worse. Even with the fastest connection I can use, I rarely if ever ping below 30 ms on the very same Counter-Strike 1.6 or newer versions.
Are you trying to say that Google Fiber influenced the behaviour of incumbent telcos in different regions? If same region, sure, but the size of area served by Google Fiber is/was tiny.
There is a nice mental framework for viewing such things. It has a bit of a religious origin, but it effectively explains and describes what you're seeing (I'm viewing it through an atheistic lens): the egregore.
This is the natural life cycle of an egregore! It can be explained by two groups: those that serve the purpose the egregore was created for (engineers, people who provide value), and those that serve the egregore itself (finance, people who extract value). Both groups need to exist for a healthy entity to exist. But the balance (seemingly) always tips: the egregore eventually chooses the group that serves the egregore to lead. When that happens, the original vision is often lost, and the company loses customer trust by altering the relationship the customer has with the egregore (how much value the customer extracts from the egregore vs. how much value the egregore extracts from the customer).
https://en.wikipedia.org/wiki/Egregore#:~:text=Egregore%20(a....
This pattern comes up; a possible indication of this flip is when the original owners of a company are pushed out, or leave.
from this recent thread, which is rather relevant: https://news.ycombinator.com/item?id=39491863
Weird way to think of this problem (IMO). I'd think there would always be a mixed wired and wireless world.
Even if customers don't end up using wired connections to their homes, you'd still need a wired connection to the antennas servicing a home, neighborhood, or apartment building. That's where a lot of telcos today are making their money: not selling to the customer, but to T-Mobile or AT&T as they put in a fiber line directly to the antenna towers.
And even if Google wanted to be the end-to-end ISP for someone, they'd benefit from a vast fiber network even if they later decided wireless was best, because they'd already have the fiber wherever they'd need their wireless antennas.
Wireless bandwidth keeps going up. Wireless is already >1Gbps inside a building. What if instead of spending $5000 per house, you could use tightbeam wireless or highly cellular network with >1Gbps bandwidth? You may have spent billions on a network that it would take decades to amortize and have it be made worthless by wireless last mile delivery.
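To make the amortization worry concrete, here is a rough sketch of how long it takes to recoup a per-home fiber build from subscription margin. All the numbers are hypothetical assumptions, not Google Fiber's actual economics:

```python
# Hypothetical payback period for a per-home fiber build.
# Both inputs are assumed figures for illustration only.
build_cost_per_home = 5000.0  # build cost per home passed, per the comment above
monthly_margin = 30.0         # assumed margin per subscriber per month

years_to_recoup = build_cost_per_home / (monthly_margin * 12)
print(round(years_to_recoup, 1))  # -> 13.9
```

At those assumed figures the build takes well over a decade to pay back, which is plenty of time for a wireless alternative to undercut it.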
Per the book Chip War, Intel put a lot of money into EUV (going back to the late 1990s):
* https://en.wikipedia.org/wiki/Chip_War:_The_Fight_for_the_Wo...
Per the book, and other sources:
> Intel seemed primed to dominate the chip industry as it transitioned into the era of Extreme Ultraviolet Lithography (EUV). The company had played a pivotal role in the development of EUV technology, with Andy Grove’s early investment of $200 million in the 1990s being a crucial factor.
* https://techovedas.com/intel-lost-decade-5-reasons-why-chip-...
This one is particularly amusing because the difference is primarily a business distinction and not a technical one.
Here's how your tablet gets internet via fiber: There is a strand of fiber that comes near your house and then you attach an 802.11 wireless access point to it. Every few years the latter has to be replaced as new standards are created.
Here's how your tablet gets internet via 5G: There is a strand of fiber that comes near your house and then the telco attaches a cellular wireless access point to it. Every few years the latter has to be replaced as new standards are created.
They should have just built the fiber network and put cell sites on some of the poles. Then you sell fiber to anybody who buys it and cellular to anybody who buys it and you don't have to care which one wins.
marketing / sales / operations people. These people are usually pretty good at understanding what the customer wants and so have a decent feel for the product. Perhaps innovation goes down, but the customer is getting what they want. But then, once you saturate the market, sales and marketing are no longer going to move the needle, so you promote...
Finance people. They usually don't have a great feel for the product, nor even for what the customer wants, but they understand how to increase revenue and decrease costs, and at this point in the company lifecycle that is what matters most. The risk is that you are in a competitive space where competitors are willing to jump on any product stumble. Often companies get stuck at this stage and stagnate, but usually they are so large and entrenched they keep doing just fine anyway.
Before someone says, "but they lost mobile to ARM during that period," lithography isn't why they lost mobile to ARM. Apple was using TSMC's 16nm process in their September 2016 iPhone while Intel started shipping 14nm processors 2 years earlier. Mobile chose ARM when Intel wasn't behind on lithography.
With Google Fiber, not choosing had immediate repercussions. With Intel, the repercussions took the better part of a decade to manifest. Google just decided it didn't really care about the home internet business. No one at Google could say "yea, we're not rolling out wired or wireless home internet and the business is booming." Intel didn't decide that they were exiting the processor business, but their processor business was doing "fine" without this decision being made. Intel could say, "we aren't investing in future lithography and the business is booming anyway. Maybe future lithography is just a big waste of money."
You're correct that not choosing means you lose. However, sometimes it isn't obvious for a while. Google Fiber's lack of decision had obvious, immediate results and you couldn't delude yourself otherwise. Intel could delude itself. Execs could write reports about how they were still ahead of the competition (they were) and how they weren't wasting money on unproven technology. Fast forward a decade and they're not fine, but it took a while for that to manifest.
Plus, if Apple hadn't helped push TSMC forward so much, would Intel be in quite as bad a situation? Qualcomm has been happy to just package together ARM reference designs with their modems, and it's really only their poor performance compared to Apple that pushed them forward. While Android users on HN might be buying Snapdragon 8 series processors, the vast majority of Android devices aren't using high-end ARM cores. The vast majority of the market for high-end ARM cores is Apple. If Apple hadn't made a long-term commitment to TSMC for 2016-2021, would TSMC have pushed as hard on EUV? It's a lot easier to invest when you have a guaranteed customer like TSMC had in Apple.
If Apple hadn't pushed performance so strongly, would we have seen as much EUV investment as quickly? It's unlikely it would be pushed by the Android ecosystem where most processors are low-end. TSMC serving Apple meant EUV investment. Once Apple was shipping extremely fast processors, Qualcomm and others wanted to be able to get to at least 50-70% of what Apple was offering (so there were more buyers). Once it was available, AMD could use it to push hard against Intel. Once there were more buyers, Samsung wanted to make sure that its fabrication business was at least in the ballpark.
But if Apple hadn't been focused on taking a strong performance lead, it might have been another 5+ years before Intel's lack of decision came back to haunt it. If it had taken 12-17 years instead of 7-9 years for others to put the screws to Intel, it would have basked in its profits for a long time while its execs were touted as having amazing insight. Of course, you're right: eventually, Intel would have gotten its comeuppance. But Intel could have pretended it didn't need to invest in the future for a long time. By contrast, when Google didn't make a decision on wireless or wired, that was just the end of expanding that business.
I'm pretty sure that ARM won mobile because it is much lower power.