ashtonkem · 5 years ago
I think one day we’re going to wake up and discover that AWS mostly runs on Graviton (ARM) and not x86. And on that day Intel’s troubles will go from future to present.

My standing theory is that the M1 will accelerate it. Obviously all the wholly managed AWS services (Dynamo, Kinesis, S3, etc.) can change over silently, but the issue is EC2. I have an MBP, as do all of my engineers. Within a few years all of these machines will age out and be replaced with M1-powered machines. At that point the idea of developing on ARM and deploying on x86 will be unpleasant, especially since Graviton 2 is already cheaper per compute unit than x86 is for some workloads; imagine what Graviton 3 & 4 will offer.

rauhl · 5 years ago
> I have an MBP, as do all of my engineers. Within a few years all of these machines will age out and be replaced with M1-powered machines. At that point the idea of developing on ARM and deploying on x86 will be unpleasant

Is it not at least somewhat possible that at least some of those Apple laptops will age out and be replaced with GNU/Linux laptops? Agreed that developing on ARM and deploying on x86 is unpleasant, but so too is developing on macOS and deploying on Linux. Apple’s GNU userland is pretty ancient, and while the BSD parts are at least updated, they are also very austere. Given that friction is already there, is it likelier that folks will try to alleviate it with macOS in the cloud or GNU/Linux locally?

Mac OS X was a godsend in 2001: it put a great Unix underneath a fine UI atop good hardware. It dragged an awful lot of folks three-quarters of the way to a free system. But frankly I believe Apple have lost ground UI-wise over the intervening decades, while free alternatives have gained it (they are still not at parity, granted). Meanwhile, the negatives of using a proprietary OS are worse, not better.

ogre_codes · 5 years ago
> Is it not at least somewhat possible that at least some of those Apple laptops will age out and be replaced with GNU/Linux laptops?

Has Linux desktop share been increasing lately? I'm not sure why a newer Mac with better CPU options is going to result in increasing Linux share. If anything, it's likely to be neutral or to favor the Mac with its newer, faster CPU.

> But frankly I believe Apple have lost ground UI-wise over the intervening decades, while free alternatives have gained it (they are still not at parity, granted).

Maybe? I'm not as sold on Linux gaining a ton of ground here. I'm also not sold on the idea that the Mac as a whole is worse off interface-wise than it was 10 years ago. While there are some issues, there are also places where it's significantly improved, particularly if you have an iPhone and use Apple's other services.

surajrmal · 5 years ago
I develop on GNU/Linux begrudgingly. It has all of my tools, but I have a never-ending stream of issues with WiFi, display, audio, etc. As far as I'm concerned, GNU/Linux is something that's meant to be used headless and ssh'd into.
old-gregg · 5 years ago
> Is it not at least somewhat possible that at least some of those Apple laptops will age out and be replaced with GNU/Linux laptops?

And I personally hope that by then, GNU/Linux will have an M1-like processor to happily run on. The possibilities demonstrated by this chip (performance + silence + battery) are so compelling that it's inevitable we'll see them in non-Apple designs.

Also, as usually happens with Apple hardware advances, the Linux experience on M1 MacBooks will gradually get better as well.

mulmen · 5 years ago
Approximately zero MacBooks will be replaced by Linux laptops in the next couple of years. There is no new story in the Linux desktop world to make a Linux laptop more appealing. That people already chose to develop on macOS and deploy to Linux tells you all you need to know there.

MacPorts and Homebrew exist. Both more or less support the M1, and support is improving.

Big Sur is a Big Disaster, but hopefully this is just the macOS version of iOS 13 and next year's macOS goes back to being mostly functional. I have more faith in that than in a serviceable Linux desktop environment.

narrator · 5 years ago
This is why you run your environment in Docker on both Linux and macOS, so you don't have screwy deployment issues caused by macOS-vs-Linux differences.
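For example (a sketch, with made-up registry/image names): one Dockerfile can be built for both architectures with buildx, so the same image runs on an Intel box, an M1 laptop, or a Graviton instance:

    # Build and push a single image for both x86-64 and ARM64.
    docker buildx create --use
    docker buildx build \
      --platform linux/amd64,linux/arm64 \
      -t registry.example.com/myapp:latest \
      --push .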
john_alan · 5 years ago
You can easily bring macOS up to a Linux-level GNU userland with brew.

I agree generally though. I see macOS as an important Unix OS for the next decade.
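For instance, a minimal sketch (exact package list to taste):

    # Install a current GNU userland alongside the dated tools Apple ships.
    brew install coreutils findutils gnu-sed gawk grep bash
    # Homebrew prefixes most of these with "g" (gls, gsed, ggrep, ...); to
    # get the unprefixed names, put the gnubin directory first in PATH:
    export PATH="$(brew --prefix)/opt/coreutils/libexec/gnubin:$PATH"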

majormajor · 5 years ago
> Is it not at least somewhat possible that at least some of those Apple laptops will age out and be replaced with GNU/Linux laptops?

Sadly, fewer of my coworkers use Linux now than they did 10 years ago.

jjoonathan · 5 years ago
> GNU/Linux laptops

Could we do a roll call of experiences so I know which ones work and which ones don't? Here are mine.

    Dell Precision M6800: Avoid.
        Supported Ubuntu: so ancient that Firefox
        and Chrome wouldn't install without source-building
        dependencies.
        Ubuntu 18.04: installed but resulted in the
        display backlight flickering on/off at 30Hz.

    Dell Precision 7200:
        Supported Ubuntu: didn't even bother.
        Ubuntu 18.04: installer silently chokes on the NVMe
        drive.
        Ubuntu 20.04: just works.

eyelidlessness · 5 years ago
> Is it not at least somewhat possible that at least some of those Apple laptops will age out and be replaced with GNU/Linux laptops?

Some definitely will. Enough of them to be significant? Probably not. Even the most Vim- and CLI-oriented devs I know still prefer a familiar GUI for normal day-to-day work. Are they all going Ubuntu? Or Elementary? I mean, I welcome any migration that doesn't fracture the universe. But I don't think it's likely.

ashtonkem · 5 years ago
There is literally no chance of that. IT would find it an intolerable burden to manage, and I doubt the devs would like it either. Most of them seem pretty enthused to get their hands on an M1.

I’ve known colleagues who tried to run Linux professionally on well-reviewed Linux laptops, and their experience has been universally awful. Like “I never managed to get the wifi to work, ever” bad. The idea of gambling every developer on that is a non-starter even at my level, let alone across the org.

Const-me · 5 years ago
Building server software on Graviton ARM creates a vendor lock-in to Amazon, with very high costs of switching elsewhere. Despite using the A64 ISA and ARM’s cores, they are Amazon’s proprietary chips no one else has access to. Migrating elsewhere is going to be very expensive.

I wouldn’t be surprised if they subsidize their Graviton offering, taking profits elsewhere. This might make it seem like a good deal for customers, but I don’t think it is, at least not in the long run.

This doesn’t mean Graviton is useless. For services running Amazon’s code as opposed to customers’ code (like the PaaS things billed per transaction), the lock-in is already in place; custom processors aren’t going to make it any worse.

dragontamer · 5 years ago
I'm not necessarily disagreeing with you, but... maybe elaborating in a contrary manner?

Graviton ARM is certainly vendor lock-in to Amazon. But a Graviton is just a bog-standard Neoverse N1 core, which means the core is going to show similar characteristics to the Ampere Altra (also a bog-standard Neoverse N1 core).

There's more to a chip than its core. But... from a performance-portability and ISA perspective... you'd expect performance-portability between Graviton ARM and Ampere Altra.

Now the Ampere Altra is something like 2x80 cores, while Graviton comes in a bunch of different configurations. So it's still not perfect compatibility. But a single-threaded program probably couldn't tell the difference between the two platforms.

I'd expect that migrating between Graviton and Ampere Altra is going to be easier than Intel Skylake -> AMD Zen.

timthorn · 5 years ago
64-bit Ubuntu looks the same on Graviton as on a Raspberry Pi. You can take a binary you've compiled on the RPi, scp it to the Graviton instance, and it will just run. That works the other way round too, which is great for speedy Pi software builds without having to set up a cross-compile environment.
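Concretely, something like this (hostname and user are illustrative):

    # On a 64-bit Raspberry Pi: build natively, no cross-toolchain needed.
    gcc -O2 -o hello hello.c
    # Copy the binary straight to a Graviton instance and run it;
    # both machines are plain aarch64 Linux.
    scp hello ubuntu@graviton-host:
    ssh ubuntu@graviton-host ./hello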
pjmlp · 5 years ago
My Java and .NET applications don't care most of the time what hardware they are running on, and many of the other managed languages I use also do not, even when AOT-compiled to native code.

That is the beauty of having properly defined numeric types and a memory model, instead of the approach of C and its derivatives: whatever the CPU gives you, with whatever memory model.

gchamonlive · 5 years ago
I think OP was talking about managed services: Lambda, ECS and Beanstalk internals, EC2's internal management systems; that is, systems that are transparent to the user.

AWS could very well run their platform systems entirely on Graviton. After all, serverless and cloud are in essence someone else's server. AWS might as well run all their PaaS software on in-house architecture.

treve · 5 years ago
Maybe I'm missing something, but don't the vast majority of applications simply not care what architecture they run on?

The main difference for us was lower bills.

deaddodo · 5 years ago
> they are Amazon’s proprietary chips no one else has access to.

Any ARM licensee (IP or architecture) has access to them. They're just Neoverse N1 cores and can be synthesized on Samsung or TSMC processes.

jorblumesea · 5 years ago
Really, you could make that argument for any AWS service, and generally for using any cloud service provider. You get into the cloud, use their glue (Lambda, Kinesis, SQS, etc.) and suddenly migrating services somewhere else is a multi-year project.

Do you think vendor lock-in has stopped people in the past (or will in the future)? Those kinds of considerations are long-term, and many companies think short-term.

rapsey · 5 years ago
Why would it be lock-in? If you can compile for ARM you can compile for x86.
chasil · 5 years ago
As I understand it, ARM's new willingness to allow custom op-codes is dependent upon the customer preventing fragmentation of the ARM instruction set.

In theory, your software could run faster, or slower, depending upon Amazon's use of their extensions within their C library, or associated libraries in their software stack.

Maybe the wildest thing that I've heard is Fujitsu not implementing either 32-bit or Thumb on their new supercomputer. Is that a special case?

"But why doesn’t Apple document this and let us use these instructions directly? As mentioned earlier, this is something ARM Ltd. would like to avoid. If custom instructions are widely used it could fragment the ARM ecosystem."

https://medium.com/swlh/apples-m1-secret-coprocessor-6599492...

echelon · 5 years ago
> Building server software on Graviton ARM creates a vendor lock-in to Amazon

Amazon already has lock-in. Lambda, SQS, etc. They've already won.

You might be able to steer your org away from this, but Amazon's gravity is strong.

skohan · 5 years ago
This is kind of what should happen, right? I'm not an expert, but my understanding is that one of the takeaways from the M1's success has been the weaknesses of x86 and CISC in general. It seems as if there is a performance ceiling for x86 due to things like memory-ordering requirements and the complexity of legacy instructions, which just don't exist for other instruction sets.

My impression is that we have been living under the cruft of x86 because of inertia and mostly historical reasons, and it's mostly a good thing if we move away from it.

zucker42 · 5 years ago
M1's success shows how efficient and advanced the TSMC 5 nm node is. Apple's ability to deliver it with decent software integration also deserves some credit. But I wouldn't interpret it as the death knell for x86.
kllrnohj · 5 years ago
> weaknesses of x86 and CISC in general

"RISC" and "CISC" distinctions are murky, but modern ARM is really a CISC design these days. ARM is not at all in a "an instruction only does one simple thing, period" mode of operation anymore. It's grown instructions like "FJCVTZS", "AESE", and "SHA256H"

If anything, CISC has overwhelmingly and clearly won the debate. RISC is dead and buried, at least in any high-performance product segment (TBD how RISC-V ends up faring here).

It's largely "just" the lack of variable-length instructions that helps the M1 fly (the M1 under Rosetta 2 runs with the same x86 memory model, after all, and is still quite fast).

sf_rob · 5 years ago
Isn't most of the M1's performance success due to being an SoC, i.e. increasing component locality/bandwidth? I think ARM vs x86 performance on its own isn't a disadvantage for x86. Instead the disadvantages are a bigger competitive landscape (due to licensing and simplicity), growing performance parity, and SoCs arguably being contrary to x86 producers' business models.
erosenbe0 · 5 years ago
There isn't any performance-ceiling issue. The Intel ISA operates at a very slight penalty in terms of achievable performance per watt, but nothing in an absolute sense.

I would argue it isn't time for Intel to switch until we see a little more of the future, as process nodes may shrink at a slower rate. Will we have hundreds of cores? Field-programmable cores? More fixed-function hardware on chip, or less? How will high-bandwidth, high-latency GDDR-style memory mix with lower-latency, lower-bandwidth DDR memory? Will there be on-die memory like HBM for CPUs?

tapirl · 5 years ago
afavour · 5 years ago
On the flip side that post illustrates just how things can go wrong, too: Windows RT was a flop.
mhh__ · 5 years ago
I can see this happening for things that run in entirely managed environments, but I don't think AWS can make the switch fully until that exact hardware is on people's benches. Doing microbenchmarking is quite awkward in the cloud, whereas anyone with a Linux laptop from the last 20 years can access the PMCs for their hardware.
sitkack · 5 years ago
Very little user code generates binaries that can _tell_ they are running on non-x86 hardware. Rust is ARM-memory-model safe; existing C/C++ code that targets the x86 memory model is slowly getting ported over, but unless you are writing multithreaded C++ code that cuts corners it isn't an issue.

If you're running on the JVM, Ruby, Python, Go, D, Swift, Julia, or Rust, you won't notice a difference. It will be sooner than you think.
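To illustrate the corner-cutting: a contrived sketch (not from any real codebase) of code that leans on x86's strong store ordering instead of proper release/acquire semantics:

    // Producer/consumer with deliberately weak orderings.
    #include <atomic>
    #include <cassert>
    #include <thread>

    int payload = 0;
    std::atomic<bool> ready{false};

    void producer() {
        payload = 42;                                 // plain store
        ready.store(true, std::memory_order_relaxed); // no release fence
    }

    void consumer() {
        while (!ready.load(std::memory_order_relaxed)) {} // no acquire fence
        // On x86 (TSO) this tends to pass in practice, since the hardware
        // doesn't reorder the two stores. On ARM's weaker memory model the
        // consumer may observe ready == true while payload is still 0.
        // (Strictly it's a data race either way.) The portable fix is
        // memory_order_release on the store, memory_order_acquire on the load.
        assert(payload == 42);
    }

    int main() {
        std::thread t1(producer), t2(consumer);
        t1.join();
        t2.join();
    }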

Someone · 5 years ago
I would think the number of developers that have “that exact hardware” on their bench is extremely small (does AWS even tell you what CPU you get?).

What fraction of products deployed to the cloud has ever had its developers do _any_ microbenchmarking?

ashtonkem · 5 years ago
Professional laptops don’t last that long, and a lot of developers are given MBPs for their work. I personally expect that I’ll get an M1 laptop from my employer within the next 2 years. At that point the pressure to migrate from x86 to ARM will start to increase.
api · 5 years ago
I don't think it takes "exact" hardware. It takes ARM64, which M1 delivers. I already have a test M1 machine with Linux running in a Parallels (tech preview) VM and it works great.
d33lio · 5 years ago
While I generally agree with this sentiment, a lot of people don't realize how much the enterprise supply and product chain varies from the consumer equivalent. Huge customers that buy Intel chips at datacenter scale are pandered to and treated like royalty by both Intel and AMD. Companies are courted in the earliest stages of cutting-edge technical and product development, and given rates so low (granted, for huge volume) that most consumers would not even believe them. The fact that companies like Serve The Home exist proves this; for those who don't know, the real business model of Serve The Home is to give enterprise clients the ability to play around with a whole data center of leading-edge tech. Serve The Home is simply a marketing "edge API" of sorts for the operation. Sure, it might look like Intel isn't "competitive", but many of the Intel-vs-AMD flame wars in the server space over unreleased tech had their bidding wars settled years ago.

One thing to also consider: the reason Amazon hugely prioritizes its "services" over bare-metal deployment is likely that it can execute those services on cheap ARM hardware. Bare-metal boxes and VMs give the impression that customers' software will perform in an x86-esque manner. For Amazon, the cost of the underlying compute per core is irrelevant, since they've already solved the problem of meshing their hardware together with blazing-fast network links. In this way, the ball is heavily in ARM's court for the future of Amazon data centers, although banking and government clients will likely not move away from x86 any time soon.

ksec · 5 years ago
I commented [1] on something similar a few days ago:

>Cloud (Intel) isn’t really challenged yet....

AWS is estimated to be ~50% of the hyperscalers.

Hyperscalers are estimated to be 50% of the server and cloud business.

Hyperscalers are expanding at a faster rate than other markets.

The hyperscaler expansion trend is not projected to slow down anytime soon.

AWS intends to have all of their own workloads and SaaS products running on Graviton/ARM (while still providing x86 services to those who need it).

Google and Microsoft are already gearing up their own ARM offerings, partly confirmed by Marvell's exit from the ARM server business.

>The problem is single core Arm performance outside of Apple chips isn’t there.

Cloud computing charges per vCPU. On all current x86 instances, that is one hyper-thread. On AWS Graviton, a vCPU is an actual CPU core. There are plenty of workloads, and large customers like Twitter and Pinterest have tested and shown that AWS Graviton 2 vCPUs perform better than x86. All while being 30% cheaper. At the end of the day, it is workload per dollar that matters in cloud computing. And right now in lots of applications Graviton 2 is winning, in some cases by a large margin.

If AWS sells 50% of their services on ARM in 5 years' time, that is 25% of the cloud business alone. Since it offers a huge competitive advantage, Google and Microsoft have no choice but to join the race. And then there will be enough market force for Qualcomm, or maybe Marvell, to fab a commodity ARM server part for the rest of the market.

Which is why I was extremely worried about Intel. (Half of) the lucrative server market is basically gone. (And I haven't factored in AMD yet.) 5 years in tech hardware is basically 1-2 cycles. And there is nothing on Intel's roadmap that shows they have a chance to compete apart from marketing and sales tactics, which still go a long way if I have to be honest, but are not sustainable in the long term; they are more of a delaying tactic. Along with a CEO who, despite trying very hard, had no experience in the market and product business. Luckily, that is about to change.

Evaluating an ARM switch takes time, software preparation takes time, and, more importantly, getting wafers from TSMC takes time, as demand from all markets is exceeding expectations. But all of these are already in motion, and if this is the kind of response you get from Graviton 2, imagine Graviton 3.

[1] https://news.ycombinator.com/item?id=25808856

spideymans · 5 years ago
>Which is why I was extremely worried about Intel. (Half of) the lucrative server market is basically gone.

Right. I suspect we'll eventually look back at this period and realize that it was already too late for Intel to right the ship, despite ARM having a tiny share of PC and server sales.

Their PC business is in grave danger as well. Within a few years, we're going to see ARM-powered Windows PCs that are competitive with Intel's offerings in several metrics, but most critically, in power efficiency.

These ARM PCs will have tiny market share (<5%) for the first few years, because the manufacturing capacity to supplant Intel simply does not exist. But despite their small market share, these ARM PCs will have a devastating impact on Intel's future.

Assuming these ARM PCs can emulate x86 with sufficient performance (as Apple does with Rosetta), consumers and OEMs will realize that ARM PCs work just as well as x86 Intel PCs. At that point, the x86 "moat" will have been broken, and we'll see ARM PCs grow in market share in lockstep with the improvements in ARM manufacturing capacity (TSMC, etc...).

Intel is in a downward spiral, and I've seen no indication that they know how to solve it. Their best "plan" appears to be to just hope that their manufacturing issues get sorted out quickly enough that they can right the ship. But given their track record, nobody would bet on that happening. Intel better pray that Windows x86 emulation is garbage.

Intel does not have the luxury of time to sort out their issues. They need more competitive products to fend off ARM, today. Within a year or two, ARM will have a tiny but critical foothold in the PC and server market that will crack open the x86 moat, and invite ever increasing competition from ARM.

jayd16 · 5 years ago
I guess I don't understand why the M1 makes developing on Graviton easier. It doesn't make Android or Windows ARM dev any easier.

I guess the idea is to run a Linux flavor that supports both the M1 and Graviton on the Macs, and hope any native work is compatible?

wmf · 5 years ago
It's not hope; ARM64 is compatible with ARM64 by definition. The same binaries can be used in development and production.

Windows ARM development (in a VM) should be much faster on an M1 Mac than on an x86 computer since no emulation is needed.

_alex_ · 5 years ago
Dev in a Linux VM/container on your M1 MacBook, then deploy to a Graviton instance.
dfgdghdf · 5 years ago
Aren't most of us already programming against a virtual machine, such as Node, .NET or the JVM? I think the CPU architecture hardly matters today.
DreadY2K · 5 years ago
Many people do code against some sort of VM, but there are still people writing code in C/C++/Rust/Go/&c that gets compiled to machine code and run directly.

Also, even if you're running against a VM, your VM is running on an ISA, so performance differences between them are still relevant to your code's performance.

dboreham · 5 years ago
Having worked some on maintaining a stack on both Intel and ARM: it matters less than it did, but it's not a no-op. E.g. Node packages with native modules are often not available prebuilt for ARM, and then the build fails due to ... <after 2 days debugging C++ compilation errors, you might know>.
ghettoimp · 5 years ago
If it can emulate x86, is there really a motivation for developers to switch to ARM? (I don't have an M1 and don't really know what it's like to compile stuff and deploy it to "the cloud.")
ashtonkem · 5 years ago
Emulation is no way to estimate performance.
Steltek · 5 years ago
How much does arch matter if you're targeting AWS? Aren't the differences between local service instances vs instances running in the cloud a much bigger problem for development?
BenoitEssiambre · 5 years ago
Yeah, and I assume we are going to see Graviton/Amazon Linux based notebooks any day now.
agloeregrets · 5 years ago
Honestly, if Amazon spun this right, and they came pre-configured for development and distribution and had all the right little specs (13 and 16 inch sizes, HiDPI matte displays, long battery life, solid keyboard, MacBook-like trackpad), they could really hammer the backend dev market. Bonus points if they came with some sort of crazy assistance logic, like each machine getting a pre-provisioned AWS Windows server for streaming Windows x86 apps.
hendry · 5 years ago
Can't take Graviton seriously until I can run my binaries via Lambda on it.
dogma1138 · 5 years ago
At that point, if it's trouble for Intel, it would be a death sentence for AMD...

Intel has fabs. Yes, that may be what's holding them back at the moment, but it's also a big factor in what maintains their value.

If x86 dies and neither Intel nor AMD pivots in time, Intel can become a fab company. They already offer these services, nowhere near the scale of, say, TSMC, but they have a massive portfolio of fabs located in the West, plus a massive IP portfolio related to everything from IC design to manufacturing.

oldgradstudent · 5 years ago
> Intel can become a fab company

Not unless they catch up with TSMC in process technology.

Otherwise, they become an uncompetitive foundry.

api · 5 years ago
How hard would it be for AMD to make an ARM64 chip based partly on the IP of the Zen architecture? It seems like AMD could equal or beat the M1 if they wanted.
nickik · 5 years ago
AMD makes great designs; switching to ARM/RISC-V would make them lose value but not kill them.
skohan · 5 years ago
AMD also has a GPU division.
WoodenChair · 5 years ago
The thing about all of these articles analyzing Intel's problems is that nobody really knows the details of Intel's "problems" because it comes down to just one "problem" that we have no insight into: node size. What failures happened in Intel's engineering/engineering management of its fabs that led to it getting stuck at 14 nm? Only the people in charge of Intel's fabs know exactly what went wrong, and to my knowledge they're not talking. If Intel had kept chugging along and got down to 10 nm years ago when they first said they would, and then 7 nm by now, it wouldn't have any of these other problems. And we don't know exactly why that didn't happen.
ogre_codes · 5 years ago
Intel's problem was that they were slow getting their 10nm design online. That's no longer the case. Intel's new problem is much bigger than that at this point.

Until fairly recently, Intel had a clear competitive advantage: their near-monopoly on server and desktop CPUs. Recent events have illustrated that the industry is ready to move away from Intel entirely. Apple's M1 is certainly the most conspicuous example, but Microsoft is pushing that way (a bit slower), Amazon is already pushing their own server architecture, and this is only going to accelerate.

Even if Intel can get their 7nm process online this year, Apple is gone, Amazon is gone, and more will follow. If Qualcomm is able to bring new CPUs online from their recent acquisition, that's going to add another high-performance desktop/server-ready CPU to the market.

Intel has done well so far because they can charge a pretty big premium as the premier x86 vendor. The days when x86 commands a price premium are quickly coming to an end. Even if Intel fixes their process, their ability to charge a premium for chips is fading fast.

JoshTko · 5 years ago
We actually have a lot of insight, in that Intel still doesn't have a good grasp on the problem. Their 10nm was supposed to enter volume production in mid-2018, and they still haven't truly entered volume production today. Additionally, Intel announced in July 2020 that their 7nm is delayed by at least a year, which means they still haven't figured out their node-delay problem.
WoodenChair · 5 years ago
> We actually have a lot of insight, in that Intel still doesn't have a good grasp on the problem. Their 10nm was supposed to enter volume production in mid-2018, and they still haven't truly entered volume production today. Additionally, Intel announced in July 2020 that their 7nm is delayed by at least a year, which means they still haven't figured out their node-delay problem.

Knowing something happened is not the same as knowing "why" it happened. That's the point of my comment. We don't know why they were not able to achieve volume production on 10 nm earlier.

Spooky23 · 5 years ago
Wasn’t the issue that the whole industry did a joint venture, but Intel decided to go it alone?

I worked at a site (in an unrelated industry) where there was a lot of collaborative semiconductor stuff going on, and the only logo “missing” was Intel.

visceral · 5 years ago
I think it's pretty clear from the article what happened. They didn't have the capital (stemming from a lack of foresight and incentives) to invest in these fabs, relative to their competition.

If you look at this from an engineering standpoint, I think you'll miss the forest for the trees. From a business and strategy standpoint, this was a classic case of disruption. The dominant player, Intel, was making tons of money on x86 and missed the mobile opportunity. TSMC and Samsung seized on the opportunity to manufacture these chips when Intel wouldn't. As a result, they had more money to invest in research to build better fabs, funded by the many customers buying mobile chips. Intel, being the only customer of its fabs, would only have money to improve them if it sold more x86 chips (which were stagnating). By this time, it was too late.

ineedasername · 5 years ago
I found the geopolitical portion to be the most important aspect here. China has shown a willingness to flex its muscles to enforce its values beyond its borders. China is smart and plays a long game. We don't want to wake up one day, find they've flexed those muscles on their regional neighbors, similar to their rare-earths strong-arming of 2010-2014, and not have fab capabilities in the West to fall back on.

(For that matter, I'm astounded that after 2014 the status quo returned on rare earths with very little state-level strategy or subsidy to address the risk there.)

npunt · 5 years ago
Ben missed an important part of the geopolitical difference between TSMC and Intel: Taiwan is much more invested in TSMC's success than America is in Intel's.

Taiwan's share of the semiconductor industry is 66%, and TSMC is the leader of that industry. Semiconductors help keep Taiwan safe from China's encroachment because they buy Taiwan protection from allies like the US and Europe, whose economies heavily rely on them.

To Taiwan, semiconductor leadership is an existential question. To America, semiconductors are just business.

This means Taiwan is also likely to do more politically to keep TSMC competitive, much like Korea with Samsung.

blackrock · 5 years ago
Neither Taiwan nor TSMC can produce the key tool that makes this all work: the photolithography machine itself.

Only ASML currently has that technology.

And it turns out the photolithography machine isn't really a plug-and-play device. It's very fussy. It breaks often. And it requires an army of engineers (as cheap as possible) to man the machines and produce the required yield, in order to make the whole operation profitable.

This is the Achilles’ Heel of the whole operation.

I suspect that China is researching and producing their own photolithography machines, independent of American or Western technology. And when they crack it, they will recapture the entire Chinese market for themselves, and TSMC will become irrelevant to any strategic or tactical plans of theirs.

mc10 · 5 years ago
> Semiconductors help keep Taiwan safe from China's encroachment because they buy Taiwan protection from allies like the US and Europe, whose economies heavily rely on them.

Are there any signed agreements that would enforce this? If China one day suddenly decides to take Taiwan, would the US or Europe step in with military forces?

PKop · 5 years ago
>I'm astounded

Our political system and over-financialized economy seem to suffer from the same hyper-short-term focus that many corporations chasing quarterly returns run into. No long-term planning or focus, and perpetual "election season" thrashing one way or another while nothing is followed through on.

Plus, in 2, 4, or 8 years many of the leaders are gone and making money in lobbying or corporate positions. No possibly-short-term-painful but long-term-beneficial policy gets enacted, etc.

And many still uphold our "values" and our system as the ideal, and question anyone who would look towards the Chinese model as providing something to learn from. So I anticipate this trend will continue.

echelon · 5 years ago
It appears the Republicans are all-in on the anti-China bandwagon. Now you just have to convince the Democrats.

I don't think this will be hard. Anyone with a brain looking at the situation realizes we're setting ourselves up for a bleak future by continuing the present course.

The globalists can focus on elevating our international partners to distribute manufacturing: Vietnam, Mexico, Africa.

The nationalists can focus on domestic jobs programs and factories. Eventually it will become clear that we're going to staff them up with immigrant workers and provide a path to citizenship. We need a larger population of workers anyway.

okl · 5 years ago
> [...] and not have fab capabilities to fall back on in the West.

I'm not too concerned:

- There are still a number of foundries in western countries that produce chips which are good enough for "military equipment".

- Companies like TSMC are reliant on imports of specialized chemicals and tools mostly from Japan/USA/Europe.

- Any move from China against Taiwan would likely be followed by significant emigration/"brain drain".

ineedasername · 5 years ago
National security doesn't just extend to direct military applications. Pretty much every industry and piece of critical infrastructure comes into play here. It won't matter if western fabs can produce something "good enough" if every piece of technological infrastructure from the past 5 years was built with something better.

As for moves against Taiwan, China hasn't given up that prize. Brain drain would be moot if China simply prevented emigration. I view Hong Kong right now as China testing the waters for future actions of that sort.

Happily, though, I also view TSMC's pending build of a fab in Arizona as exactly the sort of geographic diversification of industrial and human resources that's necessary. We just need more of it.

Lopiolis · 5 years ago
The issue isn't just military equipment though. When your entire economy is reliant on electronic chips, it's untenable for all of those chips to come from a geopolitical opponent. That gives them a lot of influence over business and politics without having to impact military equipment.
bee_rider · 5 years ago
Yeah, for some reason, I assumed that military equipment mostly used, like, low performance but reliable stuff. In-order processors, real time operating systems, EM-hardening. Probably made by some company like Texas Instruments, who will happily keep selling you the same chip for 30 years.
Spooky23 · 5 years ago
That’s a good comparison... CPUs are increasingly a commodity.
totalZero · 5 years ago
> This is why Intel needs to be split in two. Yes, integrating design and manufacturing was the foundation of Intel’s moat for decades, but that integration has become a straight-jacket for both sides of the business. Intel’s designs are held back by the company’s struggles in manufacturing, while its manufacturing has an incentive problem.

The only comparable data point says that this is a terrible idea. AMD spun out GlobalFoundries after a deep slide in their valuation, and the stock (as well as the company's reputation) remained in the doldrums for several years after that. Chipmaking is a big business and there are many advantages to vertical integration when both sides of the company function appropriately. If you own the fabs and there is a surge in demand (as we see now at the less extreme end of the lithography spectrum), your designs get preferential treatment.

Intel's problem isn't the structure of the company, it's the execution. Swan was not originally intended as the permanent replacement for Krzanich [0], and it's a bit strange to draw conclusions about whether the company can steer away from the rocks when the new captain isn't even going to take the helm until the middle of next month.

People are viewing Intel's suggestion that it may use TSMC's fabs for some products as a negative for Intel, but I just see it as a way to exert pressure on AMD's gross margin by putting some market demand pressure on the extreme end of the lithography spectrum (despite sustained demand in TSMC's HPC segment, TSMC's 7nm+ and 5nm are not the main driver of current semiconductor shortages).

[0] https://www.engadget.com/2019-01-31-intel-gives-interim-ceo-...

garaetjjte · 5 years ago
>The only comparable data point says that this is a terrible idea.

Huh, I would say the complete opposite. AMD wouldn't have survived if it had kept trying to improve its own process instead of going to TSMC.

ZeroCool2u · 5 years ago
The problem here is not the success of AMD after splitting, but the complete retreat of GlobalFoundries from the state-of-the-art process node. If this happens again with an Intel split, then we have only TSMC left, off the coast of mainland China in Taiwan, in the middle of a game of thermonuclear tug-of-war between the West and China.

While capitalism will likely be part of the solution, through subsidies for Intel or some other form, it must take a back seat to preventing the scenario described above from becoming reality. We are on the brink of this happening already, with so many people suggesting such a split and ignoring what happened to AMD and GF.

The geopolitical ramifications of completely centralizing the only leading process node in such a sensitive area between the world's superpowers cannot be overstated.

Full disclosure: I'm a shareholder in Intel, TSMC, and AMD.

twblalock · 5 years ago
AMD had to go through that in order to become a competitive business again. Look at them now! Maybe Intel's chip design business needs to go through the same thing.

Maybe there is a way for Intel to open up its fab business to other customers and make it more independent, without splitting it off into another company. However, it seems like that would require a change in direction that goes against decades of company culture. It might be easier to achieve that by actually splitting the fab business off.

totalZero · 5 years ago
Self-immolation is only a path to growth if you're a magical bird -- it's not a reasonable strategy for a healthy public company. AMD went through seven years of pain and humiliation between that spinoff and its 2015 glow-up. I understand that sometimes the optimal solution involves a short-term hit, but you don't just sell your organs on a lark (nor because some finance bros at Third Point said so). There are obvious strategic reasons to remain an IDM, and AMD would never have gone fabless if the company hadn't been in an existential crisis. Intel is nowhere near that kind of crisis; it may have some egg on its face but the company still dominates market share in its core businesses and is making profits hand over fist.

> Maybe there is a way for Intel to open up its fab business to other customers and make it more independent, without splitting it off into another company.

Intel Custom Foundry. They have several years of experience doing exactly what you describe, and that's how their relationship with Altera (which they later acquired) began. I see AMD's subsequent bid for Xilinx as a copycat acquisition that demonstrates one of the competitive advantages of Intel's position as an IDM: information.

Covzire · 5 years ago
But look at GlobalFoundries now. The article does suggest that Intel's spun-off fabs would need state funding to survive, but is that really tenable for the long term? Is that TSMC's secret thus far?
jjoonathan · 5 years ago
The "US manufacturing is actually stronger than ever" camp used to cook their books by over-weighting Intel profits. Hopefully this will be a wakeup call.
colinmhayes · 5 years ago
Manufacturing isn't an industry that the U.S. should be interested in currently. Wait until the entire factory can be automated and it will come back. Until then enjoy the cheaper goods provided by globalized labor.
thoughtsimple · 5 years ago
How does Moore's law figure into this? I suspect that TSMC runs into the wall that is quantum physics at around 1-2nm. Considering that TSMC has said that they will be in full production of 3nm in 2022, I can't see 1nm being much beyond 2026-2028. What happens then? Does a stall in die shrinks allow other fabs to catch up?

It appears to me that Intel stalling at 14nm is what opened the door for TSMC and Samsung to catch up. Does the same thing happen in 2028 and allow China to finally catch up?

jng · 5 years ago
Modern process node designations (5nm, 3nm, ...) are not measurements any more; they are marketing terms. The actual amount of shrinking is a lot smaller than the names would seem to indicate, and not approaching the quantum limits as fast as it may seem.
sobellian · 5 years ago
If I recall correctly from my uni days, one of the big challenges with further shrinking the physical gates is that the parasitic capacitance on the gates becomes very hard to control, and the power consumption of the chip is directly related to that capacitance. Of course, nothing is so simple and I'm sure Intel can make some chips at very small process sizes, but at the cost of horrible yield.
chaorace · 5 years ago
I did not know that! Though, that answer raises its own questions...

If the two are entirely unlinked, what's stopping Intel from slapping "Now 3nm!" on their next-gen processors? Surely some components must be at the advertised size, even if it's no longer a clear-cut, all-or-nothing descriptor, right? What's actually being sized down, and why is it seemingly posing so many challenges for Intel's supply chain?

kasperni · 5 years ago
Jim Keller believes that at least 10-20 years of shrinking is possible [1].

[1] https://www.youtube.com/watch?v=Nb2tebYAaOA&t=1800

MangoCoffee · 5 years ago
> I can't see 1nm being much beyond 2026-2028. What happens then?

Whatever marketing people come up with? Moore's law is not a law but an observation. It doesn't really matter, though; we are going to 3D chips, chiplets, advanced packaging, etc.

wffurr · 5 years ago
Quantum effects haven't been relevant for a while now. The "nanometer" numbers are marketing around different transistor topologies like FinFET and GAA (gate-all-around). There's a published roadmap out to "0.7 eq nm". Note how the "measurements" all have quotes around them:

https://www.extremetech.com/computing/309889-tsmc-starts-dev...

viktorcode · 5 years ago
Eventually, CPUs will have to focus on going wide, i.e. growing number of cores and improving interconnections.
kache_ · 5 years ago
moar coars
klelatti · 5 years ago
I feel this piece ducks one of the most important questions: what is the future and value of x86 to Intel? For a long time x86 was one half of the moat, but it feels like that moat is close to crumbling.

Once that happens, the value of the design part of the business will be much, much lower, especially if they have to compete with an on-form AMD. Can they innovate their way out of this? It doesn't look entirely promising at the moment.

stefan_ · 5 years ago
Why are people so hung up on the x86 thing? ARM continues to be sold on because everyone has now understood they don't really matter; they are not driving the innovations. They were simply the springboard for the Apples, Qualcomms and Amazons to drive their own processor designs, and they are not set up to profit from that. ARM's reference designs aren't competitive; the M1 is.

Instruction set architecture at this point is a bikeshed debate, it's certainly not what is holding Intel back.

usefulcat · 5 years ago
I'm not sure that's entirely true. According to this (see "Why can’t Intel and AMD add more instruction decoders?"):

https://debugger.medium.com/why-is-apples-m1-chip-so-fast-32...

...a big part of the reason the M1 is so fast is the large reorder buffer, which is enabled by the fact that ARM instructions are all the same size, which makes parallel instruction decoding far easier. Because x86 instructions are variable-length, the processor has to do some amount of work just to find out where the next instruction starts, and I can see how it would be difficult to do that work in parallel, especially compared to an architecture with a fixed instruction size.
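The serial dependency is easy to see in miniature. A toy sketch (the "ISA" here is invented: the low 2 bits of an instruction's first byte give its length):

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Toy length decoder for a variable-length encoding.
    std::size_t length_of(unsigned char first_byte) { return (first_byte & 3) + 1; }

    int main() {
        std::vector<unsigned char> code = {0x03,0,0,0, 0x01,0, 0x00, 0x02,0,0};

        // Fixed 4-byte encoding (ARM64-style): instruction N starts at byte
        // 4*N. All boundaries are known up front, so decoders can work in
        // parallel.
        for (std::size_t i = 0; i < 3; ++i)
            std::cout << "fixed-width insn " << i << " @ " << 4 * i << '\n';

        // Variable-length encoding (x86-like): each start depends on the
        // previous instruction's decoded length -- an inherently serial scan.
        for (std::size_t off = 0; off < code.size(); off += length_of(code[off]))
            std::cout << "variable-width insn @ " << off << '\n';
    }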

amluto · 5 years ago
I would argue that ISA does matter. Beyond the decode width issue, x86 has some material warts compared to ARM64:

The x86 atomic operations are fundamentally expensive. ARM’s new LSE extensions are more flexible and can be faster. I don’t know how much this matters in practice, but there are certainly workloads for which it’s a big deal.

x86 cannot context-switch or handle interrupts efficiently. ARM64 can. This completely rules x86 out for some workloads.

ARM64 has TrustZone. x86 has SMM. One can debate the merits of TrustZone. SMM has no merits.

Finally, x86 is more than an ISA - it’s an ecosystem, and the x86 ecosystem is full of legacy baggage. If you want an Intel x86 solution, you basically have to also use Intel’s chipset, Intel’s firmware blobs, Intel’s SMM ecosystem, all of the platform garbage built around SMM, Intel’s legacy-on-top-of-legacy poorly secured SPI flash boot system, etc. This is tolerable if you are building a regular computer and can live with slow boot and with SMM. But for more embedded uses, it’s pretty bad. ARM64 has much less baggage. (Yes, Intel can fix this, but I don’t expect them to.)

blinkingled · 5 years ago
Well put. People are picking teams as usual on x86 vs ARM. Intel has execution problems in two departments: manufacturing and integration. The ISA is not an issue; they can very well solve the integration issues, and investing in semiconductor manufacturing is the need of the hour for the US, so I can imagine them getting some traction there with enough money and will.

IOW, even if Intel switched ISA to ARM, it wouldn't magically fix any of these issues. We've had a lot of ARM vendors trying for too long to do what Apple did.

totalZero · 5 years ago
The demise of x86 isn't something that can be decreed by fiat. It could come about, but there would need to be a very compelling reason to motivate the transition. Technologies that form basic business and infrastructural bedrock don't go away just because of one iteration -- look at Windows Vista for example.

Even if every PC and server chip manufacturer were to eradicate x86 from their product offerings tomorrow, you'd still have over a billion devices in use that run on x86.

marcosdumay · 5 years ago
It's not the demise of x86. It's the demise of x86 as a moat.

Those are different things. We have seen minuscule movement on the first, but we've been running towards the second since the '90s, and it looks like we are close now.

spideymans · 5 years ago
Windows Vista's problems were relatively easy to solve, though. Driver issues naturally sorted themselves out over time, performance became less of an issue as computers got more powerful, and the annoyances with Vista's security model could be solved with some tweaking around the edges. There wasn't much incentive to jump from the Windows ecosystem, as there was no doubt that Microsoft could rectify these issues in the next release of Windows. Indeed, Windows 7 went on to be one of the greatest Windows releases ever, despite being little more than a tweaked version of the much-maligned Vista.

Intel's problems are a lot more structural in nature. They lost mobile, they lost the Mac, and we could very well be in the early stages of them losing the server (to Graviton, etc...) and the mobile PC market (if ARM PC chips take off in response to M1). Intel needs to right the ship expeditiously, before ARM gets a foothold and the x86 moat is irreversibly compromised. Thus far, we've seen no indication that they know how to get out of this downward spiral.

nemothekid · 5 years ago
>look at Windows Vista for example.

This is a terrible example, for the reasons stated in the article. Microsoft is already treating Windows more and more like a stepchild every day; Office and Azure are the new cool kids.

klelatti · 5 years ago
I was extremely careful not to say that x86 would go away!

But it doesn't have to for Intel to feel the ill effects. There just have to be viable alternatives that drive down the price of their x86 offerings.

mhh__ · 5 years ago
It's worth saying that CPU design isn't like software. Intel and AMD cores are fairly different, and the ISA is the only thing that unites them.

If x86 finally goes, and Intel and AMD both switched elsewhere, we'd be seeing the same battle as usual but in different clothes.

On top of the raw uarch design, there are also the peripherals, the RAM standard, etc.

klelatti · 5 years ago
Fair points, but if you're saying that if we moved to a non-x86 (and presumably ARM-based) world then it's business as usual for Intel and AMD, I'd strongly disagree: it's a very different (and much less profitable) commercial environment with lots more competition.
tyingq · 5 years ago
I agree that the moat is falling away. There used to be things like TLS running faster because there was optimized x86 ASM in that path, but none for other architectures. That's no longer true.

I suppose Microsoft would be influential here. Native Arm64 MS Office, for example.

varispeed · 5 years ago
My view is that currently the only way for Intel to salvage themselves is to go the ARM route and start licensing x86 IP, and perhaps even open-source some bits of the tech. They are unable to sustain this tech by themselves, nor with AMD, anymore. It seems to me that when Apple releases their new CPUs, I am going to have to move to that platform in order to keep up with the competition (the quicker the core, the quicker I can do calculations and deliver the product). Currently I am on AMD, but it is only marginally faster than the M1, it seems.
AlotOfReading · 5 years ago
Are they even able to do that legally? I'm pretty sure the x86 licensing agreement with AMD explicitly prohibited this for both parties.
samus · 5 years ago
The M1 is on a more advanced node than both Intel's and AMD's designs. Architecture goes a long way, of course.
eutropia · 5 years ago
I worked at Intel in 2012 and 2013. Back then, we had a swag t-shirt that said "I've got x86 problems but ARM ain't one".

I went and dug that shirt out of a box and had a good laugh when Apple dropped the M1 macs.

Back then, the company was confident that they could make the transition to EUV lithography and had marketing roadmaps out to 5nm...