ChuckMcM · 4 years ago
I found the news of Intel releasing this chip quite encouraging. If they have enough capacity on their 10nm node to put it into production then they have tamed many of the problems that were holding them back. My hope is that Gelsinger's renewed attention to engineering excellence will allow the folks who know how to iron out a process to work more freely than they did under the previous leadership.

That said, fixing Intel is a three-step process, right? First they have to get their process issues under control (it seems like they are making progress there). Second, they need to figure out third-party use of that process so that they can bank some of the revenue that is out there from the chip shortage. And finally, they need to answer the "jelly bean" market: "jelly bean" type processors have become powerful enough to be the only processor in a system, so Intel needs to play there or it will lose that whole segment to Nvidia/ARM.

nickysielicki · 4 years ago
Forget all the technical problems, fixing Intel is mostly about figuring out how they let a bunch of MBAs mismanage the company to the point that they squandered a two-decade long global monopoly on computing, in an era where computing absolutely dominated global economic development.

I mean, when you put it that way, it's genuinely astounding that they managed to find themselves technically so far behind, and so quickly. This was not a minor mismanagement fuck-up; this was complete incompetence. Things clearly need to be burnt to the ground; they need a lot of churn.

If they can get back to being engineering-driven rather than financially driven, it will all fall into place.

foobiekr · 4 years ago
Once you invite the vampire into the house, should you really be surprised to find it sucking your blood? The answer is that, like most large companies, they decided technologist leadership no longer mattered. It's a pattern that repeats again and again, and that is because the people who need to learn the lesson most are counter-incentivized to do so.
wpietri · 4 years ago
> bunch of MBAs mismanage the company to the point that they squandered a two-decade long global monopoly on computing

I'm generally happy to blame MBAs for quite a bit. But isn't this also part of the nature of monopolies? A lack of competition means that in the short term, strategy doesn't matter at all. There was a period where Intel execs could have run the company via astrological chart or coin flip and done just as well for themselves. Managerialism and other MBA ideologies make that worse for sure.

rich_sasha · 4 years ago
While I largely agree, it is also true that running a business requires very different skills than building its product.

MBA or not, technologist or not, you can still be good or bad at what you do. Smart or stupid. I think simply looking at a mismanaged tech company and saying “ah too many MBAs in charge” is missing the point.

My guess is, if you look at the well-run large companies, these will also have an MBA-rich management core.

I worked at a mid-sized company completely driven by technological people, with no one having a solid business background. It wasn’t good.

tshaddox · 4 years ago
> fixing Intel is mostly about figuring out how they let a bunch of MBAs mismanage the company to the point that they squandered a two-decade long global monopoly on computing, in an era where computing absolutely dominated global economic development.

Isn’t the situation you described precisely where we would expect stagnation to be very likely?

totalZero · 4 years ago
> they squandered a two-decade long global monopoly on computing

This conclusion is extremely premature. Intel has greater market share now than it did in the Athlon 64 days.

DrBazza · 4 years ago
> I mean, when you put it that way, it's genuinely astounding that they managed to find themselves technically so far behind, and so quickly. This was not a minor mismanagement fuck up, this is complete incompetence.

Is it though?

Or is it just the hardware equivalent of the risky move of throwing it all away and rewriting from scratch for version 2? Intel have just incrementally improved their design, whereas AMD threw theirs away and almost disappeared in the process. In a parallel dimension, Intel could be exactly where they are now, with no AMD.

But... I agree that there was/is clearly a huge amount of incompetence in letting non-engineers run an engineering company.

ska · 4 years ago
> figuring out how they let a bunch of MBAs mismanage the company

I think this is often stated as a cause, but more usually a symptom.

jl2718 · 4 years ago
I think they should replace every person in the hierarchy with a randomly-selected subordinate.
ksec · 4 years ago
>how they let a bunch of MBAs mismanage the company to the point that they squandered a two-decade long global monopoly on computing

A MBA Story.

Once upon a time, an MBA joined a company and worked his way up from Finance Director to C-level. He could probably be described as a purebred MBA. He made friends with the other, rarer MBAs in the company and improved their career paths along the way. Why? Because that is what you do: an MBA always helps another MBA. Somewhere along the line, the company's first non-technical, non-engineer MBA CEO was born.

As the company grew there were more MBAs, but not all MBAs are bad; you just need engineers and product people around them. But some of those engineers and product geniuses [1] are a pain in the ass to manage, and since many of them don't have an MBA, they don't understand MBA-speak and get driven out of the company's decision-making forums. So what happened was that these product engineers got pushed out over the years, even the ones at C-level, and very publicly.

With a leading-edge tech company there is a very long lead time in the product R&D roadmap, so they were fine for the first few years and continued to post record-breaking results. Then the MBA moved his way up to the board and became chairman, just in time before his MBA CEO friend retired. Which meant the MBA Chairman got to pick a new CEO again; guess which CEO an MBA would pick? Another MBA! Since an MBA always helps another MBA, the new MBA CEO promoted more MBAs to various roles within the company. And the MBA cult now officially dictated the company's decision making.

Everything was great for a few more years: the market was growing, they had a monopoly, and most important of all, they were doing great because they were MBAs. At least, that is what those MBAs thought. Until one day cracks started appearing. The MBA CEO thought it was all fine, but over time external pressure piled in and the whole industry moved on; the market looked very different from what it had ten years earlier. And somehow the MBA CEO turned out to be the subject of an ongoing internal investigation which showed he had had an inappropriate relationship, and he was let go. Which is MBA-speak for the CEO being fired. A recently arrived CFO was named interim CEO while the MBA Chairman and the board continued their CEO search. They could have hired another MBA from outside the company, but since the leading-edge tech company was a gigantic one with so much specific domain knowledge and culture, the suitable choices outside the company were virtually nil. The MBA Chairman looked at the CFO: not his first choice, and fairly new to the tech company, but he had an MBA! And the third MBA CEO was born.

The market was now moving at a much faster pace than before. To give the new MBA CEO credit, he tried very hard to steer the ship onto a new path and direction. But the tech company was already behind. The new direction and its requirements were completely unfamiliar; the MBA middle management had no idea what they were doing and couldn't turn the ship fast enough. And while they were still turning, the industry moved further along. I guess this is an example of MBAs being killed by MBAs.

Finally the shareholders were so fed up that some of them asked the same question:

>how they mismanage the company to the point that they squandered a two-decade long global monopoly on computing, in an era where computing absolutely dominated global economic development.

There was very little the shareholders could do. They needed those product engineers back. And a new CEO. But the perfect candidate had had a falling-out with the MBA Chairman a decade ago, so the only option was to rally enough support to get rid of the chairman. In the end, the MBA Chairman retired, a new board chairman was named, and the product engineers came back to save the company.

As far as the MBA story goes, this is the end. We don't know what happens to the leading-edge company; I guess that part is to be continued.

[1] https://www.youtube.com/watch?v=lmFlOd0MGZg

DCKing · 4 years ago
The Amlogic S905X3/4 are "jelly bean" chips under this definition - you can buy full computers with those including a motherboard of sorts, RAM and standard USB/HDMI/Ethernet/eMMC/Wifi for under $30 bulk in China. They're branded as "TV boxes". The S905X3 is also the SoC of the $55 Odroid C4 SBC.

Yet those chips are apparently made on either TSMC's or GloFo's 12nm node (which one, I can't find). Whilst Intel's 14nm is still superior to TSMC/GloFo 12nm by all accounts, they're definitely in the same ballpark. Competition for Intel looks rough even for cheaper high-volume chips. You'd also need to consider that Intel still needs some time to adapt to being a third-party foundry.

ChuckMcM · 4 years ago
Good analysis, I didn't say it would be easy :-).
judge2020 · 4 years ago
Production for a datacenter CPU is not the same as production for datacenter plus enthusiast-grade consumer CPUs, which is what Zen 3 currently achieves, unfortunately. Rocket Lake being backported to 14nm is still not a good sign for actual production volume, although it probably means the next generation will be 10nm all the way.
willis936 · 4 years ago
Datacenter CPUs are much larger than consumer parts, and yield goes down with the square of the die area. They start with these because the margins go up faster than the square of the die area.
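To put rough numbers on the yield effect, here's a minimal sketch using a simple Poisson defect model; the defect density is an assumed, illustrative figure, and real foundry yield models are more involved:

    import math

    def poisson_yield(die_area_mm2, defects_per_cm2=0.1):
        """Fraction of dice with zero defects under a simple Poisson model."""
        return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

    # Consumer-sized vs. server-sized dice, in mm^2
    for area in (100, 200, 400, 600):
        print(f"{area:3d} mm^2 die -> ~{poisson_yield(area):.0%} yield")

Either way, the big monolithic server dice are the ones that suffer most when a process is immature, which fits the margin argument above.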
totalZero · 4 years ago
> Rocket Lake being backported to 14nm is still not a good sign for actual production volume

I'm genuinely having trouble understanding what you mean by this.

Rocket Lake being backported to 14nm means that 10nm can be allocated in greater proportion toward higher-priced chips like Alder Lake and Ice Lake SP. Seems like it would be good for production volume.

knz_ · 4 years ago
> Rocket Lake being backported to 14nm is still not a good sign for actual production volume,

I'm not seeing a good reason for thinking this is the case. Server CPUs are harder to fab (much larger die area) and they need to fab more of them (desktop CPUs are relatively niche compared to mobile and server CPUs).

If anything this is a sign that 10nm is fully ready.

bushbaba · 4 years ago
I assume that for Intel, they make more server CPUs per year than the entirety of AMD's output.
buu700 · 4 years ago
What's a "jelly bean" processor? Trying to search for that just gets a bunch of hits about Android 4.1.
dragontamer · 4 years ago
http://sparks.gogo.co.nz/assets/_site_/downloads/smd-discret...

> Jellybean is a common term for components that you keep in your parts inventory for when your project just needs “a transistor” or “a diode” or “a mosfet”

-----------

For many hobbyists, a Raspberry Pi or Arduino is a good example of a jellybean. You buy 10x Raspberry Pis and stuff your drawer full of them, because they're cheap enough for most tasks. You don't really know what you're going to use all 10 Raspberry Pis for, but you know you'll find a use for them a few weeks from now.

---------

At least, in my computer engineering brain, I think of 2N2222 or 2N3904 transistors, or the 741 op-amp. There are better op-amps and better transistors for any particular job. But I choose these parts because they're familiar, comfortable, cheap, and well understood by a wide variety of engineers.

Well, not the 741 op-amp anymore, anyway. The 741 was a jellybean back in the 12V days. Today, I think 5V has become the standard voltage (because of USB), so 5V op-amps are the more important "jellybean".

ChuckMcM · 4 years ago
Sometimes referred to as an "application-specific processor" (ASP) or "system on chip" (SoC). These are the bulk of semiconductor sales these days, as they have replaced all of the miscellaneous gate logic on devices with a single programmable block that has a bunch of built-in peripherals.

Think Atmel ATmega parts; there are trillions of these in various roles. When you consider that something like a 555 timer[1] is now more cost-effectively and capably replaced with an 8-pin microprocessor, you can get an idea of the shift.

While these are rarely built on the "leading edge" process node, when a new process node takes over for high-margin chips, the previous node gets used for lower-margin chips, which effectively does a shrink on their die, reducing their cost (most of these chips keep their performance specs fairly constant, preferring cost reduction over performance improvement).

Anyway, the zillions of these chips in lots of different "flavors" are colloquially referred to as "jelly bean" chips.

madsushi · 4 years ago
https://news.ycombinator.com/item?id=17376874

> [1] Jelly Bean chips are those that are made in batches of 1 - 10 million with a set of functions that are fairly specific to their application.

carlhjerpe · 4 years ago
What I don't understand is: ASML is building these machines for making ICs. Why can TSMC use them for 7nm but Intel can only use them for 10 right now? Doesn't ASML make the lenses as well so that you're "only" stuck making the etching thingy (forgot what it's called, but the reflective template of a CPU).

It seems like nobody is talking about this; could anyone shed some light on it?

dragontamer · 4 years ago
Consider that the wavelength of red light is 700 nm, and the wavelength of UV-C is 100nm to 280nm.

And immediately we see the problem with dropping to 10nm: the features are literally smaller than the wavelength of the light being used to print them.

And yeah, "10nm" and "7nm" are marketing terms, but that doesn't change the fact that these processes all have features smaller than the wavelength of light.

-------

So there are two ways to get around this problem.

1. Use smaller light: "Extreme UV", at 13.5nm, has an even shorter wavelength than normal UV. Kind of the obvious solution, but it's higher energy and changes the chemistry slightly, since the light is a different color. Things are getting mighty close to literal "X-Ray Lasers" as it is, so the power requirements are getting quite substantial.

2. Multipatterning -- Instead of developing the entire thing in one shot, do it in multiple shots, and "carefully line up" the chips between different shots. As difficult as it sounds, it's been done before at 40nm and other processes. (https://en.wikipedia.org/wiki/Multiple_patterning#EUV_Multip...)

3. Do both at the same time to reach 5nm, 4nm, or 3nm. 10nm/7nm is the point where the various companies had to decide whether to do #1 or #2 first; either way, your company needs to learn to do both in the long term. TSMC and Samsung went with #1 (EUV), and I think Intel thought that #2 (multi-patterning) would be easier.

And the rest is history. Seems like EUV was easier after all, and TSMC / Samsung's bets paid off.

Mind you, I barely know any of the stuff I'm talking about. I'm not a physicist or chemist. But the above is my general understanding of the issues. I'm sure Intel had their reasons to believe why multipatterning would be easier. Maybe it was easier, but other company issues drove away engineers and something unrelated caused Intel to fall behind.
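To put rough numbers on the resolution limits described above, here's a back-of-envelope sketch using the Rayleigh criterion; the wavelengths are the real ones, but the NA and k1 values are assumed, typical figures rather than any fab's actual parameters:

    def min_half_pitch_nm(wavelength_nm, numerical_aperture, k1):
        """Rayleigh criterion: smallest half-pitch printable in a single exposure."""
        return k1 * wavelength_nm / numerical_aperture

    # 193 nm ArF immersion (assumed NA ~1.35, k1 ~0.30): ~43 nm half-pitch,
    # hence multi-patterning to print anything denser than that.
    print(f"193i single exposure: ~{min_half_pitch_nm(193, 1.35, 0.30):.0f} nm")

    # 13.5 nm EUV (assumed NA ~0.33, k1 ~0.40): ~16 nm half-pitch in one shot.
    print(f"EUV single exposure:  ~{min_half_pitch_nm(13.5, 0.33, 0.40):.0f} nm")

Under these assumed numbers, a single 193nm immersion exposure bottoms out around 40-ish nm, which is why the sub-wavelength nodes need either multiple exposures or the shorter EUV wavelength.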

erik · 4 years ago
For one, "7nm" and "10nm" are just marketing names at this point, and don't really correspond to the physical dimensions of anything on the chips produced. Intel 10nm and TSMC 7nm are considered to have very comparable density. TSMC 5nm is quite a bit ahead, though.

As to why it's not just a matter of buying a bunch of ASML lithography machines and plugging them in: In addition to what the other replies have noted, there is so much complexity and precision required in a fab. Consider all of the challenges that would be involved with starting with a bunch of industrial robots, and trying to build a fully automated assembly line that manufactures cars. Then scale precision requirements up by many orders of magnitude.

vzidex · 4 years ago
I'll take a crack at it, though I'm only in undergrad (took a course on VLSI this semester).

Making a device at a specific technology node (e.g. 14nm, 10nm, 7nm) isn't just about the lithography, although litho is crucial too. In effect, lithography is what allows you to "draw" patterns onto a wafer, but then you still need to do various things to that patterned wafer (deposition, etching, polishing, cleaning, etc.). Going from "we have litho machines capable of X nm spacing" to "we can manufacture a CPU on this node at scale with good yield" requires a huge amount of low-level design to figure out transistor sizings, spacings, and then how to actually manufacture the designed transistors and gates using the steps listed above.

datagram · 4 years ago
Like others have said, the numbers aren't comparable between manufacturers.

Here's a neat video where they use an electron microscope to actually compare the transistor sizes for Intel 14nm and AMD 7nm: https://www.youtube.com/watch?v=1kQUXpZpLXI

mqus · 4 years ago
TSMC's 7nm is roughly equivalent to Intel's 10nm; the numbers don't really mean anything (anymore) and are not comparable.
nrp · 4 years ago
Intel tried and failed at jelly beans at least once in recent history: https://en.wikipedia.org/wiki/Intel_Quark

I'm not sure that is a worthwhile segment for them to try to compete in though. It's high mix, has a ton of competition, and doesn't necessarily leverage their fab strength.

csunbird · 4 years ago
Forgive me for my ignorance, but what is a "jelly bean" processor? Do you mean the "extremely cheap, almost no power consuming" processors?
sitkack · 4 years ago
If they price it right, it could be amazing. Computing is mostly about economics. The new node sizes greatly increase production capacity: halving the dimension in x and y gets you 4x the transistors on the same wafer. It is like building 4x the number of fabs.
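As a back-of-envelope illustration of that scaling claim (hypothetical die sizes, ignoring edge loss and yield):

    import math

    wafer_area_mm2 = math.pi * (300 / 2) ** 2    # 300 mm wafer

    for side_mm in (20, 10):                     # halve the die dimension in x and y
        dice = wafer_area_mm2 // (side_mm ** 2)
        print(f"{side_mm} mm x {side_mm} mm die -> ~{int(dice)} dice per wafer")
    # The 10 mm die fits roughly 4x as many dice on the same wafer as the 20 mm die.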

It also has speed and power advantages.

I think this release is excellent news on many levels.

Retric · 4 years ago
Intel 10nm is really just a marketing term at this point and has nothing to do with transistor density.
paulpan · 4 years ago
TLDR from Anandtech is that while this is a good improvement over previous gen, it still falls behind AMD (Epyc) and ARM (Altra) counterparts. What's somewhat alarming is that on a per-core comparison (28-core 205W designs), the performance increase can be a wash. Doesn't bode well for Intel as both their competitors are due for refreshes that will re-widen the gap.

The key question will be how quickly Intel will shift to the next architecture, Sapphire Rapids. Will this release be like the consumer/desktop Rocket Lake, i.e. just a placeholder to essentially volume-test the 10nm fabrication for the datacenter? It's probably at least a year out at this point, since Ice Lake SP was originally supposed to be released in 2H2020.

gsnedders · 4 years ago
> The key question will be how quickly Intel will shift to the next architecture, Sapphire Rapids. Will this release be like the consumer/desktop Rocket Lake, i.e. just a placeholder to essentially volume-test the 10nm fabrication for the datacenter? It's probably at least a year out at this point, since Ice Lake SP was originally supposed to be released in 2H2020.

Alder Lake is meant to be a consumer part contemporary with Sapphire Rapids, which is server only. They're likely based on the same (performance) core, with Alder Lake additionally having low-power cores.

Last I heard the expectation was still that these new parts would enter the market at the end of this year.

CSSer · 4 years ago
Lately Intel seems to be getting a lot of flak here. As a layperson in the space who's pretty out of the loop (I built a home PC about a decade ago), could someone explain to me why that is? Is Intel really falling behind, or dressing up metrics to mislead, or something like that? I also partly ask because I feel that I only superficially understand why Apple ditched/is ditching Intel, although I understand if that is a bit off-topic for the current article.
mhh__ · 4 years ago
Intel's processes (i.e. turning files on a computer into chips) have been a complete disaster in recent years, to the point of basically missing one of their key die shrinks entirely as far as I can tell.

They are, in a certain sense, suffering from their own success, in that their competitors were basically nonexistent until Zen came about (and even then it was only with Zen 3 that Intel was truly knocked off its single-thread perch). This has led to them getting cagey, and a bit ridiculous, in the sense that they are not only backporting new designs to old processes but also pumping them up to genuinely ridiculous power budgets. With Apple, AMD, and TSMC, they have basically been caught with their trousers down by younger and leaner companies.

Ultimately this is where Intel need good leadership. The MBA solution is to just give up and do something else (e.g. spin off the fabs), but I think they should have the confidence (as far as I can tell this is what they are doing) to rise to the technical challenge. They will probably never have a run like they did from Nehalem until shortly before now, but throwing in the towel means the probability is zero.

Intel have been in situations like this before. When Itanium was clearly doomed and AMD was doing well (amd64), they came back with new processors and basically ran away to the bank for years; AMD's server market share is still pitiful compared to Intel's (10% at most), for example.

martinald · 4 years ago
Yeah, it feels like Intel is sort of in the Pentium 4 days again, but without the Core microarchitecture sitting in the wings, and with way more well-funded competition, not just AMD.

It's hard to see what they can do. AMD is winning on every measure I can see on x86, and M1/Ampere feel like a huge curveball thrown at them. I don't even think they could switch to using TSMC fabs to help them shrink the process size as all the capacity is booked up for years (especially by Apple).

I also think the dynamics of the server market have changed in the last 10 years or so. You now have a much more concentrated market, with big cloud operators buying millions of CPUs and having much more leverage than before.

I would be surprised if, by the end of the 2020s, ARM wasn't standard for nearly all server workloads. Most software can run on it unmodified now: databases all work great on ARM, and most applications are interpreted or run on a VM, so it is pretty easy to move.

Symmetry · 4 years ago
I don't want to counsel despair, but I'm not as sanguine as you either. Intel has had disastrous microarchitectures before: Itanium, P4, and earlier ones. But it's never had to worry about recovering from a process disaster before. It might very well be able to, but I worry.
ac29 · 4 years ago
> Intel's processes (i.e. turning files on a computer into chips) have been a complete disaster in recent years, to the point of basically missing one of their key die shrinks entirely as far as I can tell.

Which one? I don't believe they missed a die shrink; it just took a long time. Intel 14nm came out in 2014 with their Broadwell processors, and the next node, 10nm, came out in 2019 (technically 2018, but very few units shipped that year).

jimbob21 · 4 years ago
Yes, quite simply they have fallen behind while also promising things they have failed to deliver. As an example, their most recent flagship release is the 11900K, which has 2 fewer cores (now 8) than its predecessor (the 10900K had 10), and almost no improvement to speak of otherwise (in some games it's ~1% faster). On the other hand, AMD's flagship, which to be fair is $150 more expensive, has 16 cores, very similar clock speeds, and is much more energy efficient (Intel and AMD calculate TDP differently). Overall, AMD is the better choice by a large margin, and Intel is getting flak because it rested on its laurels for the last decade(?) and hasn't done anything to improve itself.

To put it in numbers alone, look at this benchmark. Flagship vs Flagship: https://www.cpubenchmark.net/compare/Intel-i9-11900K-vs-AMD-...

formerly_proven · 4 years ago
Naturally the 11900K performs quite a bit worse than the 10900K in anything which uses all cores, but the remarkable thing about the 11900K is that it even performs worse in a bunch of game benchmarks, so as a product it genuinely doesn't make any sense.
ineedasername · 4 years ago
They can't get their next-gen fabs (chip factories) into production. It's been a problem long enough that they're not even next-gen anymore: it's current-gen, about to be previous-gen.

So what you're seeing isn't really anti-Intel, it's probably often more like bitter disappointment that they haven't done better. Though I'm sure there's a tiny bit of fanboy-ism for & against Intel.

There's definitely some of that pro-AMD fanboy sentiment in the gaming community where people build their own rigs: AMD chips are massively cheaper than a comparable Intel chip.

M277 · 4 years ago
Just a minor nitpick regarding your last paragraph, this is no longer the case. Intel is now significantly cheaper after they heavily cut prices across the board.

For instance, you can now get an i7-10700K (which is roughly equivalent in single-thread and better in multi-thread performance) for cheaper than an R5 5600X.

MangoCoffee · 4 years ago
>So what you're seeing isn't really anti-Intel, it's probably often more like bitter disappointment that they haven't done better.

It's back to where everyone designs their own chips for their own products but doesn't need a fab, because of foundries like TSMC and Samsung.

blackoil · 4 years ago
A perfect storm. Intel had trouble with its 10nm/7nm process engineering, which TSMC has been able to achieve. AMD had a resurgence with the Zen architecture, and ARM/Apple/TSMC/Samsung put hundreds of billions into catching up with x86 performance.

Intel is still the biggest player in the game, because even though they are stuck at 14nm, AMD isn't able to manufacture enough to take bigger chunks of the market. Apple won't sell into the PC/datacenter space, and the rest are still niche.

ac29 · 4 years ago
> even though they are stuck at 14nm

I think this isn't quite fair: their laptop 10nm chips have been shipping in volume since last year, and their server chips were released today, with 200k+ units already shipped (according to Anandtech). The only line left on 14nm is socketed desktop processors, which is a relatively small market compared to laptops and servers.

tyingq · 4 years ago
Lots of shade because they first missed the whole mobile market, then got beat by AMD Zen by missing the chiplet concept and a successful current-gen process size, then finally also overshadowed by Apple's M1. The M1 thing is interesting, because it likely means the next set of ARM Neoverse CPUs for servers, from Amazon and others, will be really impressive. Intel is behind on many fronts.
mhh__ · 4 years ago
>likely means the next set of ARM Neoverse CPUs from Amazon and others will be really impressive

M1 is proof that it can be done; however, you can absolutely make a bad CPU for a good ISA, so I wouldn't take it for granted.

s_dev · 4 years ago
>Is Intel really falling behind

Intel is already behind AMD -- they have no product segment where they are absolutely superior. That means AMD is setting the market pace.

On top of this Apple is switching to ARM designed CPUs. This also looks to be a vote of no confidence in Intel.

The consensus seems to be that Intel -- who have their own fabs -- never really nailed anything under 14nm and are now being outcompeted.

meepmorp · 4 years ago
Apple designs its own chips; it doesn't use ARM's designs. They do use the ARM ISA, tho.
totalZero · 4 years ago
> Intel is already behind AMD -- they have no product segment where they are absolutely superior.

There are some who would dispute this claim, but I think it's at least a defensible one.

Still, availability is an important factor that isn't captured by benchmarking. AMD has had CPU inventory trouble in the low-end laptop segment and high-end desktop segment alike.

> The consensus seems to be that Intel -- who have their own fabs -- never really nailed anything under 14nm and are now being outcompeted.

Intel has done well with 10nm laptop CPUs. They were just very late to the party. Desktop and server timelines have been quite a bit worse. I agree Intel did not nail 10nm, but they're definitely hanging in there. It's one process node at the cusp of transition to EUV, so some of the defeatism around Intel may be overzealous if we keep in mind that 7nm process development has been somewhat parallel to 10nm because of the difference in the lithographic technology.

JohnJamesRambo · 4 years ago
https://jamesallworth.medium.com/intels-disruption-is-now-co...

I think that summarizes it pretty well in that one graph.

chx · 4 years ago
Absolutely. Intel has been stuck on the 14nm node for a very, very long time. 10nm CPUs were supposed to ship in 2015; they really only shipped in late 2019/2020. Meanwhile AMD caught up, and Intel has been doing the silliest shenanigans to appear competitive, like in 2018 when they demonstrated a 28-core 5GHz CPU and kinda forgot to mention the behind-the-scenes one-horsepower (~745W) industrial chiller keeping that beast running.

Also, the first 10nm "Ice Lake" mobile CPUs were not really an improvement over the by-then many-times-refined 14nm "Comet Lake" chips. It's been a faecal pageant.

matmatmatmat · 4 years ago
Some of the other comments above have touched on this, but I think there is also a bit of latent anti-Intel sentiment in many people's minds. Intel extracted a non-trivial price premium out of consumers for many, many years (both for chips and by forcing people to upgrade motherboards by changing CPU sockets) while AMD could only catch up to them for brief periods of time. People paid that price premium for one reason or another, but it doesn't mean they were thrilled about it.

Many people, I'd say especially enthusiasts, were quite happy when AMD was able to compete on a performance/$ basis and then outright beat Intel.

Of course, now the tables have turned and AMD is able to extract that price premium while Intel cut prices. Who knows how long this will last, but Intel is still the 800 lb gorilla in terms of capacity, engineering talent, and revenue. I don't think we've heard the last from them.

yoz-y · 4 years ago
Intel was unable to improve their fabrication process year after year, while repeatedly promising to do so. Now they have been practically lapped twice. Apple has a somewhat specific use case, but their CPUs have significantly better performance per watt.
jvanderbot · 4 years ago
From Anandtech[1]:

"As impressive as the new Xeon 8380 is from a generational and technical stand-point, what really matters at the end of the day is how it fares up to the competition. I’ll be blunt here; nobody really expected the new ICL-SP parts to beat AMD or the new Arm competition – and it didn’t. The competitive gap had been so gigantic, with silly scenarios such as where a competing 1-socket systems would outperform Intel’s 2-socket solutions. Ice Lake SP gets rid of those more embarrassing situations, and narrows the performance gap significantly, however the gap still remains, and is still undeniable."

This sounds about right for a company fraught with so many process problems lately: play catch-up for a while and hope you experience fewer problems in the future so you can continue to narrow the gap.

"Narrow the gap significantly" sounds like good technical progress for Intel. But the business message isn't wonderful.

1. https://www.anandtech.com/show/16594/intel-3rd-gen-xeon-scal...

ajross · 4 years ago
I don't know that it's all so bad. The final takeaway is that a 660mm2 Intel die at 270W got about 70-80% of the performance that AMD's 1000mm2 MCM gets at 250W. So performance per transistor is similar, but per watt Intel lags. But then the idle draw was significantly better (AMD's idle power remains a problem across the Zen designs), so for many use cases it's probably a draw.
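Roughly, the ratios behind that comparison look like this (figures taken from the comment above, with the 70-80% collapsed to a 0.75 midpoint; as the replies below note, MCM area isn't directly comparable to a monolithic die):

    perf_ratio = 0.75                # Intel at ~70-80% of AMD's performance (midpoint)
    intel_w, amd_w = 270, 250        # package power, W
    intel_mm2, amd_mm2 = 660, 1000   # monolithic die vs. total MCM area, mm^2

    print(f"perf/W,    Intel relative to AMD: {perf_ratio * amd_w / intel_w:.2f}x")      # ~0.69x
    print(f"perf/area, Intel relative to AMD: {perf_ratio * amd_mm2 / intel_mm2:.2f}x")  # ~1.14x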

That sounds "competitive enough" to me in the datacenter world, given the existing market lead Intel has.

ComputerGuru · 4 years ago
I would argue that for high-end servers, idle draw is a bit of a non-issue: presumably either you have only one of these machines and it's sitting idle (so no matter how inefficient it is, it doesn't matter), or you have hundreds/thousands of them and they'll be as far from idle as it's possible to be.

AMD’s idle power consumption is a bigger issue for desktop, laptop, and HEDT.

monocasa · 4 years ago
You can't really compare die sizes of an MCM and a single die and expect to get transistor counts out of that. So much of the area of the MCM is taken up by all the separate PHYs to communicate between the chiplets and the I/O die, and the I/O die itself is on GF 14nm (about equivalent to Intel 22nm) last time I checked, not a new competitive logic node.

There's probably a few more gates still on the AMD side, but it's not the half again larger that you'd expect looking at area alone.

Symmetry · 4 years ago
I'm not sure that's a fair area comparison? AMD only has around 600 mm2 of expensive leading edge 7nm silicon and uses chiplets to up their yields. The rest is the connecting bits from an older and cheaper process. Intel's full size is a single monolithic die on a leading edge process.
marmaduke · 4 years ago
It's impressive how you and parent comment copied over to/from the dupe posting verbatim.

edit oops nevermind, I see my comment was also mysteriously transported from the dupe.


jvanderbot · 4 years ago
Furthermore:

"At the end of the day, Ice Lake SP is a success. Performance is up, and performance per watt is up. I'm sure if we were able to test Intel's acceleration enhancements more thoroughly, we would be able to corroborate some of the results and hype that Intel wants to generate around its product. But even as a success, it’s not a traditional competitive success. The generational improvements are there and they are large, and as long as Intel is the market share leader, this should translate into upgraded systems and deployments throughout the enterprise industry. Intel is still in a tough competitive situation overall with the high quality the rest of the market is enabling."

jandrese · 4 years ago
I found it a little weird that the conclusions section didn't mention the AMD or ARM competition at all, given that the Intel chip seemed to be behind them in most of the tests.
quelsolaar · 4 years ago
>This sounds about right for a company fraught with so many process problems lately

The problems have only become public recently, but the things that caused them happened much further back.

I'm cautiously bullish on Intel. From what I gather, Intel is in a much better place internally. They have much better focus, there is less infighting, it's more engineering-led than sales-led, they have some very good people, and they are no longer complacent. It will, however, take years before this becomes visible from the outside.

Given the demand for CPUs and the competition's inability to deliver, I think Intel will do OK while they try to catch up, even if they are no one's first choice of CPU vendor.

ksec · 4 years ago
It is certainly good enough to compete: prioritise fab capacity for server units and lock in those important (swaying) deals from clients. Sales and marketing can work their connections, along with the software tools that the HPC market needs and where, AFAIK, Intel is still far ahead of AMD.

And I can bet those prices have lots of room for special discounts to clients. Since RAM and NAND storage dominate the cost of a server, the difference between Intel and AMD shrinks rapidly in the grand scheme of things, giving Intel a chance to fight. And there is something not mentioned enough: the importance of PCIe 4.0 support.

I wanted to rant about AMD, but I guess there is not much point. ARM is coming.

marmaduke · 4 years ago
Nice to see that AVX512 hasn't died with Xeon Phi. I see it coming to a number of high-end but lightweight notebooks too (Surface Pro with the i7 10XXG7, MacBook Pro 13" likewise). This is a nice way to avoid needing a GPU for heavily vectorizable compute tasks, assuming you don't need the CUDA ecosystem.
dragontamer · 4 years ago
GPGPU will never really be able to take over CPU-based SIMD.

GPUs have far more bandwidth, but CPUs beat them in latency. Being able to AVX512 your L1 cached data for a memcpy will always be superior to passing data to the GPU.

With Ice Lake's 1MB L2 cache, pretty much any task smaller than 1MB will be better served by AVX512 than by sending it to a GPU. Sorting 250,000 float32 elements? Better to SIMD bitonic sort / SIMD Mergepath (https://web.cs.ucdavis.edu/~amenta/f15/GPUmp.pdf) with your AVX512 than to spend a 5us PCIe 4.0 traversal to the GPU.

It is better to keep the data hot in your L2 / L3 cache, rather than pipe it to a remote computer (even if the 16x PCIe 4.0 pipe is 32GB/s and the HBM2 RAM is high bandwidth once it gets there).
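A rough back-of-envelope for that 250,000-float example, using the link numbers from the comment (32 GB/s for PCIe 4.0 x16 and ~5 µs per traversal, both taken as round assumed figures):

    n = 250_000
    payload_bytes = n * 4                      # float32 -> ~1 MB

    pcie_bw_bytes_per_s = 32e9                 # ~32 GB/s, PCIe 4.0 x16
    pcie_latency_s = 5e-6                      # ~5 us per traversal

    # Send the data to the GPU and get the sorted result back.
    round_trip_s = 2 * (pcie_latency_s + payload_bytes / pcie_bw_bytes_per_s)
    print(f"~{payload_bytes / 1e6:.0f} MB payload, ~{round_trip_s * 1e6:.0f} us round trip")
    # A cache-resident AVX-512 sort only has to finish inside that ~70 us budget
    # to win, before the GPU has even touched the data.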

--------

But similarly: CPU SIMD can never compete against GPGPUs at what they do. GPUs have access to 8GB @ 500GB/s VRAM on the low end and 40GB @ 1000GB/s on the high end (NVidia's A100). EDIT: Some responses have reminded me about the 80GB @ 2000GB/s models NVidia recently released.

CPUs barely scratch 200GB/s on the high end, since DDR4 is just slower than GPU RAM. For any problem that fits inside GPU VRAM and where data bandwidth and parallelism are the bottleneck (such as long sequences of large-scale matrix multiplications), it will pretty much always be better to compute that sort of thing on a GPU.
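The bandwidth gap is simple arithmetic; this sketch assumes DDR4-3200 on an 8-channel server platform (an assumed configuration), and the GPU figures are the ones quoted in the comment above:

    channels, bytes_per_transfer, transfers_per_s = 8, 8, 3200e6
    cpu_gbps = channels * bytes_per_transfer * transfers_per_s / 1e9
    print(f"8-channel DDR4-3200: ~{cpu_gbps:.0f} GB/s")        # ~205 GB/s

    # GPU memory figures as quoted in the comment (approximate)
    for gpu, gbps in [("low-end GPU VRAM", 500), ("A100 40GB HBM2", 1000), ("80GB model", 2000)]:
        print(f"{gpu}: ~{gbps} GB/s, i.e. ~{gbps / cpu_gbps:.1f}x the CPU")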

volta83 · 4 years ago
> Being able to AVX512 your L1 cached data for a memcpy will always be superior to passing data to the GPU.

The two last apps I worked on have been GPU-only. The CPU process starts running and launches GPU work, and that's it, the GPU does all the work until the process exits.

There is no need to "pass data to the GPU" because data is never on CPU memory, so there is nothing to pass from there. All network and file I/O goes directly to the GPU.

Once all your software runs on the GPU, passing data to the CPU for some small task doesn't make much sense either.

celrod · 4 years ago
FWIW, the A64FX has 1TB/s bandwidth because it has 32GiB of HBM2.
marmaduke · 4 years ago
In my experience, the most important aspect missing from most CPU vs GPU discussions is that CPUs have a massive cache compared to GPUs, and that cache has pretty good bandwidth (~30 GB/s per core?), even if main memory doesn't. So even if your task's hot data doesn't fit in L2 but does fit in L3 per core, AVX-whatever per-core processing is a good bet regardless of what a GPU can do.

Another aspect that seems like a hidden assumption in CPU-GPU discussions is that you have the time-energy-expertise budget to (re)build your application to fit GPUs.

ajross · 4 years ago
FWIW: your DRAM numbers are quoting clock speeds and not bandwidth. They aren't linear at all. In fact with enough cores you can easily saturate memory that wide, and CPUs are getting wider just as fast as GPUs are. The giant Epyc AMD pushed out last fall has 8 (!) 64 bit DRAM channels, where IIRC the biggest NVIDIA part is still at 6.
bitcharmer · 4 years ago
AVX-512 is an abomination in my field and we avoid it like the plague. It looks like we're not the only ones. Linus has a lot to say about it as well.

https://www.phoronix.com/scan.php?page=news_item&px=Linus-To...

aseipp · 4 years ago
Skylake-X has already had its die shots examined, and the AVX-512 register file, the dominant part of the layout, is something like .5 of a single core, so deleting it wouldn't even buy you much area for anything; the whining by Linus about how it would be better spent on extra cores is totally overblown. Ice Lake has also dramatically improved per-core frequency tuning for client SKUs, to the point that AVX-512 is quite viable on my laptop with no serious problems; a single thread doing something isn't going to tank anything. Ice Lake-X almost certainly has 2 FMAs instead of the 1 FMA of client SKUs, however, so it'll be interesting to see what the new power licensing situation is, but this is clearly something they've had on the books to improve.

The problem is that for the workloads that need specialization, you sometimes really need it. You could delete the vectorized AES units in your Intel machines too, and general-purpose performance wouldn't be affected much, but cryptographic performance specifically would tank, and it turns out that matters a lot in aggregate for many people.

Ultimately there are literally dozens of specialized inactive units on any CPU at any given time that could be "better spent on general purpose units" (which also isn't necessarily true if other architectural choices prevent those units from being utilized effectively). People just like complaining about AVX-512 because it's easily digestible water cooler chat they read about on a blog.

37ef_ced3 · 4 years ago
For example, AVX-512 neural net inference: https://NN-512.com

Only interesting if you care about price (dollars spent per inference)

For raw speed (no matter the price) the GPU wins

api · 4 years ago
The 2020 Intel MacBook Air and 13" Pro have 10nm Ice Lake with AVX512. The Ice Lake MacBook Air performs pretty well and very close to the Ice Lake Pro, though of course the M1 destroys it.
mhh__ · 4 years ago
> though of course the M1 destroys it.

SIMD throughput?


totalZero · 4 years ago
Key takeaway for me:

"As impressive as the new Xeon 8380 is from a generational and technical stand-point, what really matters at the end of the day is how it fares up to the competition. I’ll be blunt here; nobody really expected the new ICL-SP parts to beat AMD or the new Arm competition – and it didn’t. The competitive gap had been so gigantic, with silly scenarios such as where a competing 1-socket systems would outperform Intel’s 2-socket solutions. Ice Lake SP gets rid of those more embarrassing situations, and narrows the performance gap significantly, however the gap still remains, and is still undeniable."

lifeisstillgood · 4 years ago
This might be a very dumb question, but it has always bothered me: silicon wafers are always shown as big circles, but processor dies are obviously square. Yet it looks like the etching etc. goes right to the circular edges; wouldn't it be better to leave that dead space untouched?
w0utert · 4 years ago
Most semiconductor production processes like etching, doping, polish etc are done on the full wafer, not on individual images/fields. So there is nothing to be gained there in terms of production efficiency.

The litho step could in theory be optimized by skipping incomplete fields at the edges, but the reduction in exposure time would be relatively small, especially for smaller designs that fit multiple chips within a single image field. I imagine it would also introduce yield risk because of things like uneven wafer stress and temperature, higher variability in stage move time when stepping edge fields vs center fields, etc.

pas · 4 years ago
I think these are just press/PR wafers and real production ones don't pattern the edge. (First of all it takes time, and in the case of EUV it wears the machine out even faster, because every shot damages the "optical elements" a bit.)

edit: it also depends on how many dies the mask (reticle) has on it. Intel uses single-die reticles, so in theory their real wafers have no situation in which they have partial dies at the edge.

dogma1138 · 4 years ago
Real wafers have the chip patterning going right to the edge, and this can, and often does, result in partial dies.

There is no reason not to do this, and the edges are also often used for calibration and (potentially) destructive testing.

The only area of the wafer that will not be exposed is the notch; it's always on one side of the circle and it's used for moving the wafer around, which is why you often see wafers with one of the sides cut off, giving it the flat-tire shape.
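For a rough sense of how much edge loss that implies, here's a sketch using the common dies-per-wafer approximation, with the ~660 mm² die size mentioned elsewhere in the thread; the formula is a standard estimate, not any fab's actual numbers:

    import math

    def dies_per_wafer(wafer_d_mm=300, die_area_mm2=660):
        """Gross die positions minus a standard edge-loss correction term."""
        gross = math.pi * (wafer_d_mm / 2) ** 2 / die_area_mm2
        whole = gross - math.pi * wafer_d_mm / math.sqrt(2 * die_area_mm2)
        return gross, whole

    gross, whole = dies_per_wafer()
    print(f"~{gross:.0f} die positions, ~{whole:.0f} whole dice, "
          f"~{gross - whole:.0f} partial edge positions")

So at a die that size, roughly a quarter of the positions end up partial along the edge, which is part of why that ring is handy for calibration and test structures.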

andromeduck · 4 years ago
Many of the process steps involve rotation so this is impractical.
Sephr · 4 years ago
As disappointing as the perf is for server workloads, what I'm really interested in is SLI gaming performance. I can imagine that this would be a boon for high end gaming with multiple x16 PCIe 4.0 slots and 8 DDR4 channels.

SLI really shines on HEDT platforms, and this is probably the last non-multi-chip quasi-HEDT CPU for a while with this kind of IO.

(Yes, I know SLI is 'dead' with the latest generation of GPUs)

zamadatix · 4 years ago
These would be absolute trash for SLI performance vs. top-end standard consumer desktop parts. The best SKU has a peak boost clock of 3.7 GHz, the core-to-core latencies are about twice as high as on the desktop parts, and the memory + PCIe bandwidth means little to nothing for gaming performance (remember SLI bandwidth goes over a dedicated bridge as well), which is highly sensitive to latencies instead.