dougmwne · 6 years ago
From a quick read, in case anyone is wondering, this is built on the latest-gen 10nm process, which is a refinement of the previous 10nm process. Intel's goal was to focus on clock increases, so instructions per clock are apparently not much changed. From the tables, the whole line of processors appears to have nicely improved single-core max clock speeds, which should help single-core performance, something that hasn't been increasing much lately.

Anecdote: a family member and software dev just bought the latest macbook pro coming from about an 8 year old macbook pro. Apparently he doesn't perceive much real world performance difference between the 2 machines. He said single core synthetic benchmarks were higher, but not by much.

PragmaticPulp · 6 years ago
> Anecdote: a family member and software dev just bought the latest macbook pro coming from about an 8 year old macbook pro. Apparently he doesn't perceive much real world performance difference between the 2 machines. He said single core synthetic benchmarks were higher, but not by much.

It really depends on the workload. My old MacBook Pro feels the same as the newer model 98% of the time that I'm using it, but I really appreciate the extra RAM and double the core count when I need it.

Reducing build times from 30 seconds down to 15 seconds doesn't sound like much when you're not pressing the compile button very often, but it really does help improve my engagement and focus.

The newer graphics cards are also much better at handling large and high-resolution external monitors. The difference isn't pronounced if you're just using the built-in laptop screen, but start using multiple external 4K+ monitors and the faster GPU starts to shine.

parsimo2010 · 6 years ago
Also anecdotally, I'd say the only things that would make a clearly noticeable performance difference to a layperson or casual computer user from the last few years would be:

1. If they previously had less than 4GB of memory and were often swapping

2. If they previously had a spinning HDD and moved to an SSD

3. If they previously had a really low end CPU like something outside the Core range, and moved to an i5 or better

Beyond that, all the other stuff is either a marginal performance gain or a convenience feature. The only other "must have" thing from the last few years that I wouldn't want to lose is a high res monitor, but that falls into the convenience category for me and not performance.

Power users like your family member got quantifiable benefit from performance upgrades over the years, but if they were already running decent hardware like a Macbook Pro, they didn't get any life altering upgrades. Taking 15 seconds to compile vs 30 is definitely faster, but you still have to sit and wait each time you push the button, so any habits that you learned from the 30 second pause will probably continue.

deergomoo · 6 years ago
> but start using multiple external 4K+ monitors and the faster GPU starts to shine

And heat up, unfortunately.

I'm completely fine with it getting loud and toasty when I push it, but the 5300M and 5500M in the 16" MacBook Pro ramp their memory clocks up to max as soon as two displays are on, even when idling. This results in a constant ~20W power draw that destroys the battery life, has the fans constantly audible, and makes the top case above the keyboard uncomfortably hot.

Apparently similar issues in desktop AMD cards have been addressed via driver updates, but considering whoever is responsible for updating Mac GPU drivers never actually does it, I'm not holding my breath.

tashoecraft · 6 years ago
Work swapped out my mid-2015, 2-core, 16GB machine with a 2019 8-core, 32GB, and this machine is night-and-day better. It's so much faster at every task I throw at it.
ithkuil · 6 years ago
I concur. The machine doesn't "feel" much faster because of the OS, but raw computation tasks like compilation are noticeably faster
burntcookie90 · 6 years ago
What is the difference between the storage drives?
srtjstjsj · 6 years ago
In 2015 the 15" was 4-core.
joakleaf · 6 years ago
I agree. I don't feel much difference between the 15" MBP i7 4-core from 2014 and the i9 8-core from 2019. The i9 really isn't noticeably faster on a single core. Instead it gets hotter and its fans start sooner. So I recommend keeping the old one if you have it.
iKevinShah · 6 years ago
Wouldn't newer ones be more power-efficient than the older one? I remember reading a post on HN where the OP bought older, higher-spec hardware for cheaper than the latest model, and almost all the comments pointed to power efficiency
foobiekr · 6 years ago
It doesn't help that Catalina seems to have some serious performance regressions.

As one of many examples, set up an SMB3 connection to a network-attached storage device, then write a short program with two or three threads that walk a big directory calling stat and lstat on every file. For large, complex file hierarchies, Catalina (unlike Mojave) will start stuttering the entire UI and media playback.

"rclone" with --transfers=3 can bring a 2019 mbp to its knees, like Windows 3.1 writing to a floppy disk.

Spooky23 · 6 years ago
I run a pretty big VDI environment — my observation is that the latest-gen stuff on Xeon is about the same as the Haswell-era gear, and slower for some use cases.

We’re actually planning on running that gear longer and moving people with more memory needs to newer gear with more memory — it’s all about the maintenance cost curve. We can get much more memory now with the same density, which offsets the CPU suckage.

ci5er · 6 years ago
Is that network constrained?
whynotminot · 6 years ago
FWIW, I upgraded to one of the 2020 MBP Ice Lake machines. I've seen much, much improved CPU and GPU performance that makes my daily life better.

I think much of the increase comes from going from dual-core to quad-core, and perhaps not as much from the single-core gains, but still, I'd be curious what kind of software he's developing to not see much improvement.

If he's doing compiling or web-dev transpiling kinda stuff, I'm really surprised he's not seeing better gains.

dr_zoidberg · 6 years ago
I recently got a chance to use a 1005G1 (a lower-end i3 in the Ice Lake family) and was quite surprised. I had it in my hands a bit more than a week before we had to send it back due to issues (the SSD "died" and resurrected 3 times in 10 days; it was definitely faulty).

It felt definitely faster than the 7200U I keep using, though I didn't get much of a chance to benchmark both. I also noticed that, despite having a much weaker battery (some 36 Wh mess), it managed to pull 6 hours on a single charge, by virtue of dropping to 1.2 GHz as fast as it could (though it quickly climbed to 3.4 GHz when required).

fomine3 · 6 years ago
It's not well known due to Intel's fab failures, but Ice Lake is a huge improvement in IPC and perf/watt.
sitkack · 6 years ago
Just as we moved away from clock speed as the signal of computational power, the XXnm process of a CPU isn't something the user should use as a selection factor.

Dollar cost per compute, watt-hours per compute, latency, I/O bandwidth, performance on real-world workloads. The process size is a red herring.

Any increases in single node performance are not red herrings and should be called out. This is excellent news.

SkyPuncher · 6 years ago
> Anecdote: a family member and software dev just bought the latest macbook pro coming from about an 8 year old macbook pro. Apparently he doesn't perceive much real world performance difference between the 2 machines. He said single core synthetic benchmarks were higher, but not by much.

I'm in a not dissimilar position with my (older) personal laptop and my (newish) work laptop. Two things I notice:

* Battery life. My work laptop lasts longer on battery because it uses less power to do the same amount of work. Not really a problem 99% of the time when I'm plugged into an outlet.

* Compile times. Work laptop is much faster at compiling, but this is really only noticeable on cold builds. Incremental builds are fast enough in both cases.

macjohnmcc · 6 years ago
This is why I haven't plunked down the money to go to the 2019 model when my early 2013 model isn't quite half as fast. The only thing I will be missing is that this laptop won't be able to run the new OS coming out in the fall.
srtjstjsj · 6 years ago
Since 2017 (and much worse in the 2019 version) the 15" MBP dGPU overheats whenever it is in use (including whenever an external monitor is used) due to a driver bug. Stay away.
lelandbatey · 6 years ago
On the topic of noticing performance improvements, I'd definitely say that we've plateaued as far as "general snappiness" in all aspects but one: high-refresh-rate, low-latency displays.

I recently upgraded my monitor from a Dell U3014 to an OMEN 27i (it's gaudy and "gamer" oriented, but it was the best I could get in an afternoon at a physical store). That's an upgrade from a 60 Hz monitor to a 165 Hz monitor, plus a significant (though not yet numerically measured) drop in input delay, some tens of milliseconds.

This has been the biggest improvement in "snappiness" that I've experienced since I moved to solid-state storage 10 years ago.
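A back-of-the-envelope on where those tens of milliseconds come from: the frame interval alone shrinks by about 10 ms, before counting the panel's own pixel-response and processing delays.

```python
# Frame interval accounts for roughly 10 ms of the difference by itself.
def frame_time_ms(refresh_hz):
    return 1000.0 / refresh_hz

print(f"60 Hz:  {frame_time_ms(60):.1f} ms/frame")   # ~16.7 ms
print(f"165 Hz: {frame_time_ms(165):.1f} ms/frame")  # ~6.1 ms
print(f"saved per frame: {frame_time_ms(60) - frame_time_ms(165):.1f} ms")
```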

gruez · 6 years ago
>He said single core synthetic benchmarks were higher, but not by much.

Really? The i7-9750H (from 2020) scores more than twice as high in Cinebench single-threaded as the i7-3615QM (from 2012)

https://hwbot.org/benchmark/cinebench_-_r15/rankings?hardwar...

https://hwbot.org/benchmark/cinebench_-_r15/rankings?hardwar...

vijaybritto · 6 years ago
For me, the perceived performance improvements have always come slowly. The only mind-blowing perf improvement was when I first used an SSD and Thunderbolt/USB-C. Other things like RAM and CPU are hard to notice unless we're doing computation-heavy tasks like video editing.

I also notice Windows starting up faster than it used to, but I know it's because of the SSDs and caching. These 20% and 30% YoY improvements simply aren't worth investing in because the returns are negligible for me.

brandonmenc · 6 years ago
> Anecdote: a family member and software dev just bought the latest macbook pro coming from about an 8 year old macbook pro. Apparently he doesn't perceive much real world performance difference between the 2 machines. He said single core synthetic benchmarks were higher, but not by much.

I just upgraded my 13" MBP from a 2017 i7 to the 2020 i7.

Single core benchmarks are basically the same but it's also running at 2/3 the clock speed, so much cooler. Plus two additional cores.

chx · 6 years ago
There's only a 15-20% IPC increase from Ivy Bridge (which a 2012 MacBook Pro would've used) to Skylake, and a similar increase again to Ice Lake, but the clock has actually dropped from 2.5 to 2.0 GHz, at least on the 13" side (at least that's what everymac tells me). Only the core count shows an increase, and their use case might not use more than two cores.
throwaway4good · 6 years ago
I did a similar shift. I would say the new one is quite a bit faster.
rpastuszak · 6 years ago
And much hotter! I’m happy with the purchase but sometimes the machine gets so hot I can barely use it.
agumonkey · 6 years ago
What about power consumption? And iGPU perf?
qes · 6 years ago
> a family member and software dev just bought the latest macbook pro coming from about an 8 year old macbook pro. Apparently he doesn't perceive much real world performance difference between the 2 machines.

I'd say the same thing about upgrading from an i7-6800K (2016 Broadwell-E) to a Threadripper 3960X. Barely a difference (my main work is a ~250k LOC C# solution, plus a similarly medium-ish sized Angular app).

marmaduke · 6 years ago
Maybe your build systems work incrementally or otherwise don’t use 4x the cores? A Ryzen with the higher clock speed might have been a better choice.
sdflhasjd · 6 years ago
I went from a i7-4770k to an AMD 3900X and my workload is almost identical. The difference has been absolutely night-and-day
danudey · 6 years ago
> Anecdote: a family member and software dev just bought the latest macbook pro coming from about an 8 year old macbook pro. Apparently he doesn't perceive much real world performance difference between the 2 machines. He said single core synthetic benchmarks were higher, but not by much.

Honestly, I think this speaks to the quality of Apple's hardware and engineering (8 years ago), combined with the lack of serious improvement from Intel lately.

That said, one thing I've noticed when doing IT and desktop support is that, unless someone's hardware is woefully inadequate, they won't notice a huge change going from slower to faster, but they will notice faster to slower after they've been using it for a week or two.

lend000 · 6 years ago
Not too interesting for those interested in high-performance desktop and server offerings. The fact that Intel's only new releases on 10nm are <=6-core, low-power products suggests their yields are still poor.

AMD did a really smart thing with their processor designs. Instead of a large monolithic multiprocessor like Intel's (which may be able to eke out some extra performance thanks to optimized wire placement), they use smaller chiplets. Each chiplet only needs to be free of manufacturing errors over a much smaller area of the die (and the probability of a defect-free die drops off exponentially as area increases). Even if AMD were using Intel's 10nm process, their yields would be better because of their modular designs, which get packaged together after lithography. Intel can still bin out some errors (the 10300-10900K are all the same chip with different parts turned off due to manufacturing errors), but the less modular design likely makes this less efficient.

That's my understanding, anyway -- correct me if I'm wrong.
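A rough sketch of why die area matters so much, using a simple Poisson defect model (the defect density and die areas below are made-up illustrative numbers, not Intel's or AMD's actual figures):

```python
import math

def poisson_yield(area_cm2, defect_density):
    """Fraction of defect-free dies under a simple Poisson defect model."""
    return math.exp(-area_cm2 * defect_density)

# Illustrative numbers only.
D = 0.5                               # defects per cm^2
monolithic = poisson_yield(6.0, D)    # one big die
per_chiplet = poisson_yield(0.75, D)  # one of eight small chiplets

# Each chiplet is tested individually, so a defect only scraps the
# small die it lands on, not the whole processor's worth of silicon.
print(f"monolithic yield:  {monolithic:.1%}")   # ~5%
print(f"per-chiplet yield: {per_chiplet:.1%}")  # ~69%
```

Real yield models are more elaborate (clustering, fusing off bad sections), but the exponential dependence on area is the core of the chiplet argument.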

kllrnohj · 6 years ago
> That's my understanding, anyway -- correct me if I'm wrong.

Since these are mobile processors, your understanding is incorrect. What you've described is what AMD did on the server and desktop side. But on mobile, to hit power-efficiency targets, AMD stuck with a single monolithic die. And it's a decently large one, at that: https://www.anandtech.com/show/15381/amd-ryzen-mobile-4000-m...

So no, AMD wouldn't yield better on Intel's 10nm+, not in this case. For desktop they would, as their desktop CPU dies are way smaller than their mobile CPU dies. But mobile die sizes are fairly equivalent between AMD and Intel.

celrod · 6 years ago
If I understood lend000 correctly:

> The fact that Intel's only new releases on 10nm are <=6 core low power products suggests their yields are still poor.

Their argument was that, had Intel followed AMD's approach, Intel would be able to produce desktop and server chips despite being yield-limited to "<= 6 cores" per chiplet.

disillusioned · 6 years ago
>(the 10300-10900k are all the same chip with different parts turned off due to manufacturing errors)

Wow. Is this a relatively new phenomenon, or something that's been the norm in processor manufacturing for a long time? I know the Ks were usually just binned/higher-tested chips, so they'd unlock those and charge a premium for them, but the idea that they're disabling parts of the die and segmenting the product line that way is crazy to me. Are the parts being disabled redundant, or are they reducing the instruction set? I don't know anything about how any of that would work.

cercatrova · 6 years ago
This price differentiation actually occurs across all products and companies. Look at SaaS: it's all the same software product, but features are selectively enabled for higher-paying customers.

Something that's different between software and hardware like chips is that the chips with parts disabled are disabled because those parts actually don't work. If only 2 out of 4 cores yield, it's easier to sell it as a dual-core CPU than to throw the chip away.

dr_zoidberg · 6 years ago
I remember hearing 10 years ago that the Phenom X3 was a 4-core part with one core dead from manufacturing (:

So no, nothing new under the sun.

leovailati · 6 years ago
I remember hearing something about the Cell processors on Playstation 3's being the ones with manufacturing errors. The 100% good ones were supposedly sold to the US military. I realize that second part was probably just a myth. Who knows, maybe it was all an internet myth! The PS3 was released in 2006, so the idea itself is not new.
tikkabhuna · 6 years ago
A few years ago we had fun with the AMD HD 6950 graphics card. From what I remember, it was more popular than the higher-specced 6970, so they simply detuned 6970s by deactivating shaders. Enthusiasts then published tools to re-enable those shaders, and some manufacturers even added features to help you do it (mine had a little switch to reset the firmware if you screwed it up).

https://www.zdnet.com/article/upgrade-your-radeon-hd-6950-to...

srtjstjsj · 6 years ago
The disabled parts are the broken parts.
jagannathtech · 6 years ago
insert the ohio astronauts meme

wait, it's all just binning?

it always has been.

;)

Voliokis · 6 years ago
It's been done for as long as I can remember.

https://en.m.wikipedia.org/wiki/Product_binning

IIRC, early Celerons were often just Pentiums with defective (and disabled) cache

Edit: https://www.anandtech.com/show/568/2

ajross · 6 years ago
> The fact that Intel's only new releases on 10nm are <=6 core low power products suggests their yields are still poor.

That logic seems backwards. If you have poor yields you tend to favor higher-margin products (i.e. datacenter CPUs). This is a laptop CPU intended to sell at significantly lower $/mm2.

As far as chiplets: multi-chip packages are an ancient idea, and the industry goes back and forth on them. Both Intel and AMD shipped multi-chip solutions way back in the day; the current age of integration is actually the anomaly. You win on yield but lose on package costs, and the decision of which to use depends on the specifics of the market you're trying to target.

Certainly AMD would prefer to ship single-chip solutions and pocket the savings, but they can't. Likewise Intel accepts some loss of scaling because of the need to share die designs across the product line.

ustolemyname · 6 years ago
The issue is the defect rate for larger chips. If only 25% of a small laptop CPU's chips are fully functional, a server chip 4x the size will yield only 0.39% defect-free chips.

I say this having no idea what Intel's defect rate is right now, and I acknowledge fusing of bad sections can mitigate this a bit.
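The 0.39% figure follows from compounding the per-area yield: a die 4x the area has to be defect-free in four laptop-sized regions at once.

```python
# If a laptop-sized chunk of silicon is defect-free 25% of the time,
# a die four times that area must win the lottery four times over.
small_yield = 0.25
large_yield = small_yield ** 4
print(f"{large_yield:.2%}")  # 0.39%
```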

aryonoco · 6 years ago
Laptop chips (and for that matter mobile SoCs) are smaller, so higher defect rates can be tolerated and you still get reasonable yields. You need a mature process to manufacture large chips (economically).
yread · 6 years ago
This article is a lot more informative, with synthetic benchmark numbers:

https://www.notebookcheck.net/All-core-4-3-GHz-at-28-W-Intel...

Intel's 4-core parts get within 3% of AMD's 8-core parts, with a 20% lead in single-core perf.

kasabali · 6 years ago
Note that the 28W TDP number is for the 3.0 GHz base clock; the 4.8 GHz turbo would be 50W.
throwme45353464 · 6 years ago
I would like to see the measured power usage attached to those results. Depending on the laptop and its TDP the OEM has set, these results can vary quite a bit.
IanCutress · 6 years ago
Those are pre-sample non-final numbers. We didn't post Intel's numbers because they are unverified.
zionic · 6 years ago
The contrast in community enthusiasm between this and Nvidia's 3xxx announcement could not be more severe.
btian · 6 years ago
Well, Nvidia is selling a new GPU that's faster than the 2080 Ti for $500, and a 3090 that's faster and $1000 cheaper than the Titan RTX.

Intel announced 10th Gen CPU lineup with a new logo.

jrockway · 6 years ago
Also Intel's 10th Gen was their 9th Gen with a new logo, and their 9th Gen was 8th Gen with a new logo. Maybe 8 was the same as 7; I don't remember. Basically, Intel isn't wowing people with their releases -- all their recent progress is very incremental and doesn't unlock new computing possibilities for the end user.

Meanwhile AMD is shipping 16 core CPUs to enthusiasts and 24/32/64 core CPUs to high-end desktop users. Getting 56 extra cores is something to be excited about; your 8 hour render is now a 1 hour render. Your 15 minute build is now a 2 minute build. Making your 8 core processor do one synthetic benchmark 10% faster is comparatively very "meh". People are expecting each Intel launch event to be something revolutionary like Zen 2, but Intel just isn't doing that. So people are consistently underwhelmed even though a computer with a 10th or 11th generation Intel chip is going to be quite adequate.

(As for nVidia, I am also not amazingly excited. The 30XX series feels like a refresh of the 20XX series; yeah they're cheap, but if you already have a 20XX your world is not going to change dramatically.)

fomine3 · 6 years ago
You could call the lower-core Coffee Lake/Comet Lake SKUs "just rebadging," but 11th-gen Tiger Lake is a genuinely new product.
headmelted · 6 years ago
That was exciting. The performance gains are exciting. The product is exciting. The architecture is exciting.

This is not exciting, unless I’m entirely missing the exciting part of this, which is wholly possible.

whynotminot · 6 years ago
I actually think Tiger Lake is the most exciting thing Intel has released in years. I know that's a pretty low bar, but I'm still surprised that the reaction has been this meh.
wffurr · 6 years ago
Native Thunderbolt, much faster clock speeds, and greatly improved graphics. Solid offering all around I think. Should give some competition for the Ryzen 4000-series laptop chips.
cma · 6 years ago
This claims they are announcing, not actually launching. No dates or prices:

https://semiaccurate.com/2020/09/02/intel-doesnt-actually-la...

fomine3 · 6 years ago
> SemiAccurate's checks with OEMs all say they Intel is shorting them on supply by 7-digit numbers in several cases.

I'm not good at English; what does this sentence mean?

holbrad · 6 years ago
It's a low-power mobile part, which just isn't relevant for any of my use cases, and I'm guessing that's true for a lot of HN.

Intel 10nm desktop CPUs are what I'm actually interested in.

paulpan · 6 years ago
Weird to see this "partial" announcement, since it's mostly for the U-series chips (up to 28W TDP). This suggests Intel must really be feeling pressure to release something competitive against AMD's Renoir offerings. The more performance-oriented chips (traditionally the "H series," but who knows the new branding...) are still unannounced and unknown.

We'll have to wait for 3rd-party benchmarks to determine the real performance improvements. Single-core and GPU gains are certainly the highlights, but it's odd that even the top-end i7 chips still retain 4 physical cores. AMD's 4700U is already at 8 physical cores.

icegreentea2 · 6 years ago
Yeah, but I think 4 cores at a 50% higher single-thread base clock vs 8 cores is actually a pretty reasonable tradeoff for many workloads. If they'd managed 6 cores, though... it would have been pretty awesome.

I hope this delivers. I'd hate to see Intel go down in flames.

paulpan · 6 years ago
Right, that's perhaps a glaring omission. Last year's i7-10710U (Comet Lake) was a 6-core, 12-thread part with a TDP of 25W. https://ark.intel.com/content/www/us/en/ark/products/196448/...

But it's entirely missing from this announcement. Seems like a regression.

pedrocr · 6 years ago
Compared to the AMD chips already shipping, it will have a +17% single-threaded boost clock. The base clock is much higher too, but only benchmarks will tell what the real performance comparison is. And by the time this ships it will be up against the Ryzen 5000 mobile APUs. What's curious is that these chips seem to compare favorably to AMD on the GPU side, with things like AV1 decode and a big performance boost. Between Intel-exclusive high-end laptop models and AMD not being able to keep up with demand, Intel isn't under all that much stress, though.
jjoonathan · 6 years ago
Is 10nm finally yielding or is this a paper launch / "10nm"?

EDIT: ah, these are laptop chips, that's the catch.

hydroreadsstuff · 6 years ago
The rumor mill suggests the volume is similar to Ice Lake (very, very low)
thunkshift1 · 6 years ago
Any links or sources?
dougmwne · 6 years ago
Apparently this is on a next-gen 10nm process designed to increase yields, but Intel would say that, wouldn't they? We'll see!
Aperocky · 6 years ago
Laptop chips at 50W? Someone's going to be buying a brick.
wffurr · 6 years ago
50W is the peak under all-core turbo, and only if the laptop's cooling system can handle it.

Most systems won't come anywhere near that.

akeck · 6 years ago
Samsung might not be pleased about the "EVO" branding.
bserge · 6 years ago
Eh, Mitsubishi didn't care :D
dyingkneepad · 6 years ago
They bought the space from the fighting game competition that was cancelled this year :)
worldmerge · 6 years ago
It will be interesting to see how these perform compared to AMD's laptop chips. I imagine the graphics tech is where they'll be able to shine, especially on lower-end laptops that don't have a discrete GPU.
OldHand2018 · 6 years ago
Forget about AMD chips for a moment. They're x86 and Intel can play that game for years to come.

With a lineup of chips aimed at fanless ultraportables, pay attention to their performance compared to the upcoming Apple Silicon chips, or more importantly, the Qualcomm chips that Microsoft and Samsung are putting in their fanless ultraportables. Intel needs to head off the rise of Windows on ARM.