Readit News
senttoschool commented on Intel's Core Ultra 2 Chip Posts Nearly 24-Hour Battery Life in Lunar Lake   pcmag.com/news/lunar-lake... · Posted by u/gavi
Sakos · a year ago
Geekbench is also terrible though? I'm not an Intel fanboy who's upset about Intel not winning; I've owned AMD chips for most of my life. These are just terrible benchmarks that don't tell us anything. I'm more interested in seeing what Phoronix or gaming benchmarks (particularly Factorio and MSFS) have to show. Real-world benchmarks are infinitely more useful for gauging the real-world benefit of buying one piece of hardware or the other.
senttoschool · a year ago
AMD fanboys will back Intel if it’s against ARM.
senttoschool commented on Intel's Core Ultra 2 Chip Posts Nearly 24-Hour Battery Life in Lunar Lake   pcmag.com/news/lunar-lake... · Posted by u/gavi
nahnahno · a year ago
The efficiency tests are garbage. Notebookcheck is comparing whole-system power draw and equating it with SoC power draw, when in reality the SoC may draw only a fraction of total system power, especially under single-core workloads. Take those numbers with a full truck of salt.
senttoschool · a year ago
They take full power and subtract idle power.
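
In code terms, that estimate looks something like this (a minimal sketch; every number below is invented for illustration, not one of Notebookcheck's measurements):

  # Estimate SoC power as whole-system draw under load minus idle draw.
  idle_watts = 4.0    # system power at idle (invented)
  load_watts = 26.0   # system power during the benchmark (invented)
  score = 120         # e.g. a Cinebench R24 ST score (invented)

  soc_watts = load_watts - idle_watts                 # attribute the delta to the SoC
  print(f"Estimated SoC power: {soc_watts:.1f} W")    # 22.0 W
  print(f"Perf/W: {score / soc_watts:.2f} points/W")  # 5.45 points/W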
senttoschool commented on Intel's Core Ultra 2 Chip Posts Nearly 24-Hour Battery Life in Lunar Lake   pcmag.com/news/lunar-lake... · Posted by u/gavi
wtallis · a year ago
Performance vs. power across a CPU's operating range is not a linear relationship, which is why a naive perf/watt metric like the one Notebookcheck computes at each chip's top operating point is almost worthless for comparing efficiency. At the very least you need to normalize to either the same power or the same performance, but preferably reviewers should measure and report a full perf/power curve for each chip instead of just one data point. Geekerwan seems to be the only reviewer that understands this, and they mostly focus on phones.
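
A toy numerical sketch of that point (the square-root scaling and every constant below are invented, purely for illustration): a less efficient design capped at a low power limit can post better perf/watt at its top operating point than a more efficient design pushed to a high one, and the ranking flips once both are normalized to the same power.

  # Toy model: performance scales sublinearly with power, perf = k * watts**0.5.
  def perf(k, watts):
      return k * watts ** 0.5

  k_a, a_top = 40.0, 30.0   # chip A: more efficient design, pushed to 30 W
  k_b, b_top = 35.0, 10.0   # chip B: less efficient design, capped at 10 W

  # At each chip's top operating point, B looks more efficient:
  print(perf(k_a, a_top) / a_top)   # ~7.3 points/W
  print(perf(k_b, b_top) / b_top)   # ~11.1 points/W

  # Normalized to the same 10 W, A is actually ahead:
  print(perf(k_a, 10.0) / 10.0)     # ~12.6 points/W
  print(perf(k_b, 10.0) / 10.0)     # ~11.1 points/W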
senttoschool · a year ago

  >Which is why a naive perf/Watt metric like Notebookcheck does at each chip's top operating point is almost worthless for comparing efficiency.
It isn't worthless. It clearly gives a good enough picture of efficiency to draw conclusions. It's not like Apple and Qualcomm drastically slow their chips down in order to get better perf/watt. No. They have better raw performance than Intel's chips regardless of perf/watt.

You can't even get perf/watt curves for Apple's A-series and M-series chips because it's impossible to manually control the wattage given to the SoC. You can do that on PCs, but not on iPhones and Macs. Therefore, Geekerwan's curves for Apple chips are not real curves, just projections.

senttoschool commented on Intel's Core Ultra 2 Chip Posts Nearly 24-Hour Battery Life in Lunar Lake   pcmag.com/news/lunar-lake... · Posted by u/gavi
Sakos · a year ago
Cinebench is really a terrible benchmark and not indicative of real-world numbers for performance or efficiency (particularly not for any of my use cases). I'll wait for better reviews and benchmarks before deciding who's "won".
senttoschool · a year ago
Geekbench shows the same gap in performance. Cinebench has historically favored Intel chips over Arm.
senttoschool commented on Intel's Core Ultra 2 Chip Posts Nearly 24-Hour Battery Life in Lunar Lake   pcmag.com/news/lunar-lake... · Posted by u/gavi
UniverseHacker · a year ago
Aren't those roughly equivalent in a CPU that dynamically varies its clock speed and power consumption in response to compute demand?
senttoschool · a year ago
I think Notebookcheck uses peak power in their perf/watt measurements.

They go over it in detail here: https://www.notebookcheck.net/Our-Test-Criteria.15394.0.html

senttoschool commented on Intel's Core Ultra 2 Chip Posts Nearly 24-Hour Battery Life in Lunar Lake   pcmag.com/news/lunar-lake... · Posted by u/gavi
UniverseHacker · a year ago
Am I interpreting this correctly: the M3 still uses only roughly half the power of the new Intel CPU discussed here?
senttoschool · a year ago
It doesn't necessarily use half the power. But it does have more than 2x the perf/watt, and it has noticeably faster ST performance.

senttoschool commented on Intel's Core Ultra 2 Chip Posts Nearly 24-Hour Battery Life in Lunar Lake   pcmag.com/news/lunar-lake... · Posted by u/gavi
tester756 · a year ago
Where are all those people who for years (or since the M1) were claiming that x86 is dead because the ARM ISA (magically) offers significantly better energy efficiency than the x86 ISA?

Of course they ignored things like node advantage, but who cares? ;)

Meanwhile, industry veterans were claiming something different, and it turns out they were right:

https://chipsandcheese.com/2021/07/13/arm-or-x86-isa-doesnt-...

Asking which ISA, x86 or ARM, is faster or more energy-efficient is like asking which syntax is faster: Rust's, Java's, or C++'s?

And same as with CPUs, everything comes down to the implementation: compiler, runtime/VM, libraries, etc.

senttoschool · a year ago
Actually, Apple's M3 and even Qualcomm's X Elite are significantly ahead of the new Intel chip in raw performance and especially perf/watt.

Cinebench R24 ST[0]:

* M3: 12.7 points/watt, 141 score

* X Elite: 9.3 points/watt, 123 score

* Intel Ultra 7 258V (new): 5.36 points/watt, 120 score

* AMD HX 370: 3.74 points/watt, 116 score

* AMD 8845HS: 3.1 points/watt, 102 score

* Intel 155H: 3.1 points/watt, 102 score

Cinebench R24 MT[0]:

* M3: 28.3 points/watt, 598 score

* X Elite: 22.6 points/watt, 1033 score

* AMD HX 370: 19.7 points/watt, 1213 score

* Intel Ultra 7 258V (new): 17.7 points/watt, 602 score

* AMD 8845HS: 14.8 points/watt, 912 score

* Intel 155H: 14.5 points/watt, 752 score

PCMark did a battery life comparison using identical Dell XPS 13s[1]:

* X Elite: 1,168 minutes, performance of 204,333 in Procyon Office

* Intel Ultra 7 256V (new): 1,253 minutes, performance of 123,000 in Procyon Office

* Meteor Lake 155H: 956 minutes, performance of 129,000 in Procyon Office

Basically, Intel's new chip has 7% more battery life than X Elite but the X Elite is 66% faster while on battery. In other words, Intel's new chip throttles heavily to get that battery life.
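
As a sanity check, those percentages, and the >2x perf/watt gap mentioned earlier, follow directly from the numbers above:

  # Recomputing the quoted figures from the cited data.
  battery = 1253 / 1168 - 1                 # Ultra 7 256V vs X Elite battery life
  on_battery_perf = 204_333 / 123_000 - 1   # Procyon Office score gap on battery
  st_efficiency = 12.7 / 5.36               # M3 vs Ultra 7 258V, Cinebench R24 ST

  print(f"{battery:.0%}")           # 7% more battery life for Intel
  print(f"{on_battery_perf:.0%}")   # 66% faster X Elite on battery
  print(f"{st_efficiency:.1f}x")    # 2.4x perf/watt advantage for the M3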

  >Of course they ignored things like node advantage, but who cares? ;)
Intel's new chip is using TSMC's N3B in the compute tile, same as M3 and better than X Elite's N4P.

  >Where are all those people who for years (or since M1) were claiming that x86 is dead because ARM ISA (magically) offers significantly better energy-efficiency than x86 ISA.
I'm still here.

------

[0] Data for M3, X Elite, AMD, and Meteor Lake taken from the best scores available here: https://www.notebookcheck.net/AMD-Zen-5-Strix-Point-CPU-anal...

[0] Data for the Core Ultra 7 taken from here: https://www.notebookcheck.net/Asus-Zenbook-S-14-UX5406-lapto...

[1] https://youtu.be/QB1u4mjpBQI?si=0Wyf-sohY9ZytQYK&t=2648

senttoschool commented on An interview with AMD CEO Lisa Su about solving hard problems   stratechery.com/2024/an-i... · Posted by u/wallflower
DanielHB · a year ago
No, I did not read it; we just arrived at the same conclusions, although you realised it a bit earlier than I did. What opened my eyes was the ease of the transition to the ARM-based Macs. I fully agree with all your points, and that has been my view since around 2021 (when I got an M1 Mac).

Once dev computers are running ARM at large, no one is going to bother cross-compiling their server code to x64; they will just compile to ARM, which will cut deeply into AMD's server demand. In fact, my own org has already started migrating to AWS Graviton servers.

And this bodes poorly for Nvidia as well; I bet all cloud providers are scrambling to design their own in-house alternatives to Nvidia hardware, and maybe alternatives to CUDA too, either to remove the Nvidia lock-in or to create their own lock-ins. Although Nvidia is much better positioned to stay ahead in that space.

senttoschool · a year ago
The problem with big tech's goal of replacing Nvidia is that they don't have an ARM-like organization to design cores for them. Big tech companies can ship their own ARM CPUs because they license stock ARM core designs and the ARM ISA. The hard work was already done for them.

Big tech would have to design their own GPUs, and from the looks of it, that's much harder to do on your own than licensing cores from ARM.

https://www.businessinsider.com/amazon-nvidia-aws-ai-chip-do...
