My daily workhorse is an M1 Pro that I purchased on release day, and it has been one of the best tech purchases I have made; even now it handles anything I throw at it. My daily workload regularly has an Android emulator, an iOS simulator, and a number of Docker containers running simultaneously, and I never hear the fans. Battery life has taken a bit of a hit, but it is still very respectable.
I wanted a new personal laptop and was debating between a MacBook Air and a Framework 13 with Linux. I wanted to lean into learning something new, so I went with the Framework, and I must admit I am regretting it a bit.
The M1 was released back in 2020, and I bought the Ryzen AI 340, one of AMD's newest 2025 chips, so AMD has had five years of extra development. I had expected them to get close to the M1 in terms of battery efficiency and thermals.
The Ryzen uses TSMC's N4P process, compared to the M1's older N5. I managed to find a TSMC press release showing the performance/efficiency gains from the newer process: “When compared to N5, N4P offers users a reported +11% performance boost or a 22% reduction in power consumption. Beyond that, N4P can offer users a 6% increase in transistor density over N5”
I am sorely disappointed: using the Framework feels like using an older Intel-based Mac. If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, open a YouTube video and the fans will often spin up.
Why haven’t AMD/Intel been able to catch up? Is x86 just not able to keep up with the ARM architecture? When can we expect an x86 laptop chip to match the M1 in efficiency/thermals?!
To be fair, I haven't tried Windows on the Framework yet; it might be my Linux setup being inefficient.
Cheers, Stephen
If you fully load the CPU and calculate how much energy an AI340 needs to perform a fixed workload, and compare that to an M1, you'll probably find similar results. But that only matters for your battery life if you're doing things like Blender renders, big compiles, or gaming.
Take for example this battery life gaming benchmark for an M1 Air: https://www.youtube.com/watch?v=jYSMfRKsmOU. 2.5 hours is about what you'd expect from an x86 laptop, possibly even worse than the fw13 you're comparing here. But turn down the settings so that the M1 CPU and GPU are mostly idle, and bam you get 10+ hours.
Another example would be a ~5-year-old mobile Qualcomm chip. It's on a worse process node than an AMD AI340, much, much slower, with significantly worse performance per watt, and yet it barely gets hot and sips power.
All that to say: M1 is pretty fast, but the reason the battery life is better has to do with everything other than the CPU cores. That's what AMD and Intel are missing.
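To put rough (assumed, not measured) numbers on it: with a ~55-60 Wh battery, about what the fw13 ships with depending on the version, an average platform draw of 5 W gives you roughly 11 hours, while 9 W gives you about 6. For mixed light use, shaving a few watts of idle and near-idle platform power swings battery life far more than the CPU cores' perf/W under load ever will.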
> If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, open a YouTube video and the fans will often spin up.
It's a fairly common issue on Linux to be missing hardware acceleration, especially for video decoding. I had to enable GPU video decoding on my fw16 and haven't noticed the fans on YouTube since.
Apple spent years incrementally improving the efficiency and performance of their chips for phones. Intel and AMD were more desktop-based, so power efficiency wasn't the goal. By the time Apple's chips got so good they could transition into laptops, x86 wasn't in the same ballpark.
Also, the iPhone is the most lucrative product of all time (I think), and Apple poured a tonne of that money into R&D, hiring top engineers away from Intel, AMD, and ARM and building one of the best silicon teams.
How much silicon did Apple actually create? I thought they outsourced all the components?
No, the main reason for the better battery life is the RISC architecture. PCs on the ARM architecture see the same gains.
Apple is vertically integrated and can optimize at the OS level and for many of the applications they ship with the device.
Compare that to how many cooks are in the kitchen in Wintel land. Perfect example is trying to get to the bottom of why your Windows laptop won't go to sleep and cooks itself in your backpack. Unless something's changed, last I checked it was a circular firing squad between the laptop manufacturer, Microsoft, and various hardware vendors, all blaming each other.
> Compare that to how many cooks are in the kitchen in Wintel land. Perfect example is trying to get to the bottom of why your windows laptop won't go to sleep and cooks itself in your backpack
So, I was thinking like this as well, and after I lost my Carbon X1 I felt adventurous, but not too adventurous, and wanted a laptop that "could just work". The thinking was "If Microsoft makes both the hardware and the software, it has to work perfectly fine, right?", so I bit my lip and got a Surface Pro 8.
What a horrible laptop that was, even while I was trialing just running Windows on it. It overheated almost immediately by itself, just idling, and STILL suffers from the issue where the laptop sometimes wakes itself while in my backpack, so when I actually needed it, of course it was hot and out of battery. I've owned a lot of shit laptops through the years, even some without keys on the keyboard, back when I was dirt-poor, but the Surface Pro 8 is the worst of them all; I regret buying it a lot.
I guess my point is that just because Apple seems really good at the whole "vertically integrated" concept, it isn't magic by itself, and Microsoft continues to fuck up the very same thing even though they control the entire stack, so you'll still end up with backpack laptops turning themselves on/not turning off properly.
I'd wager you could let Microsoft own every piece of physical material in the world, and they'd still not be able to make a decent laptop.
> Perfect example is trying to get to the bottom of why your windows laptop won't go to sleep and cooks itself in your backpack
Same thing happens in Apple land: https://news.ycombinator.com/item?id=44745897. My Framework 16 hasn't had this issue, although the battery does deplete slowly due to shitty modern standby.
> Framework 16
> The 2nd Gen Keyboard retains the same hardware as the 1st Gen but introduces refreshed artwork and updated firmware, which includes a fix to prevent the system from waking while carried in a bag.
Apple Silicon is 2-4x more efficient than AMD and Intel CPUs during load while also having higher top end speed.
Another thing that makes Apple laptops feel way more efficient is that they use a true big.LITTLE design, while AMD's and Intel's little cores are actually designed for area efficiency, not power efficiency. In the case of Intel, they stuff in as many little cores as possible to win MT benchmarks. In real-world applications, the little cores are next to useless because most applications prefer a few fast cores over many slow cores.
This is false; in cross-platform tasks it's on par with, if not worse than, the latest x86 arches. As others pointed out, 2.5 h in gaming is about what you'd expect from a similarly built x86 machine.
They are winning due to lower idle and low-load consumption, which they achieve by integrating everything as much as possible, something that's basically impossible for AMD and Intel.
> The faster the CPU can finish a task, the faster it can go back to sleep, aka race to sleep.
That may have been true when CPU manufacturers left a ton of headroom on the V/F curve, but it's not really true anymore. A Zen 4 core's power draw shoots up sharply past 4.6 GHz and nearly triples as you approach 5.5 GHz (compared to 4.6); are you going to complete the task 3 times faster at 5.5 GHz?
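Rough arithmetic, assuming (optimistically) that runtime scales inversely with clock: energy = power × time, so going from 4.6 to 5.5 GHz costs about 3 × (4.6 / 5.5) ≈ 2.5× the energy per task, for at best a 1.2× speedup. Race to sleep only wins when the extra speed comes nearly free in power, which it doesn't at the top of the V/F curve.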
This is not true. For high-throughput server software x86 is significantly more efficient than Apple Silicon. Apple Silicon optimizes for idle states and x86 optimizes for throughput, which assumes very different use cases. One of the challenges for using x86 in laptops is that the microarchitectures are server-optimized at their heart.
ARM in general does not have the top-end performance of x86 if you are doing any kind of performance engineering. I don't think that is controversial. I'd still much rather have Apple Silicon in my laptop.
I've worked in video delivery for quite a while.
If I were to write the law, decision-makers wilfully forcing software video decoding where hardware is available would be made to sit on these CPUs with their bare buttocks. If that sounds inhumane, then yes, this is the harm they're bringing upon their users, and maybe it's time to stop turning the other cheek.
Are you telling me that for some reason it's not using any hardware acceleration available while watching YouTube? How do I fix it?
A good demonstration is the Android kernel. By far the biggest difference between it and the stock Linux kernel is power management. Many subsystems down to the process scheduler are modified and tuned to improve battery life.
To be fair, usually Linux itself has hardware acceleration available, but the browser vendors tend to disable GPU rendering except on controlled/known-good combinations of OS/hardware/drivers, and they do much less testing on Linux. In most cases you can force-enable GPU rendering in about:config, try it out yourself, and leave it on unless you get recurring crashes.
All the Blink-based ones just work, as long as the proper libraries are installed and said libraries properly detect hardware support.
If you manually go in and limit a modern Windows laptop's max performance to just under what the spec sheet indicates, it'll be fairly quiet and cool. In fact, most have a setting to do this, but it's rarely on by default because the manufacturers want to show off performance benchmarks. Of course, that's while also touting battery life that is not possible when in the mode that allows the best performance...
This doesn't cover other stupid battery-life eaters like Modern Standby (it's still possible to disable it with registry tweaks! do it!), but if you don't need absolute max perf for renders or compiling or whatever, put your Windows or Linux laptop into "cool & quiet" mode and enjoy some decent extra battery.
It would also be really interesting to see what Apple Silicon could do under some extreme overclocking fun with sub-zero cooling or such. It would require a firmware & OS that allow more tuning and tweaking, so it's not going to happen anytime soon, but it could actually be a nice brag for Apple if they did let it happen.
Incredible discipline. The Chrome graph in comparison was a mess.
Looks like general-purpose CPUs are on the losing side.
Maybe Intel should invent a desktop + mobile OS and design bespoke chips for it.
I assume this is referring to the tweet from the launch of the M1 showing off that retaining and releasing an NSObject is like 3x faster. That's more of a general case of the ARM ISA being a better fit for modern software than x86, not some specific optimization for Apple's software.
x86 was designed long before desktops had multi-core processors and out-of-order execution, so for backwards compatibility reasons the architecture severely restricts how the processor is allowed to reorder memory operations. ARM was designed later, and requires software to explicitly request synchronization of memory operations where it's needed, which is much more performant and a closer match for the expectations of modern software, particularly post-C/C++11 (which have a weak memory model at the language level).
Reference counting operations are simple atomic increments and decrements, and when your software uses these operations heavily (like Apple's does), it can benefit significantly from running on hardware with a weak memory model.
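A minimal sketch of that pattern (not Apple's actual retain/release implementation, just the standard intrusive-refcount idiom under the C++11 memory model): the increment only needs atomicity, and only the final decrement needs acquire/release ordering. On AArch64 with LSE atomics the relaxed increment can compile to a single LDADD with no barrier; on x86-64 every atomic RMW is a LOCK-prefixed instruction that also acts as a full fence, so you pay for ordering you never asked for.

    #include <atomic>
    #include <cstdio>

    // Intrusive refcount sketch. The increment is relaxed: we only need the
    // count itself to be atomic, not to order any other memory operations.
    // The decrement uses acq_rel so that everything written to the object
    // "happens before" its destruction on whichever thread drops the last ref.
    struct RefCounted {
        std::atomic<int> refs{1};

        void retain() {
            refs.fetch_add(1, std::memory_order_relaxed);
        }

        void release() {
            if (refs.fetch_sub(1, std::memory_order_acq_rel) == 1) {
                delete this;
            }
        }

        virtual ~RefCounted() = default;
    };

    int main() {
        RefCounted* obj = new RefCounted();  // refs == 1
        obj->retain();                       // refs == 2
        obj->release();                      // refs == 1
        obj->release();                      // refs == 0, object destroyed
        std::printf("done\n");
        return 0;
    }

The weak-memory-model win is exactly that the relaxed fetch_add stays cheap; on a strongly ordered ISA there is no cheaper instruction to pick.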
Here's an M4 Max running macOS running Parallels running Windows when compared to the fastest AMD laptop chip: https://browser.geekbench.com/v6/cpu/compare/13494385?baseli...
M4 Max is still faster even with 14 out of 16 possible cores being used. You can't chalk that up to optimizations anymore because Windows has no Apple Silicon optimizations.
Intel is busy fixing up their shit after what happened with their 13th & 14th gen CPUs. Imagine them making an OS, call it IntelOS, where the only way to run it is on an Intel CPU.
Or contribute efficiency updates to popular open projects like Firefox, Chromium, etc...
Wouldn't it be easier for Intel to heavily modify the Linux kernel instead of writing their own stack?
They could even go as far as writing the sleep utilities for laptops, or even their own window manager, to take advantage of the specific mods in the ISA.
At least on the mobile platform, Apple advocates the other way with race to sleep: do the calculation as fast as you can with powerful cores so that the whole chip can go back to sleep earlier and take naps more often.
But when Apple says it, software devs actually listen.
Like, would I prefer an older-style MacBook overall, with an integrated card reader, HDMI port, ethernet jack, all that? Yeah, sure. But to get that now I have to go to a PC laptop, and there are so many compromises there. The battery life isn't even in the same zip code as a Mac, they're much heavier, the chips run hot even just doing web browsing let alone any actual work, and they CREAK. Like my god, I don't remember the last time I had a Windows laptop open and it wasn't making all manner of creaks and groans and squeaks.
The last one would be solved, I guess, if you went for something super high end, or at least I hope it would be, but I dunno, if I'm dropping $3k+ either way I'd just as soon stay with the MacBook.
Modern MacBook pros have 2/3 (card reader and HDMI port), and they brought back my beloved MagSafe charging.
AMD kind of has: the "Max 395+" is (within a 5% margin or so) pretty close to the M4 Pro on both performance and energy use. (It's in the 'Framework Desktop', for example, but not in their laptop lineup yet.)
AMD/Intel hasn't surpassed Apple yet (there's no answer for the M4 Max / M3 Ultra, without exploding the energy use on the AMD/Intel side), but AMD does at least have a comparable and competitive offering.
That said, Hardware Canucks did a review of the 395 in a mobile form factor (Asus ROG Flow F13) with the TDP at 70 W (lower than the max 120 W TDP you see in desktop reviews). This lower-than-max TDP also gets you closer to the perf/watt sweet spot.
The M4 Pro scores slightly higher in Cinebench R24 despite being 10P+4E vs a full 16 P-cores on the 395, all while using something like 30% less power. The M4 Pro scores nearly 35% higher in the single-core R24 benchmark too. 395 GPU performance is roughly on par with the M4 Pro in productivity software. More specifically, they trade blows based on which is more optimized in a particular app, but AMD GPUs have way more optimizations in general, and gaming should be much better with an x86 CPU + AMD GPU vs Rosetta 2 + GPU translation layers + Wine/Crossover.
The M4 Pro gets around 50% better battery life for tasks like web browsing when accounting for battery-size differences, and more than double the battery life per Wh when doing something simple like playing a video. Battery life under full load is a bit better for the 395, but doing the math, this definitely involves the 395 throttling significantly down from its 70 W TDP.
The rule of thumb is roughly a 15% advantage to distribute between power and performance there.
Catching up while remaining on older nodes is no joke.
I am usually the one doing it on HN. But it is still not common knowledge after we went from M1 to M4.
And this thread's S/N ratio is already 10 times better than most other Apple Silicon discussions.
AMD's Max 395+ is N4, a 5 nm-class product.
The Apple M4 is N3, a 3 nm-class product.
Some Chinese companies have also announced laptops with it coming out soon.
Right now, AMD is not even in the ballpark.
In fact, the real kick in the 'nads was my fully kitted M4 laptop outperforming the AMD. I just gave up.
I'll keep checking in with AMD and Intel every generation though. It's gotta change at some point.
https://frame.work/laptop16?tab=whats-new
I work in IT and every new machine for our company crosses my desk for checking, and I have observed the exact same points as the OP.
The new machines are either fast and loud and hot and with poor battery life, or they are slow and "warm" and have moderate battery life.
But I have not yet had a business laptop, whether ARM, AMD, or Intel, that can even compete with the M1 Air, let alone the M3 Pro! Not to mention all the issues with crappy Lenovo docks, etc.
It doesn't matter if I install Linux or Windows. The funny thing is that some of my colleagues have ordered a MacBook Air or Pro and run their Windows or Linux in a virtual machine via Parallels.
Think about it: Windows 11 or Linux in a VM is even faster, snappier, and quieter, and has even longer battery life than these systems running natively on a business machine from Lenovo, HP, or Dell.
Well, your mileage may vary, but IMHO there is no alternative to a Mac nowadays, even if you want to use Linux or Windows.
With Nvidia Now I can even play games on it, though I wouldn't recommend it for any serious gamers.
And at the shop we are doing technology refreshes for the whole dev team, upgrading them to M4s. I was asked if I wanted to upgrade my M1 Pro to an M4, and I said no. Mainly because I don't want to have to move my tooling over to a new machine, but also because I am not bottlenecked by anything on my current M1.
I need to point this out all the time these days, it seems, but this opinion is only valid if all you use is a laptop and all you care about is single-core performance.
The computing world is far bigger than just laptops.
Big music/3D design/video editing production suites etc. still benefit much more from workstation PCs with higher PCIe bandwidth, more lanes for multiple SSDs and GPUs, and a level of multicore processing performance that cannot be matched by Apple Silicon.
For studio movies, render farms are usually Linux but I think many workstation tasks are done on Apple machines. Or is that no longer true?
I guess I'd slightly change that to "MacBook" or similar, as Apple are best in class when it comes to laptops, but for desktops they don't seem to even be in the fight anymore, unless reducing power consumption is your top concern. If you're aiming for "performance per money spent", non-Apple hardware is really the only game in town.
I do agree they make the best hardware in terms of feel, though, which is important for laptops. But computing is so much larger than laptops, especially if you're always working in the same place every day (like me).
Second, the x86 platform has a lot of legacy, and each operation on x86 is translated from an x86 instruction into RISC-like micro-ops. This is an inherent penalty that Apple doesn't have to pay, and it is also why Rosetta 2 can achieve "near native" x86 performance; both platforms translate the x86 instructions.
Third, there are some architectural differences even if the instruction-decoding steps are removed from the discussion. Apple Silicon has a huge out-of-order buffer, and it's 8-wide vs x86's 4-wide. From there, the actual logic is different, the design is different, and the packaging is different. AMD's Ryzen AI Max 300 series does get close to Apple by using many of the same techniques, like unified memory and tossing everything onto the package; where it does lose, it's due to all of the other differences.
In the end, if people want crazy efficiency Apple is a great answer and delivers solid performance. If people want the absolute highest performance, then something like Ryzen Threadripper, EPYC, or even the higher-end consumer AMD chips are great choices.
1) Apple Silicon outperforms all laptop CPUs in the same power envelope on 1T on industry-standard tests: it's not predominantly due to "optimizing their software stack". SPECint, SPECfp, Geekbench, Cinebench, etc. all show major improvements.
2) x86 also heavily relies on micro-ops to greatly improve performance. This is not a "penalty" in any sense.
3) x86 is now six-wide, eight-wide, or nine-wide (with asterisks) for decode width on all major Intel & AMD cores. The myth of x86 being stuck on four-wide has been long disproven.
4) Large buffers, L1, L2, L3, caches, etc. are not exclusive to any CPU microarchitecture. Anyone can increase them—the question is, how much does your core benefit from larger cache features?
5) Ryzen AI Max 300 (Strix Halo) gets nowhere near Apple on 1T perf / W and still loses on 1T perf. Strix Halo uses slower CPUs versus the beastly 9950X below:
Fanless iPad M4 P-core, SPEC2017 int / fp / geomean: 10.61 / 15.58 / 12.85
AMD 9950X (Zen 5), SPEC2017 int / fp / geomean: 10.14 / 15.18 / 12.41
Intel 285K (Lion Cove), SPEC2017 int / fp / geomean: 9.81 / 12.44 / 11.05
Source: https://youtu.be/2jEdpCMD5E8?t=185, https://youtu.be/ymoiWv9BF7Q?t=670
The 9950X & 285K eat 20W+ per core for that 1T perf; the M4 uses ~7W. Apple has a node advantage, but no node on Earth gives you 50% less power.
There is no contest.
2. x86 micro-ops vs ARM decode are not equivalent. x86's variable-length instructions make the whole process far more complicated than it is on something like ARM. This is a penalty due to legacy design.
3. The OP was talking about the M1. AFAIK, the M4 is now 10-wide, and most x86 is 6-wide (Zen 5 does some weird stuff). x86 was 4-wide at the time of the M1's introduction.
4. The M1 has over 600 reorder-buffer entries… it's significantly larger than its competitors'.
5. Close relative to x86 competitors.
From the AMD side it was 4-wide until Zen 5. And now it's still 4-wide, but there is a separate 4-wide decoder for each thread. The micro-op cache can deliver a lot of pre-decoded instructions, so the issue width is (I dunno) wider, but the decode width is still 4.
3. The claim was never "stuck on 4-wide", but that going wider would incur significant penalties, which is the case. AMD uses two 4-wide decoders and pays a big penalty in complexity trying to keep them coherent and occupied. Intel went 6-wide for Golden Cove, which is infamous for being the largest and most power-hungry x86 design in a couple of decades. This seems to prove the 4-wide people right.
4. This is only partially true. The ISA impacts which designs make sense which then impacts cache size. uop cache can affect L1 I-cache size. Page size and cache line size also affect L1 cache sizes. Target clockspeeds and cache latency also affect which cache sizes are viable.
It's an energy penalty, even if wall clock time improves.
https://dougallj.github.io/applecpu/firestorm.html
Can we please stop with this myth? Every superscalar processor is doing the exact same thing, converting the ISA into the µops (which may involve fission or fusion) that are actually serviced by the execution units. It doesn't matter if the ISA is x86 or ARM or RISC-V--it's a feature of the superscalar architecture, not the ISA itself.
The only reason that this canard keeps coming out is because the RISC advocates thought that superscalar was impossible to implement for a CISC architecture and x86 proved them wrong, and so instead they pretend that it's only because x86 somehow cheats and converts itself to RISC internally.
Which hasn't even been the case anymore for several years now. Some µOPs in modern x86-64 cores combine memory access with arithmetic operations, making them decidedly non-RISC.
ARM processors ALSO decode instructions to micro-ops. And Apple chips do too. Pretty much a draw. The first stage in the execution pipelines of all modern processors is a decode stage.
* Apple has had decades of optimizing its software and hardware stacks for the demands of the majority of its users, whereas Intel and AMD have to optimize for a much broader scope of use cases.
* Apple was willing to throw out legacy support on a regular basis. Intel and AMD, by comparison, are still expected to run code written for DOS or for specific extensions in major enterprises, which adds complexity and cost.
* The “standard” of x86 (and demand for newly-bolted-on extensions) means effort into optimizations for efficiency or performance meet diminishing returns fairly quickly. The maturity of the platform also means the “easy” gains are long gone/already done, and so it’s a matter of edge cases and smaller tweaks rather than comprehensive redesigns.
* Software in x86 world is not optimized, broadly, because it doesn’t have to be. The demoscene shows what can be achieved in tight performance envelopes, but software companies have never had reason to optimize code or performance when next year has always promised more cores or more GHz.
It boils down to comparing two different products and asking why they can’t be the same. Apple’s hardware is purpose-built for its userbase, operating systems, and software; x86 is not, and never has been. Those of us who remember the 80s and 90s of SPARC/POWER/Itanium/etc recall that specialty designs often performed better than generalist ones in their specialties, but lacked compatibility as a result.
The Apple ARM vs Intel/AMD x86 is the same thing.
Apple also has a particular advantage in owning the OS and having the ability to force independent developers to upgrade their software, which makes incompatible updates (including perf optimizations) possible.
They could also just sit down with Microsoft and say "Right, we're going to go in an entirely different direction, and provide you with something absolutely mind-blowing, but we're going to have to do software emulation for backward compatibility and that will suck for a while until things get recompiled, or it'll suck forever if they never do".
Apple did this twice in the last 20 years - once on the move from PowerPC chips to Intel, and again from Intel to Apple Silicon.
If Microsoft and enough large OEMs (Dell, etc.) thought there was enough juice in the newly proposed architecture to justify a major redevelopment of everything from mobile to data-centre-level compute, they'd line right up, because they know that if you can significantly reduce power consumption while smashing benchmarks, there are going to be long, long wait times for that hardware and software, and it's payday for everyone.
We now know so much more about processor design, instruction set and compiler design than we did when the x86 was shaping up, it seems obvious to me that:
1. RISC is a proven entity worth investing in
2. SoC & SiP is a proven entity worth investing in
3. Customers love better power/performance curves at every level from the device in their pocket to the racks in data centres
4. Intel is in real trouble if they are seriously considering the US government owning actual equity, albeit proposed as non-voting, non-controlling
Intel can keep the x86 line around if they want, but their R&D needs to be chasing where the market is heading - and fast - while bringing the rest of the chain along with them.
For an example of why this doesn't work, see 'Intel Itanium'.
Each time they had a pretty good emulation story to keep most stuff (certainly popular stuff) working through a multi-year transition period.
IMO, this is better than carrying around 40 years of cruft.
> IMO, this is better than carrying around 40 years of cruft.
Backwards compatibility is such a strong point; it is why Windows survives even though it has become a bloated, ad-riddled mess. You can argue which is better, but that seriously depends on your requirements. If you have a business application coded 30 years ago on x86 that no developer in your company understands any more, then backwards compatibility is king. On the other end of the spectrum, if you are happy purchasing new software subscriptions constantly and having bleeding-edge hardware is a must for you, then backwards compatibility probably isn't required.
But as you mention, they've changed the underlying architecture multiple times, which surely would render a large part of prior optimizations obsolete?
> Software in x86 world is not optimized, broadly, because it doesn’t have to be.
Does ARM software need optimization more than x86 software does?
If Windows-World software developers could one day announce that they will only support Intel Gen 14 or later (and not bother with AMD at all), and only support the latest and greatest NVidia GPUs (and only GPUs that cost $900 or more), I'm pretty sure they would optimize their code differently, and would sometimes get dramatic performance improvements.
It's not so much that ARM needs optimizations more, but that x86 software can't practically be broadly optimized.
This is why I get so livid regarding Electron apps on the Mac.
I’m never surprised by developer-centric apps like Docker Desktop — those inclined to work on highly technical apps tend not to care much about UX — but to see billion-dollar teams like Slack and 1Password indulge in this slop is so disheartening.
Given that videos spin up the fans, there is actually a problem with your GPU setup on Linux, and I expect there'd be an improvement if you managed to fix it.
Another thing is that Chrome on Linux tends to consume an exorbitant amount of power with all the background processes, inefficient rendering, and disk IO, so updating it to one of the latest versions and enabling "memory saving" might help a lot.
Switching to another scheduler, reducing the interrupt rate, etc. will probably help too.
Linux on my current laptop cut battery life by a factor of 12 compared to Windows, and a bunch of optimizations like that improved the situation to something like a factor of 6, i.e. it's still very bad.
> Is x86 just not able to keep up with the ARM architecture?
Yes and no. x86 is inherently inefficient, and most of the progress over the last two decades has been about offloading computation to more advanced and efficient coprocessors. That's how we got GPUs and DMA on M.2 and Ethernet controllers.
That said, it's unlikely that x86 specifically is what wastes your battery. I would rather blame Linux, suspect its CPU frequency/power drivers are misbehaving on some CPUs, and unfortunately have no idea how to fix it.
Nothing in x86 prohibits you from an implementation less efficient than what you could do with ARM instead.
x86 and ARM have historically served very different markets. I think the pattern of efficiency differences of past implementations is better explained by market forces rather than ISA specifics.
Linux can actually meet or even exceed Windows' power efficiency, at least for some tasks, but it takes a lot of work to get there. I'd start with powertop and TLP.
As usual, the Arch wiki is a good place to find more information: https://wiki.archlinux.org/title/Power_management
If nothing were wrong, it'd be sitting at something like 1.5 GHz with most of the cores powered down.
If you're willing to spend a bunch of die area (which directly translates into cost) you can get good numbers on the other two legs of the Power-Performance-Area triangle. The issue is that the market position of Apple's competitors is such that it doesn't make as much sense for them to make such big and expensive chips (particularly CPU cores) in a mobile-friendly power envelope.
What makes Apple Silicon chips big is that they bolt a fast GPU onto them. If you include the die of a discrete GPU with an x86 chip, it'd be the same size as or bigger than an M-series chip.
You can look at Intel’s Lunar Lake as an example where it’s physically bigger than an M4 but slower in CPU, GPU, NPU and has way worse efficiency.
Another comparison is AMD Strix Halo. Despite being ~1.5x bigger than the M4 Pro, it has worse efficiency, ST performance, and GPU performance. It does have slightly more MT.
Such a decoder is vastly less sophisticated with AArch64.
That is one obvious architectural drawback for power efficiency: a legacy instruction set with variable word length, two FPUs (x87 and SSE), 16-bit compatibility with segmented memory, and hundreds of otherwise unused opcodes.
How much legacy must Apple implement? Non-kernel AArch32 and Thumb2?
Edit: think about it... R4000 was the first 64-bit MIPS in 1991. AMD64 was introduced in 2000.
AArch64 emerged in 2011, and in taking their time, the designers avoided the mistakes made by others.
Where are you getting M4 die sizes from?
It would hardly be surprising, given the Max+ 395 has more, and on average better, cores, fabbed on 5 nm unlike the M4's 3 nm. Die size is mostly GPU though.
Looking at some benchmarks:
> slightly more MT.
AMD's multicore passmark score is more than 40% higher.
https://www.cpubenchmark.net/compare/6345vs6403/Apple-M4-Pro...
> worse efficiency
The AMD is an older fab process and does not have P/E cores. What are you measuring?
> worse ST performance
The P/E design choice gives different trade-offs e.g. AMD has much higher average single core perf.
> worse GPU performance
The AMD GPU:
14.8 TFLOPS vs. M4 Pro 9.2 TFLOPS.
19% higher 3D Mark
34% higher GeekBench 6 OpenCL
Although a much crappier Blender score. I wonder what that's about.
https://nanoreview.net/en/gpu-compare/radeon-8060s-vs-apple-...
Before the M1, I was stuck using an Intel Core i5 running Arch Linux. My Intel Mac managed to die months before the M1 came out. Let's just say that the M1 really made me appreciate how stupidly slow that Intel hardware was. I was losing lots of time doing builds. The laptop would be unusable during those builds.
Life is too short for crappy hardware. From a software point of view, I could live with Linux but not with Windows. But the hardware is a showstopper currently. I need something that runs cool and yet does not compromise on performance. And all the rest (non-crappy trackpad, amazingly good screen, cool to the touch, good battery life, etc.). And it manages to look good too. I'm not aware of any Windows/Linux laptop that does not heavily compromise on at least a few of those things. I'm pretty sure I can get a fast laptop. But it'd be hot and loud and have an unusable Synaptics trackpad. And a mediocre screen. Etc. In short, I'd be missing my Mac.
Apple is showing some confidence by designing a laptop that isn't even close to being cheap. This thing was well over 4K euros. Worth every penny. There aren't a lot of Intel/AMD laptops in that price class; too much penny-pinching happening in that world. People think nothing of buying a really expensive car to commute to work, but they'll cheap out on the thing they use all day once they get there. That makes no sense whatsoever in my view.
I’ve actually been debating moving from the Pro to the Air. The M4 is about on par with the M1 Pro for a lot of things. But it’s not that much smaller, so I’d be getting a lateral performance move and losing ports, so I’m going to wait and see what the future holds.
I've been window shopping for a couple of months now, have test-run Linux, and really like the experience there (played around on older Intel hardware). I am completely de-Appled software-wise, with the one exception of iMessage, because my kids use iPads. But that's really about it. So, I'm ready to jump.
But so far, all my research hasn't led to anything I'm convinced I wouldn't regret in the end. A desktop Ryzen 7700 or 9600X would probably suffice, but it would mean constantly switching machines, and I'm not sure I'm ready for that. All mobile non-Macs have significant downsides, and typically you can't even try before you buy, so you'd be relying on reviews. But everybody has a different tolerance for things like trackpad haptics, thermals, noise, screen quality, etc., so those reviews don't give enough confidence. I've had 13 Apple years so far. The first 5 were pleasant, the next 3 really sucked, but since Apple Silicon I feel I have totally forgotten all the suffering in the non-Apple world and with those noisy, slow Intel Macs.
I think it has to boil down to serious reasons why the Apple hardware is not fit for one's purpose, be it better gaming, an extreme amount of storage, or an insane amount of RAM, all while ignoring the value of "the perfect package" and its low power draw, low noise, etc. Something that does not make one regret the change. DHH has done it and so have others, but he switched to the Framework Desktop AI Max, so it came with a change in lifestyle. And he also does gaming; that's another good reason (to switch to Linux or dual boot, as he mentioned Fortnite).
I don't have such reasons currently, unless we see hardware that is at least as fast and enjoyable as the M1 Pro or higher. I tried Asahi, but it's quite cumbersome with the dual boot, and DP Alt Mode isn't there yet and maybe never will be, so I gave up on that.
So, I'll wait another year and see then. I hope I don't get my company to buy me an M4 Max Ultra or so, as that will ruin my desire to switch for another 10 years, I guess.
Yeah, those glossy mirror-like displays in which you see yourself much better than the displayed content are polished really well
I’ll take the apple display any day. It’s bright enough to blast through any reflections.
Hah, it's exactly the other way around for me; I can't stand Apple's hardware. But then again I never bought anything Asus... let alone gamer laptops.
The first two years it was solid, but then weird stuff started happening, like the integrated GPU running at full throttle at all times and sleep mode meaning "high temperature and fans spinning to do exactly nothing" (that seems to be a Windows problem, because my work machine does the same).
Meanwhile the manufacturer, having released a new model, lost interest, so no firmware updates to address those issues.
I currently have the Framework 16 and I'm happy with it, but I wouldn't recommend it by default.
I for one bought it because I tend to damage stuff like screens and ports, and it also lets me have unusual arrangements like a left-handed numpad; not exactly mainstream requirements.
Apple is just off the side somewhere else.