Posted by u/stephenheron 2 days ago
Ask HN: Why hasn't x86 caught up with Apple M series?
Hi,

My daily workhorse is an M1 Pro that I purchased on release date. It has been one of the best tech purchases I have made; even now it really deals with anything I throw at it. My daily workload regularly has an Android emulator, an iOS simulator and a number of Docker containers running simultaneously, and I never hear the fans. Battery life has taken a bit of a hit but it is still very respectable.

I wanted a new personal laptop, and I was debating between a MacBook Air or going for a Framework 13 with Linux. I wanted to lean into learning something new so went with the Framework and I must admit I am regretting it a bit.

The M1 was released back in 2020 and I bought the Ryzen AI 340 which is one of the newest 2025 chips from AMD, so AMD has 5 years of extra development and I had expected them to get close to the M1 in terms of battery efficiency and thermals.

The Ryzen is using a TSMC N4P process compared to the older N5 process, I managed to find a TSMC press release showing the performance/efficiency gains from the newer process: “When compared to N5, N4P offers users a reported +11% performance boost or a 22% reduction in power consumption. Beyond that, N4P can offer users a 6% increase in transistor density over N5”

I am sorely disappointed, using the Framework feels like using an older Intel based Mac. If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, open a YouTube video and the fans will often spin up.

Why haven’t AMD/Intel been able to catch up? Is x86 just not able to keep up with the ARM architecture? When can we expect an x86 laptop chip to match the M1 in efficiency/thermals?!

To be fair I haven’t tried Windows on the Framework yet; it might be my Linux setup being inefficient.

Cheers, Stephen

ben-schaaf · 2 days ago
Battery efficiency comes from a million little optimizations in the technology stack, most of which come down to using the CPU as little as possible. As such, the instruction set architecture and process node aren't usually that important when it comes to your battery life.

If you fully load the CPU and calculate how much energy an AI 340 needs to perform a fixed workload and compare that to an M1, you'll probably find similar results, but that only matters for your battery life if you're doing things like Blender renders, big compiles or gaming.

Take for example this battery life gaming benchmark for an M1 Air: https://www.youtube.com/watch?v=jYSMfRKsmOU. 2.5 hours is about what you'd expect from an x86 laptop, possibly even worse than the fw13 you're comparing here. But turn down the settings so that the M1 CPU and GPU are mostly idle, and bam you get 10+ hours.

Another example would be a ~5 year old mobile Qualcomm chip. It's on a worse process node than an AMD AI 340, much, much slower and with significantly worse performance per watt, and yet it barely gets hot and sips power.

All that to say: M1 is pretty fast, but the reason the battery life is better has to do with everything other than the CPU cores. That's what AMD and Intel are missing.

> If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, open a YouTube video and the fans will often spin up.

It's a fairly common issue on Linux to be missing hardware acceleration, especially for video decoding. I've had to enable gpu video decoding on my fw16 and haven't noticed the fans on youtube.

jonwinstanley · 2 days ago
A huge reason for the low power usage is the iPhone.

Apple spent years incrementally improving the efficiency and performance of their chips for phones. Intel and AMD were more desktop-based, so power efficiency wasn't the goal. When Apple's chips got so good they could transition into laptops, x86 wasn't in the same ballpark.

Also, the iPhone is the most lucrative product of all time (I think), and Apple poured a tonne of that money into R&D and into taking the top engineers from Intel, AMD, and ARM, building one of the best silicon teams.

twilo · 2 days ago
Apple purchased P.A. Semi (Palo Alto Semiconductor), which made the biggest difference. One of their best acquisitions ever in my opinion… not that they make all that many of those anyway.
Cthulhu_ · 2 days ago
I vaguely remember Intel tried to get into the low power / smartphone / tablet space at the time with their Atom line [0] in the late 00's, but due to core architecture issues they could never reach the efficiency of ARM-based chips.

[0] https://en.wikipedia.org/wiki/Intel_Atom

skeezyboy · 2 days ago
> and Apple poured a tonne of that money into R&D and taking the top engineers from Intel, AMD, and ARM, building one of the best silicon teams.

how much silicon did Apple actually create? I thought they outsourced all the components?

RossBencina · 2 days ago
I thought they just acquired P.A. Semi, job done.
DanielHB · 2 days ago
I don't think it is so much efficiency of their chips for their hardware (phones) so much as efficiency of their OS for their chips and hardware design (like unified memory).
jimbokun · 2 days ago
Textbook Innovator’s Dilemma.
alt227 · 2 days ago
> A huge reason for the low power usage is the iPhone.

No, the main reason for better battery life is the RISC architecture. PC on ARM architecture has the same gains.

RajT88 · 2 days ago
> All that to say: M1 is pretty fast, but the reason the battery life is better has to do with everything other than the CPU cores. That's what AMD and Intel are missing.

Apple is vertically integrated and can optimize at the OS and for many applications they ship with the device.

Compare that to how many cooks are in the kitchen in Wintel land. Perfect example is trying to get to the bottom of why your windows laptop won't go to sleep and cooks itself in your backpack. Unless something's changed, last I checked it was a circular firing squad between laptop manufacturer, Microsoft and various hardware vendors all blaming each other.

diggan · 2 days ago
> Apple is vertically integrated and can optimize

> Compare that to how many cooks are in the kitchen in Wintel land. Perfect example is trying to get to the bottom of why your windows laptop won't go to sleep and cooks itself in your backpack

So, I was thinking like this as well, and after I lost my Carbon X1 I felt adventurous, but not too adventurous, and wanted a laptop that "could just work". The thinking was "If Microsoft makes both the hardware and the software, it has to work perfectly fine, right?", so I bit my lip and got a Surface Pro 8.

What a horrible laptop that was, even while I was trialing just running Windows on it. It overheated almost immediately by itself, just idling, and STILL suffers from the issue where the laptop sometimes wakes itself while in my backpack, so when I actually needed it, of course it was hot and without battery. I've owned a lot of shit laptops through the years, even some without keys in the keyboard, back when I was dirt-poor, but the Surface Pro 8 is the worst of them all; I really regret buying it.

I guess my point is that even though Apple seems really good at the whole "vertically integrated" concept, it isn't magic by itself: Microsoft continues to fuck up the very same thing even though they control the entire stack, so you'll still end up with backpack laptops turning themselves on/not turning off properly.

I'd wager you could let Microsoft own every piece of physical material in the world, and they'd still not be able to make a decent laptop.

ben-schaaf · a day ago
This is easy to disprove. The Snapdragon X Elite has significantly better battery life than what AMD or Intel offer, and yet it's got the same number of cooks in the kitchen.

> Perfect example is trying to get to the bottom of why your windows laptop won't go to sleep and cooks itself in your backpack

Same thing happens in Apple land: https://news.ycombinator.com/item?id=44745897. My Framework 16 hasn't had this issue, although the battery does deplete slowly due to shitty modern standby.

bitwize · 2 days ago
Microsoft is pushing "Modern Standby" over actual sleep, so laptops can download and install updates while closed at night.
kube-system · 2 days ago
Also on the HN front page today:

> Framework 16

> The 2nd Gen Keyboard retains the same hardware as the 1st Gen but introduces refreshed artwork and updated firmware, which includes a fix to prevent the system from waking while carried in a bag.

amazingman · 2 days ago
I remember a time when this was supposed to be Wintel's advantage. It's really strange to now be in a time where Apple leads the consumer computing industry in hardware performance, yet is utterly failing at evolving the actual experience of using their computers. I'm pretty sure I'm not the only one who would gladly give up a bit of performance if it were going to result in a polished, consistent UI/UX based on the actual science of human interface design rather than this usability hellscape the Alan Dye era is sending us into.
galad87 · 2 days ago
macOS is a resource hungry pig, I wouldn't bet too much on it making a difference.
aurareturn · 2 days ago

  All that to say: M1 is pretty fast, but the reason the battery life is better has to do with everything other than the CPU cores. That's what AMD and Intel are missing.
This isn't true. Yes, uncore power consumption is very important but so is CPU load efficiency. The faster the CPU can finish a task, the faster it can go back to sleep, aka race to sleep.

Apple Silicon is 2-4x more efficient than AMD and Intel CPUs during load while also having higher top end speed.

Another thing that makes Apple laptops feel way more efficient is that they use a true big.LITTLE design, while AMD's and Intel's little cores are actually designed for area efficiency and not power efficiency. In Intel's case, they stuff in as many little cores as possible to win MT benchmarks. In real-world applications, the little cores are next to useless because most applications prefer a few fast cores over many slow cores.

yaro330 · 2 days ago
> Apple Silicon is 2-4x more efficient than AMD and Intel CPUs during load while also having higher top end speed.

This is false; in cross-platform tasks it's on par with, if not worse than, the latest x86 arches. As others pointed out: 2.5h in gaming is about what you'd expect from a similarly built x86 machine.

They win due to lower idle and low-load consumption, which they achieve by integrating everything as much as possible - something that's basically impossible for AMD and Intel.

> The faster the CPU can finish a task, the faster it can go back to sleep, aka race to sleep.

May have been true when CPU manufacturers left a ton of headroom on the V/F curve, but not really true anymore. A Zen 4 core's power draw shoots up sharply past 4.6 GHz and nearly triples when you approach 5.5 GHz (compared to 4.6); are you gonna complete the task 3 times faster at 5.5 GHz?

jandrewrogers · 2 days ago
> Apple Silicon is 2-4x more efficient than AMD and Intel CPUs during load while also having higher top end speed.

This is not true. For high-throughput server software x86 is significantly more efficient than Apple Silicon. Apple Silicon optimizes for idle states and x86 optimizes for throughput, which assumes very different use cases. One of the challenges for using x86 in laptops is that the microarchitectures are server-optimized at their heart.

ARM in general does not have the top-end performance of x86 if you are doing any kind of performance engineering. I don't think that is controversial. I'd still much rather have Apple Silicon in my laptop.

rollcat · 2 days ago
> It's a fairly common issue on Linux to be missing hardware acceleration, especially for video decoding. I've had to enable gpu video decoding on my fw16 and haven't noticed the fans on youtube.

I've worked in video delivery for quite a while.

If I were to write the law, decision-makers wilfully forcing software video decoding where hardware is available would be made to sit on these CPUs with their bare buttocks. If that sounds inhumane, then yes, this is the harm they're bringing upon their users, and maybe it's time to stop turning the other cheek.

throwawaylaptop · 2 days ago
I run Linux Mint Mate on a 10 year old laptop. Everything works fine, but watching YouTube makes my wireless USB dongle mouse stutter a LOT. Basically if CPU usage goes up, mouse goes to hell.

Are you telling me that for some reason it's not using any hardware acceleration available while watching YouTube? How do I fix it?

throwup238 · 2 days ago
> All that to say: M1 is pretty fast, but the reason the battery life is better has to do with everything other than the CPU cores. That's what AMD and Intel are missing.

A good demonstration is the Android kernel. By far the biggest difference between it and the stock Linux kernel is power management. Many subsystems down to the process scheduler are modified and tuned to improve battery life.

qcnguy · 2 days ago
And the more relevant case for laptops is macOS, which is heavily optimized for battery life and power draw in ways that Linux just isn't, neither is Windows. A lot of the problems here can't actually be fixed by intel, amd, or anyone designing x86 laptops because getting that level of efficiency requires the ability to strongly lead the app developer community. It also requires highly competent operating system developers focusing on the issue for a very long time, and being able to co-design the operating system, firmware and hardware together. Microsoft barely cares about Windows anymore, the Linux guys only care about servers since forever, and that leaves Apple alone in the market. I doubt anything will change anytime soon.
stuaxo · 2 days ago
It's a shame they are so bad at upstreaming stuff, and run on older kernels (which in turn makes upstreaming harder).

prmoustache · 2 days ago
> It's a fairly common issue on Linux to be missing hardware acceleration, especially for video decoding.

To be fair, usually Linux itself has hardware acceleration available, but the browser vendors tend to disable GPU rendering except on controlled/known perfectly-working combinations of OS/hardware/drivers, and they do much less testing on Linux. In most cases you can force-enable GPU rendering in about:config, try it out yourself, and leave it on unless you get recurring crashes.
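
As a rough illustration (the pref and flag names below are assumptions that vary by browser version, driver and distro, so treat this as a starting point rather than a recipe):

  # Firefox: toggle these in about:config, then check about:support -> Media
  media.ffmpeg.vaapi.enabled = true
  media.hardware-video-decoding.force-enabled = true

  # Chromium/Chrome: the VA-API feature flag has been renamed across versions
  chromium --enable-features=VaapiVideoDecodeLinuxGL

  # Check what the VA-API driver actually advertises for your GPU
  vainfo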

deaddodo · 2 days ago
The only browser I’ve ever had issues with enabling video acceleration on Linux is Firefox.

All the Blink-based ones just work as long the proper libraries are installed and said libraries properly detect hardware support.

mayama · 2 days ago
I disable turbo boost on the CPU on Linux. Fans rarely start on the laptop and the system is generally cool. Even working on development and compilation I rarely need the extra perf. On my 10-year-old laptop I also cap the max clock to 95% to stop the fans from always starting. YMMV.
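
For reference, a minimal sketch of what that looks like from the shell (exact sysfs paths depend on the cpufreq driver in use, so these are examples rather than universal commands):

  # Intel (intel_pstate driver): disable turbo boost
  echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo

  # AMD / acpi-cpufreq (and newer amd_pstate kernels): disable boost
  echo 0 | sudo tee /sys/devices/system/cpu/cpufreq/boost

  # Cap the maximum clock on all cores (cpupower ships with linux-tools)
  sudo cpupower frequency-set --max 3.5GHz
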
just6979 · 20 hours ago
This is a big reason. Apple tunes their devices to not push the extreme edges of the performance that is possible, so they don't fall off that cliff of inefficiency. Combined with a really great perf/watt, they can run them at "90%" and stay nice and cool and sipping power (relatively), while most Intel/AMD machines are allowed to push their parts to "110%" much more often, which might give them a leg up in raw performance (for some workloads), but runs into the gross inefficiencies of pushing the envelope so that marginal performance increase takes 2-3x more power.

If you manually go in and limit a modern Windows laptop's max performance to just under what the spec sheet indicates, it'll be fairly quiet and cool. In fact, most have a setting to do this, but it's rarely on by default because the manufacturers want to show off performance benchmarks. Of course, that's while also touting battery life that is not possible when in the mode that allows the best performance...

This doesn't cover other stupid battery life eaters like Modern Standby (it's still possible to disable it with registry tweaks! do it!), but if you don't need absolute max perf for renders or compiling or whatever, put your Windows or Linux laptop into "cool & quiet" mode and enjoy some decent extra battery.
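
If you want to do that by hand rather than through a vendor power slider, something like the following batch sketch is the commonly cited approach (the PlatformAoAcOverride tweak is the one that circulates online; Microsoft has reportedly removed support for it on some newer builds, and your firmware still has to support S3 sleep):

  REM Cap the processor at ~90% on AC and battery, which keeps most laptops out of boost
  powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMAX 90
  powercfg /setdcvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMAX 90
  powercfg /setactive SCHEME_CURRENT

  REM Disable Modern Standby (S0 Low Power Idle); reboot afterwards
  reg add HKLM\System\CurrentControlSet\Control\Power /v PlatformAoAcOverride /t REG_DWORD /d 0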

It would also be really interesting to see what Apple Silicon could do under some Extreme OverClocking fun with sub-zero cooling or such. It would require a firmware & OS that allows more tuning and tweaking, so it's not going to happen anytime soon, but it could actually be a nice brag for Apple if they did let it happen.

koala_man · 2 days ago
I once saw a high resolution CPU graph of a video playing in Safari. It was completely dead except for a blip every 1/30th of a second.

Incredible discipline. The Chrome graph in comparison was a mess.

novok · 2 days ago
The Safari team explicitly targets performance. I just wish they weren't so bad about extensions and adblock, and I'd use it as my daily driver. But those paper cuts make me go back to Chromium browsers all the time.
lenkite · 2 days ago
Hell, Apple CPUs are even optimized for Apple software reference-counting calls like retain/release on objects. It seems if you want optimal performance and power efficiency, you need to own both hardware and software.

Looks like general purpose CPUs are on the losing train.

Maybe Intel should invent desktop+mobile OS and design bespoke chips for those.

NobodyNada · 2 days ago
> Apple CPU's are even optimized for Apple software GC calls like Retain/Release objects.

I assume this is referring to the tweet from the launch of the M1 showing off that retaining and releasing an NSObject is like 3x faster. That's more of a general case of the ARM ISA being a better fit for modern software than x86, not some specific optimization for Apple's software.

x86 was designed long before desktops had multi-core processors and out-of-order execution, so for backwards compatibility reasons the architecture severely restricts how the processor is allowed to reorder memory operations. ARM was designed later, and requires software to explicitly request synchronization of memory operations where it's needed, which is much more performant and a closer match for the expectations of modern software, particularly post-C/C++11 (which have a weak memory model at the language level).

Reference counting operations are simple atomic increments and decrements, and when your software uses these operations heavily (like Apple's does), it can benefit significantly from running on hardware with a weak memory model.
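
As a rough illustration, here is the textbook reference-counted-object idiom in C++ (a sketch, not Apple's actual retain/release implementation): the increment can use relaxed ordering, and only the decrement that may free the object needs release/acquire. On a weakly ordered ISA like ARM the relaxed increment can compile to a bare atomic add, while on x86 every locked read-modify-write also acts as a full fence.

  #include <atomic>

  class RefCounted {
  public:
      void retain() {
          // A pure increment needs no ordering with surrounding memory
          // operations, so relaxed is enough. ARM can emit a plain atomic
          // add here; x86's locked add always implies a full barrier.
          refcount_.fetch_add(1, std::memory_order_relaxed);
      }

      void release() {
          // The decrement that drops the last reference must publish all
          // prior writes before the object is destroyed: release on the
          // decrement, then an acquire fence before deletion.
          if (refcount_.fetch_sub(1, std::memory_order_release) == 1) {
              std::atomic_thread_fence(std::memory_order_acquire);
              delete this;
          }
      }

      virtual ~RefCounted() = default;

  private:
      std::atomic<long> refcount_{1};
  };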

aurareturn · 2 days ago

  It seems if you want optimal performance and power efficiency, you need to own both hardware and software.
Does Apple optimize the OS for its chips and vice versa? Yes. However, Apple Silicon hardware is just that good and that far ahead of x86.

Here's an M4 Max running macOS running Parallels running Windows when compared to the fastest AMD laptop chip: https://browser.geekbench.com/v6/cpu/compare/13494385?baseli...

M4 Max is still faster even with 14 out of 16 possible cores being used. You can't chalk that up to optimizations anymore because Windows has no Apple Silicon optimizations.

sabdaramadhan · 4 hours ago
> Maybe Intel should invent desktop+mobile OS and design bespoke chips for those.

Intel is busy fixing up their shit after what happened with their 13th & 14th gen CPUs. Imagine them making an OS, call it IntelOS, where the only thing you can run it on is an Intel CPU.

davsti4 · 2 days ago
> Maybe Intel should invent desktop+mobile OS and design bespoke chips for those.

Or, contribute efficiency updates to popular open projects like firefox, chromium, etc...

lelanthran · 2 days ago
> Maybe Intel should invent desktop+mobile OS and design bespoke chips for those.

Wouldn't it be easier for Intel to heavily modify the Linux kernel instead of writing their own stack?

They could even go as far as writing the sleep utilities for laptops, or even their own window manager to take advantage of the specific mods in the ISA?

pzo · 2 days ago
> most of which comes down to using the CPU as little as possible.

At least on mobile platforms, Apple advocates the other way with race to sleep - do the calculation as fast as you can with powerful cores so that the whole chip can go back to sleep earlier and take naps more often.

creshal · 2 days ago
Intel pushed the same idea under the name HUGI (Hurry Up and Go Idle) about 15 years ago, when ultrabooks were the new hot thing.

But when Apple says it, software devs actually listen.

ben-schaaf · a day ago
Race to sleep is all about using the CPU as little as possible. Given that modern AMD chips are faster than the Apple M1, this clearly does not account for the disparity in battery life.
nikanj · 2 days ago
And then Microsoft adds an animated news tracker to the left corner of the start bar, making sure the cpu never gets to idle.
mrtksn · 2 days ago
Which also should mean that using that M1 machine with Linux will have Intel/AMD like experience, not the M1 with macOS experience.
ben-schaaf · a day ago
Yes and no. The optimizations made for battery life are a combination of software and hardware. You'll get bad battery life on an M1 with Linux when watching youtube without hardware acceleration, but if you're just idling (and if Linux idles properly) then it should be similar to macOS.
whatevaa · 2 days ago
Turning down the settings will get you a worse experience, especially if you turn them down so far that the CPU and GPU are "mostly idle". Not comparable.
sys_64738 · 2 days ago
Sounds like death by (2^10 - 24) cuts for the x86 architecture.
ToucanLoucan · 2 days ago
I honestly don't see myself ever leaving Macbooks at this point. It's the whole package: the battery life is insane, I've literally never had a dead laptop when I needed it no matter what I'm doing or where I'm at; it runs circles around every other computer I own, save for my beastly gaming PC; the stability and consistency of MacOS, and the underlying unix arch for a lot of tooling, all the way down to the build quality being damn near flawless save for the annoying lack of ports (though increasingly, I find myself needing ports less and less).

Like, would I prefer an older-style Macbook overall, with an integrated card reader, HDMI port, ethernet jack, all that? Yeah, sure. But to get that now I have to go to a PC laptop and there's so many compromises there. The battery life isn't even in the same zip code as a Mac, they're much heavier, the chips run hot even just doing web browsing let alone any actual work, and they CREAK. Like my god I don't remember the last time I had a Windows laptop open and it wasn't making all manner of creaks and groans and squeaks.

The last one would be solved I guess if you went for something super high end, or at least I hope it would be, but I dunno if I'm dropping $3k+ either way I'd just as soon stay with the Macbook.

AlexandrB · 2 days ago
> Like, would I prefer an older-style Macbook overall, with an integrated card reader, HDMI port, ethernet jack, all that? Yeah, sure.

Modern MacBook pros have 2/3 (card reader and HDMI port), and they brought back my beloved MagSafe charging.

solardev · 2 days ago
Even the high end ones (Razers, Asus, Surface Books, Lenovos) are mere lookalikes and don't run anywhere as well as the MacBooks. They're hot and heavy and loud and full of driver issues and discrete graphics switching headaches and of course the endless ads and AI spam of modern Windows. No comparison at all...
maxsilver · 2 days ago
> Why haven’t AMD/Intel been able to catch up? Is x86 just not able to keep up with the ARM architecture? When can we expect a x86 laptop chip to match the M1 in efficiency/thermals?!

AMD kind of has: the "Max 395+" is (within a 5% margin or so) pretty close to the M4 Pro on both performance and energy use. (It's in the 'Framework Desktop', for example, but not in their laptop lineup yet.)

AMD/Intel hasn't surpassed Apple yet (there's no answer for the M4 Max / M3 Ultra, without exploding the energy use on the AMD/Intel side), but AMD does at least have a comparable and competitive offering.

hajile · 2 days ago
M4 Pro was a massive step back in perf/watt over M3 Pro. To my knowledge, there aren't any M4 die shots around which has led to speculation that yields on M4 Max were predicted to be really bad, so they made the M4 Pro into a binned M4 Max, but that comes with tradeoffs like much worse leakage current.

That said Hardware Canucks did a review of the 395 in a mobile form factor (Asus ROG Flow F13) with TDP at 70w (lower than the max 120w TDP you see in desktop reviews). This lower-than-max TDP also gets you closer to the perf/watt sweet spot.

The M4 Pro scores slightly higher in Cinebench R24 despite being 10P+4E vs a full 16 P-cores on the 395, all while using something like 30% less power. The M4 Pro scores nearly 35% higher in the single-core R24 benchmark too. 395 GPU performance is comparable to the M4 Pro in productivity software. More specifically, they trade blows based on which is more optimized in a particular app, but AMD GPUs have way more optimizations in general, and gaming should be much better with an x86 CPU + AMD GPU vs Rosetta 2 + GPU translation layers + Wine/Crossover.

The M4 Pro gets around 50% better battery life for tasks like web browsing when accounting for battery size differences, and more than double the battery life per watt-hour when doing something simple like playing a video. Battery life under full load is a bit better for the 395, but doing the math, this definitely involves the 395 throttling significantly down from its 70W TDP.

Luker88 · 20 hours ago
I see virtually nobody pointing out that Apple is consistently using fab nodes that are more advanced than Intel/AMD's.

The rule of thumb is roughly a 15% advantage there to distribute between power and performance.

Catching up while remaining on older nodes is no joke.

ksec · 19 hours ago
> using fab nodes that are more advanced than intel/AMD.

I am usually the one doing it on HN. But it is still not common knowledge after we went from M1 to M4.

And this thread's S/N ratio is already 10 times better than in most other Apple Silicon discussions.

AMD's Max 395+ is N4, a 5nm-class product.

The Apple M4 is N3, a 3nm-class product.

remify · 2 days ago
I've got an AMD Ryzen 9 365 processor in my new laptop and I really like it. Huge battery life and good performance when needed; it's comparable to the M3 version (not the Max).
anthonypasq · 2 days ago
I just recently was trying to buy a laptop and was looking at that chip, but like you said, it's not available in anything except the Framework Desktop and a weird tablet that's 2.5x as expensive as a MacBook. It's competitive on paper, but still completely infeasible at the moment.
akvadrako · 2 days ago
There is only the HP ZBook Ultra G1a.

Some Chinese companies have also announced laptops with it coming out soon.

nodesocket · 2 days ago
There are a few mini PCs using the 395+. Check out the Beelink GTR9 Pro AMD Ryzen AI Max+ 395 and the GMKtec EVO-X2.
bilbo0s · 2 days ago
Also, you don't realize until you try them out that other issues make running models on the AMD chip ridiculously slow compared to running the same models on an M4. Some of that's software. But a lot is how the chip/memory/neural etc are organized.

Right now, AMD is not even in the ballpark.

In fact, the real kick in the 'nads was my fully kitted M4 laptop outperforming the AMD. I just gave up.

I'll keep checking in with AMD and Intel every generation though. It's gotta change at some point.

porphyra · 2 days ago
The AI just made it to their laptop lineup today. The Framework 16 has either the AMD Ryzen™ AI 9 HX 370 or the AI 7 350.

https://frame.work/laptop16?tab=whats-new

WinstonSmith84 · 2 days ago
you can find that processor in the 14" HP Zbook Ultra G1A (which is also Ubuntu certified). There is also the Asus Z13, though I'm not certain it's working well with Linux
jeffbee · 2 days ago
This is not even a remotely accurate characterization of the relative performance of the Ryzen AI Max+ 395 and the Apple M4. I have both an expensive implementation of the former and the $499 version of the latter, and my M4 Mac mini beats the Ryzen by 80% or more in many single-threaded workloads, like browser benchmarks.
fafhnir · 2 days ago
I have the same experience here with my MacBook Air M1 from 2020 with 16GB RAM and 512GB SSD. After three years, I upgraded to a MacBook Pro with M3 Pro, 36GB of RAM, and 2TB of storage. I use this as my main machine with 2 displays attached via a TB4 dock.

I work in IT and all the new machines for our company come across my desk to check, and I've observed the exact same points as the OP.

The new machines are either fast and loud and hot and with poor battery life, or they are slow and "warm" and have moderate battery life.

But I have yet to see a business laptop, ARM, AMD, or Intel, that can even compete with the M1 Air, let alone the M3 Pro! Not to mention all the issues with crappy Lenovo docks, etc.

It doesn't matter if I install Linux or Windows. The funny part is that some of my colleagues have ordered a MacBook Air or Pro and run their Windows or Linux in a virtual machine via Parallels.

Think about it: Windows 11 or Linux in a VM is even faster, snappier, quieter, and has even longer battery life than these systems running natively on a business machine from Lenovo, HP, or Dell.

Well, your mileage may vary, but IMHO there is no alternative to a Mac nowadays, even if you want to use Linux or Windows.

devjab · 2 days ago
I'm still using my MacBook Air M1 with 8GB of RAM as my personal workhorse. It runs Docker Desktop and VS Code better than my T14-whatever Windows machine with 32GB of RAM. But that is Windows, and it has a bunch of enterprise stuff running. I assume it would work better with Linux, or even Windows without whatever our IT does to control it.

With GeForce Now I can even play games on it, though I wouldn't recommend it for any serious gamers.

zikduruqe · 2 days ago
Ha. Same here. My personal MBA M1/8GB just chugs along with whatever I need it to do. I have a T480 32GB linux machine at home that I love, but my M1 just does what I need it to do.

And at the shop we are doing technology refreshes for the whole dev team upgrading them to M4s. I was asked if I wanted to upgrade my M1 Pro to an M4, and I said no. Mainly because I don't want to have to move my tooling over to a new machine, but I am not bottlenecked by anything on my current M1.

commakozzi · 2 days ago
I'm using GeForce Now on my M1 Air and it's wonderful. Yeah, I'll play competitive multiplayer on dedicated hardware (primarily Xbox Series X because I refuse to own a Windows machine and I'm too lazy for Linux right now -- also, I'm hoping against hope for a real Steam console), but GeForce Now has been wonderful for other things: survival, crafting, MMOs, single-player RPGs, Cyberpunk, Battlefield, pretty much anything where you can deal with a few milliseconds of input latency. To be honest, what they're doing here is wizardry to my dumb brain. The additional latency, to me, just feels like the amount of latency you will get from a controller on an Xbox. However, if you play something that requires very quick input (competitive FPS, for example) AND you're connected to servers through the game with anywhere from 5ms to 100ms+ latency (playing on EU servers, for example), that added latency just becomes too much. I'll say this though: I've played Warzone solo on GeForce Now, connected to a local server with no more than 5ms latency via that connection, and it felt pretty decent. Definitely playable, and I think I got 2nd or 1st in a few of those games, but as soon as it gets over like 15-20ms, you're cooked.
alt227 · 2 days ago
> there is no alternative to a Mac nowadays

I need to point this out all the time these days it seems, but this opinion is only valid if all you use is a laptop and all you care about is single-core performance.

The computing world is far bigger than just laptops.

Big music/3D design/video editing production suites etc. still benefit much more from having workstation PCs with higher PCIe bandwidth, more lanes for multiple SSDs and GPUs, and high-end multicore performance which cannot be matched by Apple Silicon.

shermantanktop · 2 days ago
Doesn’t Apple have significant market share for pro music and video editing?

For studio movies, render farms are usually Linux but I think many workstation tasks are done on Apple machines. Or is that no longer true?

diggan · 2 days ago
> Well, your mileage may vary, but IMHO there is no alternative to a Mac nowadays, even if you want to use Linux or Windows.

I guess I'd slightly change that to "MacBook" or similar, as Apple are top-in-class when it comes to laptops, but for desktop they seem to not even be in the fight anymore, unless reducing power consumption is your top concern. But if you're aiming for "performance per money spent", there isn't really any alternative to non-Apple hardware.

I do agree they do the best hardware in terms of feeling though, which is important for laptops. But computing is so much larger than laptops, especially if you're always working in the same place everyday (like me).

int_19h · 2 days ago
Mac Studio is pretty good on everything except raw GPU speed. Which depending on your use cases may be completely irrelevant.

BirAdam · 2 days ago
First, Apple did an excellent job optimizing their software stack for their hardware. This is something that few companies have the ability to do as they target a wide array of hardware. This is even more impressive given the scale of Apple's hardware. The same kernel runs on a Watch and a Mac Studio.

Second, the x86 platform has a lot of legacy, and each operation on x86 is translated from an x86 instruction into RISC-like micro-ops. This is an inherent penalty that Apple doesn't have to pay, and it is also why Rosetta 2 can achieve "near native" x86 performance; both platforms translate the x86 instructions.

Third, there are some architectural differences even if the instruction decoding steps are removed from the discussion. Apple Silicon has a huge out-of-order buffer, and it's 8-wide vs x86's 4-wide. From there, the actual logic is different, the design is different, and the packaging is different. AMD's Ryzen AI Max 300 series does get close to Apple by using many of the same techniques, like unified memory and tossing everything onto the package; where it does lose is due to all of the other differences.

In the end, if people want crazy efficiency Apple is a great answer and delivers solid performance. If people want the absolute highest performance, then something like Ryzen Threadripper, EPYC, or even the higher-end consumer AMD chips are great choices.

zipityzi · 2 days ago
This seems mostly misinformed.

1) Apple Silicon outperforms all laptop CPUs in the same power envelope on 1T on industry-standard tests: it's not predominantly due to "optimizing their software stack". SPECint, SPECfp, Geekbench, Cinebench, etc. all show major improvements.

2) x86 also heavily relies on micro-ops to greatly improve performance. This is not a "penalty" in any sense.

3) x86 is now six-wide, eight-wide, or nine-wide (with asterisks) for decode width on all major Intel & AMD cores. The myth of x86 being stuck on four-wide has been long disproven.

4) Large buffers, L1, L2, L3, caches, etc. are not exclusive to any CPU microarchitecture. Anyone can increase them—the question is, how much does your core benefit from larger cache features?

5) Ryzen AI Max 300 (Strix Halo) gets nowhere near Apple on 1T perf / W and still loses on 1T perf. Strix Halo uses slower CPUs versus the beastly 9950X below:

  Fanless iPad M4 P-core   SPEC2017 int, fp, geomean: 10.61, 15.58, 12.85
  AMD 9950X (Zen 5)        SPEC2017 int, fp, geomean: 10.14, 15.18, 12.41
  Intel 285K (Lion Cove)   SPEC2017 int, fp, geomean:  9.81, 12.44, 11.05

Source: https://youtu.be/2jEdpCMD5E8?t=185, https://youtu.be/ymoiWv9BF7Q?t=670

The 9950X & 285K eat 20W+ per core for that 1T perf; the M4 uses ~7W. Apple has a node advantage, but no node on Earth gives you 50% less power.

There is no contest.

BirAdam · 2 days ago
1. Apple’s optimizations are one point in their favor. XNU is good, and Apple’s memory management is excellent.

2. X86 micro-ops vs ARM decode are not equivalent. X86’s variable length instructions make the whole process far more complicated than it is on something like ARM. This is a penalty due to legacy design.

3. The OP was talking about M1. AFAIK, M4 is now 10-wide, and most x86 is 6-wide (Zen 5 does some weird stuff). x86 was 4-wide at the time of M1's introduction.

4. M1 has over 600 reorder buffer registers… it’s significantly larger than competitors.

5. Close relative to x86 competitors.

phkahler · 2 days ago
>> x86 is now six-wide, eight-wide, or nine-wide (with asterisks) for decode width on all major Intel & AMD cores. The myth of x86 being stuck on four-wide has been long disproven.

From the AMD side it was 4 wide until Zen 5. And now it's still 4 wide, but there is a separate 4-wide decoder for each thread. The micro-op cache can deliver a lot of pre-decoded instructions so the issue width is (I dunno) wider but the decode width is still 4.

hajile · 2 days ago
2. uops are a cope that costs. That uop cache and cache controller use tons of power. ARM designs with 32-bit support had a uop cache, but they cut it when going to 64-bit-only designs (look at ARM A715 vs A710), which dramatically reduced frontend size and power consumption.

3. The claim was never "stuck on 4-wide", but that going wider would incur significant penalties, which is the case. AMD uses two 4-wide decoders and pays a big penalty in complexity trying to keep them coherent and occupied. Intel went 6-wide for Golden Cove, which is infamous for being the largest and most power-hungry x86 design in a couple of decades. This seems to prove the 4-wide people right.

4. This is only partially true. The ISA impacts which designs make sense which then impacts cache size. uop cache can affect L1 I-cache size. Page size and cache line size also affect L1 cache sizes. Target clockspeeds and cache latency also affect which cache sizes are viable.

naasking · 2 days ago
> 2) x86 also heavily relies on micro-ops to greatly improve performance. This is not a "penalty" in any sense.

It's an energy penalty, even if wall clock time improves.

Der_Einzige · 2 days ago
A whole lot of bluster in this thread, but finally someone who's actually doing their research chimes in. Thank you for giving me a place to start in understanding why this is such a deep mystery!
JJJollyjim · 2 days ago
Apple CPUs do decode instructions into micro-ops.

https://dougallj.github.io/applecpu/firestorm.html

Rohansi · a day ago
Pretty much all CPU architectures these days do.
jcranmer · 2 days ago
> Second, the x86 platform has a lot of legacy, and each operation on x86 is translated from an x86 instruction into RISC-like micro-ops. This is an inherent penalty that Apple doesn't have pay, and it is also why Rosetta 2 can achieve "near native" x86 performance; both platform translate the x86 instructions.

Can we please stop with this myth? Every superscalar processor is doing the exact same thing, converting the ISA into the µops (which may involve fission or fusion) that are actually serviced by the execution units. It doesn't matter if the ISA is x86 or ARM or RISC-V--it's a feature of the superscalar architecture, not the ISA itself.

The only reason that this canard keeps coming out is because the RISC advocates thought that superscalar was impossible to implement for a CISC architecture and x86 proved them wrong, and so instead they pretend that it's only because x86 somehow cheats and converts itself to RISC internally.

adwn · 2 days ago
> they pretend that it's only because x86 somehow cheats and converts itself to RISC internally.

Which hasn't even been the case anymore for several years now. Some µOPs in modern x86-64 cores combine memory access with arithmetic operations, making them decidedly non-RISC.

rerdavies · a day ago
The "RISC" thing is an experiment that failed in the 90s. There's nothing particularly RISCy about the ARM instruction set. It is a pretty darned complicated instruction set.

ARM processors ALSO decode instructions to micro-ops. And Apple chips do too. Pretty much a draw. The first stage in the execution pipelines of all modern processors is a decode stage.

stego-tech · 2 days ago
There’s a number of reasons, all of which in concert create the appearance of a performance gap between the two:

* Apple has had decades optimizing its software and hardware stacks to the demands of its majority users, whereas Intel and AMD have to optimize for a much broader scope of use cases.

* Apple was willing to throw out legacy support on a regular basis. Intel and AMD, by comparison, are still expected to run code written for DOS or specific extensions in major Enterprises, which adds to complexity and cost

* The “standard” of x86 (and demand for newly-bolted-on extensions) means effort into optimizations for efficiency or performance meet diminishing returns fairly quickly. The maturity of the platform also means the “easy” gains are long gone/already done, and so it’s a matter of edge cases and smaller tweaks rather than comprehensive redesigns.

* Software in x86 world is not optimized, broadly, because it doesn’t have to be. The demoscene shows what can be achieved in tight performance envelopes, but software companies have never had reason to optimize code or performance when next year has always promised more cores or more GHz.

It boils down to comparing two different products and asking why they can’t be the same. Apple’s hardware is purpose-built for its userbase, operating systems, and software; x86 is not, and never has been. Those of us who remember the 80s and 90s of SPARC/POWER/Itanium/etc recall that specialty designs often performed better than generalist ones in their specialties, but lacked compatibility as a result.

The Apple ARM vs Intel/AMD x86 is the same thing.

shermantanktop · 2 days ago
Intel chose and stuck with backcompat as a strategy. They could, tomorrow, split their designs into legacy hardware and modern hardware. They didn’t, but Apple has done breaking generational change many times.

Apple also has a particular advantage in owning the os and having the ability to force independent developers to upgrade their software, which make incompatible updates (including perf optimizations) possible.

spixy · 2 days ago
Intel also wanted to break backcompat and start fresh with Itanium but it failed.

PaulRobinson · 2 days ago
Fair enough, but Apple Silicon is not a specialist chip in the way a SPARC chip was. It's a general purpose SoC & SiP stack. There is nothing stopping Intel being able to invest in SoC & SiP and being able to maintain backward compatibility while providing much better power/performance for a mobile (including laptop and tablet), product strategy.

They could also just sit down with Microsoft and say "Right, we're going to go in an entirely different direction, and provide you with something absolutely mind-blowing, but we're going to have to do software emulation for backward compatibility and that will suck for a while until things get recompiled, or it'll suck forever if they never do".

Apple did this twice in the last 20 years - once on the move from PowerPC chips to Intel, and again from Intel to Apple Silicon.

If Microsoft and enough large OEMs (Dell, etc.), thought there was enough juice in the new proposed architecture to cause a major redevelopment of everything from mobile to data centre level compute, they'd line right up, because they know that if you can significantly reduce the amount of power consumption while smashing benchmarks, there are going to long, long wait times for that hardware and software, and its pay day for everyone.

We now know so much more about processor design, instruction set and compiler design than we did when the x86 was shaping up, it seems obvious to me that:

1. RISC is a proven entity worth investing in

2. SoC & SiP is a proven entity worth investing in

3. Customers love better power/performance curves at every level from the device in their pocket to the racks in data centres

4. Intel is in real trouble if they are seriously considering the US government owning actual equity, albeit proposed as non-voting, non-controlling

Intel can keep the x86 line around if they want, but their R&D needs to be chasing where the market is heading - and fast - while bringing the rest of the chain along with them.

alt227 · 2 days ago
> Right, we're going to go in an entirely different direction, and provide you with something absolutely mind-blowing, but we're going to have to do software emulation for backward compatibility and that will suck for a while until things get recompiled, or it'll suck forever if they never do

For an example of why this doesn't work, see 'Intel Itanium'.

lokar · 2 days ago
It’s a bit unfair to say apple threw out backwards compatibility.

Each time they had a pretty good emulation story to keep most stuff (certainly popular stuff) working through a multi-year transition period.

IMO, this is better than carrying around 40 years of cruft.

layer8 · 2 days ago
This was absolutely not the case for 32-bit iOS apps, which they dropped from one year to the next like a hot potato. I still mourn the loss of some of the apps.
alt227 · 2 days ago
Apple purposely make it so after 3 new versions of the OS you cannot upgrade the OS on the hardware any further. This in turn means you cannot install new software as the applications themselves require the newer versions of the OS. It has been this way on apple hardware for decades, and has laid the foundation of not ever needing to provide backwards compatibility for more than a few years as well as forcing new hardware purchases. The 'emulation story' only needs to work for a couple of generations, then it itself can be sunsetted and is not expected to be backwards compatible with newer OSes. It is also the reason it is pretty much impossible to upgrade CPUs in Apple machines.

> IMO, this is better then carrying around 40 years of cruft.

Backwards compatibility is such a strong point; it is why Windows survives even though it has become a bloated, ad-riddled mess. You can argue which is better, but that seriously depends on your requirements. If you have a business application coded 30 years ago on x86 that no developer in your company understands any more, then backwards compatibility is king. On the other end of the spectrum, if you are happy to be purchasing new software subscriptions constantly and having bleeding-edge hardware is a must for you, then backwards compatibility probably isn't required.

olejorgenb · 2 days ago
> Apple has had decades optimizing its software and hardware stacks to the demands of its majority users, whereas Intel and AMD have to optimize for a much broader scope of use cases.

But as you mention - they've changed the underlying architecture multiple times, which surely would render a large part of prior optimizations obsolete?

> Software in x86 world is not optimized, broadly, because it doesn’t have to be.

Do ARM software need optimization more than x86?

rerdavies · a day ago
> Do ARM software need optimization more than x86?

If Windows-World software developers could one day announce that they will only support Intel Gen 14 or later (and not bother with AMD at all), and only support the latest and greatest NVidia GPUs (and only GPUs that cost $900 or more), I'm pretty sure they would optimize their code differently, and would sometimes get dramatic performance improvements.

It's not so much that ARM needs optimizations more, but that x86 software can't practically be broadly optimized.

brookst · 2 days ago
That sure sounds more like the reality of a performance gap than the appearance of one.
jayd16 · 2 days ago
The broader audience/apples to oranges bit is fair. We're not choosing apple hardware for server. x64 is still dominant on the server with some cheap custom arm chips as an option, no?

novok · 2 days ago
I don't think backcompat is that big of a deal, since old DOS programs don't take much compute power to run anyway, and Apple has shown layers like Rosetta work fine.
Eric_WVGG · 2 days ago
> Software in x86 world is not optimized, broadly, because it doesn’t have to be. The demoscene shows what can be achieved in tight performance envelopes, but software companies have never had reason to optimize code or performance when next year has always promised more cores or more GHz.

This is why I get so livid regarding Electron apps on the Mac.

I’m never surprised by developer-centric apps like Docker Desktop — those inclined to work on highly technical apps tend not to care much about UX — but to see billion-dollar teams like Slack and 1Password indulge in this slop is so disheartening.

jayd16 · 2 days ago
I generally agree but what's Qualcomm's excuse?
rerdavies · a day ago
Samsung seems to have some ARM processors that compete favorably with M-class processors.
gettingoverit · 2 days ago
> might be my Linux setup being inefficient

Given that videos spin up the fans, there is actually a problem with your GPU setup on Linux, and I'd expect an improvement if you managed to fix it.

Another thing is that Chrome on Linux tends to consume exorbitant amount of power with all the background processes, inefficient rendering and disk IO, so updating it to one of the latest versions and enabling "memory saving" might help a lot.

Switching to another scheduler, reducing interrupt rate etc. probably help too.

Linux on my current laptop reduced battery time x12 compared to Windows, and a bunch of optimizations like that managed to improve the situation to something like x6, i.e. it's still very bad.

> Is x86 just not able to keep up with the ARM architecture?

Yes and no. x86 is inherently inefficient, and most of the progress over last two decades was about offloading computations to some more advanced and efficient coprocessors. That's how we got GPUs, DMA on M.2 and Ethernet controllers.

That said, it's unlikely that x86 specifically is what wastes your battery. I would rather blame Linux, suspect its CPU frequency/power drivers are misbehaving on some CPUs, and unfortunately have no idea how to fix it.

fnle · 2 days ago
> x86 is inherently inefficient

Nothing in x86 prohibits an implementation from being as efficient as what you could do with ARM instead.

x86 and ARM have historically served very different markets. I think the pattern of efficiency differences of past implementations is better explained by market forces rather than ISA specifics.

akho · 2 days ago
x12 and x6 do not seem plausible. Something is very wrong.
loudmax · 2 days ago
These figures are very plausible. Most Linux distros are terribly inefficient by default.

Linux can actually meet or even exceed Windows' power efficiency, at least at some tasks, but it takes a lot of work to get there. I'd start with powertop and TLP.

As usual, the Arch wiki is a good place to find more information: https://wiki.archlinux.org/title/Power_management
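
As a rough starting point (package names vary by distro, and TLP's documentation recommends not letting powertop's auto-tune and TLP fight over the same tunables):

  sudo powertop                     # interactive view of what is drawing power
  sudo powertop --auto-tune         # apply powertop's suggested tunables for this boot
  sudo systemctl enable --now tlp   # or let TLP apply persistent power-saving defaults instead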

gettingoverit · 2 days ago
My CPU is at over 5GHz, 1% load and 70C at the moment. That's in a "power-saving mode".

If nothing were wrong, it'd be at something like 1.5GHz with most of the cores unpowered.
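
For anyone debugging the same symptom, a quick sketch of where to look (assuming the standard cpufreq sysfs interface; the EPP file only exists with intel_pstate/amd_pstate in active mode):

  cpupower frequency-info        # active driver, governor and current frequency policy
  grep "cpu MHz" /proc/cpuinfo   # instantaneous per-core clocks
  cat /sys/devices/system/cpu/cpufreq/policy0/energy_performance_preference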

DuckConference · 2 days ago
They're big, expensive chips with a focus on power efficiency. AMD and Intel's chips that are on the big and expensive side tend toward being optimized for higher power ranges, so they don't compete well on efficiency, while their more power efficient chips tend toward being optimized for size/cost.

If you're willing to spend a bunch of die area (which directly translates into cost) you can get good numbers on the other two legs of the Power-Performance-Area triangle. The issue is that the market position of Apple's competitors is such that it doesn't make as much sense for them to make such big and expensive chips (particularly CPU cores) in a mobile-friendly power envelope.

aurareturn · 2 days ago
Per core, Apple’s Performance cores are no bigger than AMD’s Zen cores. So it’s a myth that they’re only fast and efficient because they are big.

What makes Apple Silicon chips big is that they bolt a fast GPU onto them. If you include the die of a discrete GPU with an x86 chip, it'd be the same size or bigger than an M series.

You can look at Intel’s Lunar Lake as an example where it’s physically bigger than an M4 but slower in CPU, GPU, NPU and has way worse efficiency.

Another comparison is AMD Strix Halo. Despite being ~1.5x bigger than the M4 Pro, it has worse efficiency, ST performance, and GPU performance. It does have slightly more MT.

chasil · 2 days ago
Is it not true that the instruction decoder is always active on x86, and is quite complex?

Such a decoder is vastly less sophisticated with AArch64.

That is one obvious architectural drawback for power efficiency: a legacy instruction set with variable word length, two FPUs (x87 and SSE), 16-bit compatibility with segmented memory, and hundreds of otherwise unused opcodes.

How much legacy must Apple implement? Non-kernel AArch32 and Thumb2?

Edit: think about it... R4000 was the first 64-bit MIPS in 1991. AMD64 was introduced in 2000.

AArch64 emerged in 2011, and in taking their time, the designers avoided the mistakes made by others.

Fluorescence · 2 days ago
> Despite being ~1.5x bigger than the M4 Pro

Where are you getting M4 die sizes from?

It would hardly be surprising, given the Max+ 395 has more (and, on average, better) cores, fabbed on 5nm unlike the M4's 3nm. Die size is mostly GPU though.

Looking at some benchmarks:

> slightly more MT.

AMD's multicore passmark score is more than 40% higher.

https://www.cpubenchmark.net/compare/6345vs6403/Apple-M4-Pro...

> worse efficiency

The AMD is an older fab process and does not have P/E cores. What are you measuring?

> worse ST performance

The P/E design choice gives different trade-offs e.g. AMD has much higher average single core perf.

> worse GPU performance

The AMD GPU:

14.8 TFLOPS vs. M4 Pro 9.2 TFLOPS.

19% higher 3D Mark

34% higher GeekBench 6 OpenCL

Although a much crappier Blender score. I wonder what that's about.

https://nanoreview.net/en/gpu-compare/radeon-8060s-vs-apple-...

al_borland · 2 days ago
I've been thinking a lot about getting something from Framework, as I like their ethos around repairability. However, I currently have an M1 Pro which works just fine, so I've been kicking the can down the road while worrying that it just won't be up to par with what I'm used to from Apple. Not just the processor, but everything. Even in the Intel Mac days, I ended up buying an Asus Zephyrus G14, which had nothing but glowing reviews from everyone. I hated it and sold it within 6 months. There is a level of polish that I haven't seen on any x86 laptop, which makes it really hard for me to venture outside of Apple's sandbox.
jillesvangurp · 2 days ago
I recently upgraded from an M1 mac book pro 15", which I was pretty happy with, to the M4 max pro 16". I've been extremely impressed with the new laptop. The key metric I use to judge performance is build speed for our main project. It's a thing I do a few dozen times per day. The M1 took about four minutes to run our integration tests. I should add that those tests run in parallel and make heavy use of docker. There are close to 300 integration tests and a few unit tests. Each of those hit the database, Redis, and Elasticsearch. The M4 Pro dropped that to 40 seconds. Each individual test might take a few seconds. It seems to be benefiting a lot from both the faster CPU with lots of cores and the increased amount of memory and memory bandwidth. Whatever it is, I'm seriously impressed with this machine. It costs a lot new but on a three year lease, it boils down to about 100 euros per month. Totally worth it for me. And I'm kind of kicking myself for not upgrading earlier.

Before the M1, I was stuck using an intel core i5 running arch linux. My intel mac managed to die months before the M1 came out. Let's just say that the M1 really made me appreciate how stupidly slow that intel hardware is. I was losing lots of time doing builds. The laptop would be unusable during those builds.

Life is too short for crappy hardware. From a software point of view, I could live with Linux but not with Windows. But the hardware is a show stopper currently. I need something that runs cool and yet does not compromise on performance. And all the rest (non-crappy trackpad, amazingly good screen, cool to the touch, good battery life, etc.). And manages to look good too. I'm not aware of any windows/linux laptop that does not heavily compromise on at least a few of those things. I'm pretty sure I can get a fast laptop. But it'd be hot and loud and have the unusable synaptics trackpad. And a mediocre screen. Etc. In short, I'd be missing my mac.

Apple is showing some confidence by designing a laptop that isn't even close to being cheap. This thing was well over 4K euros. Worth every penny. There aren't a lot of Intel/AMD laptops in that price class; too much penny-pinching happening in that world. People think nothing of buying a really expensive car to commute to work, but they'll skimp on the thing they use the whole day once they get there. That makes no sense whatsoever in my view.

al_borland · 2 days ago
The M4 was the first chip that tempted me to upgrade from the M1, which I think is the case for most people. At work, I’m at the mercy of the corporate lease. My personal Mac doesn’t get used in a way where I’ll see a major change, so I’m giving it a while longer.

I've actually been debating moving from the Pro to the Air. The M4 is about on par with the M1 Pro for a lot of things. But the Air isn't that much smaller, so I'd be making a lateral performance move while losing ports; I'm going to wait and see what the future holds.

zarzavat · 2 days ago
Considering the amount of engineering that goes into Apple's laptops, and compared to other professional tools, 4000 EUR is extremely cheap. Other tradespeople have to spend 10x more.
yesnomaybe · 2 days ago
I'm in the same boat. Still running an MBP M1 Pro 14". Luckily I bought it with 32GB in 2021 when it came out, so it can run all things Docker, similar to your setup. I recently ran a production-like workload as a real stress test; it was the first time I had the fan spinning constantly, but the machine was still responsive and a pleasure to use (and sit next to!) for a few hours.

I've been window shopping for a couple of months now and have test-run Linux, really liking the experience there (played with it on older Intel hardware). I am completely de-Appled software-wise, with the one exception of iMessage, because my kids use iPads. But that's really about it. So, I'm ready to jump.

But so far, all my research hasn't led to anything I'm convinced I wouldn't end up regretting. A desktop Ryzen 7700 or 9600X would probably suffice, but it would mean constantly switching machines, and I'm not sure I'm ready for that. All mobile non-Macs have significant downsides, and you typically can't even try before you buy, so you're relying on reviews. But everybody has a different tolerance for things like trackpad haptics, thermals, noise, screen quality, etc., so those reviews don't give enough confidence. I've had 13 Apple years so far. The first 5 were pleasant, the next 3 really sucked, but since Apple silicon I feel I've totally forgotten all the suffering in the non-Apple world and with those noisy, slow Intel Macs.

I think it has to boil down to serious reasons why the Apple hardware is not fit for one's purpose: better gaming, an extreme amount of storage, an insane amount of RAM, all while forgoing the value of "the perfect package" with its low power draw, low noise, etc. Something that does not make one regret the change. DHH has done it, and so have others, but he switched to the Framework Desktop with the AI Max, so it came with a change in lifestyle. He also games, which is another good reason to switch to Linux or dual boot (he mentioned Fortnite).

I don't have such reasons currently, unless we see hardware that is at least as fast and enjoyable as the M1 Pro or better. I tried Asahi, but the dual boot is quite cumbersome, and DP Alt Mode isn't there yet (and maybe never will be), so I gave up on that.

So I'll wait another year and see then. I hope I don't get my company to buy me an M4 Max (or Ultra, or whatever), as that would probably kill my desire to switch for another 10 years, I guess.

koiueo · 2 days ago
> There is a level of polish

Yeah, those glossy mirror-like displays, in which you see yourself much better than the displayed content, are polished really well.

crinkly · 2 days ago
Having used both types extensively: my Dell matte display diffuses reflections so badly that you can't see a damn thing. The one that replaced it was even worse.

I'll take the Apple display any day. It's bright enough to blast through any reflections.

spankibalt · 2 days ago
> "There is a level of polish that I haven’t seen on any x86 laptop, which makes it really hard for me to venture outside of Apple’s sandbox."

Hah, it's exactly the other way around for me; I can't stand Apple's hardware. But then again I never bought anything Asus... let alone gamer laptops.

nik736 · 2 days ago
What exactly is wrong with Apple hardware?
Zanfa · 2 days ago
Most manufacturers just don't give a shit. I had the exact same experience with a well-reviewed Acer laptop a while back; I ended up getting rid of it a few months in because of constant annoyances and replaced it with a MacBook Air that lasted for many years. A few years back, I got one of the popular Asus NUCs, which came without networking drivers installed. I'm guessing those were on the CD that came with it, which isn't particularly helpful on a PC without a CD drive. The same SKU shipped with a variety of networking hardware from different manufacturers, with no indication of which combination I had, so trial and error it was. Zero chance non-techy people would get either working on their own.
ziml77 · 2 days ago
My venture outside of MacBooks included a Dell XPS. It was supposed to be their high end, and that year's model was well reviewed by multiple sources... yet I returned it after about a week. The fan not only ran far too often, the sound it made was also atrocious. I have no clue if mine was defective or if all the reviewers are deaf to high frequencies. And the body was so flimsy that I'd grab the corner of the laptop to move it and end up triggering a mouse click.
Tade0 · 2 days ago
I had a 2020 Zephyrus G14 - also bought it largely because of the reviews.

For the first two years it was solid, but then weird stuff started happening, like the integrated GPU running at full throttle at all times and sleep mode meaning "high temperature and fans spinning to do exactly nothing" (that seems to be a Windows problem, because my work machine does the same).

Meanwhile the manufacturer, having released a new model, lost interest, so no firmware updates to address those issues.

I currently have the Framework 16 and I'm happy with it, but I wouldn't recommend it by default.

I, for one, bought it because I tend to damage things like screens and ports, and it also lets me have unusual arrangements like a left-handed numpad - not exactly mainstream requirements.

crinkly · 2 days ago
I suspect the majority of people who recommend particular x86 laptops have only had x86 laptops. There’s a lot of disparity in quality between brands and models.

Apple is just off the side somewhere else.