I watched the first three Mad Max films last night, in one sitting... While there is certainly an argument for absolute performance over time, it's becoming a lot more difficult to justify the power consumption of the current Intel (and, to a lesser degree, AMD) offerings, especially for sustained personal productivity and burst compilation tasks.
We cannot speak of performance without the per-watt quotient. Battery life is a real concern, and current Wintel laptops just don't compete.
Energy concerns are real (as depicted in the George Miller-directed films mentioned above), and it may be the case that the premium paid for Apple Silicon-based devices is actually a bargain.
The Ultra isn't in laptops. The Ultra in the Mac Pro starts at 7 grand; if you're going to pay a 4 grand premium, is $200 of extra power over its lifetime really that important?
I have an M2 Max, and the battery life is awe inspiring. The M2 Ultra doesn't best the competition for its use case.
Don't waste your money on the Pro; you can get an M2 Ultra in a Mac Studio for $4000. The only reason to buy the Pro is if you have a bunch of really weird PCIe cards[0].
As for the desktop use case... sure, you aren't going to care about the $200 of saved power draw, but not having a very loud and hot machine at your desk has to count for something, right?
[0] GPUs won't work, their memory access is locked out in hardware and Apple removed the MPX connectors that powered the Intel GPU modules.
It is not the dollars per watt that matters to many users.
What matters more than people may realize is battery runtime. The Apple hardware just spanks x86 hardware, unless that hardware is fitted with a big battery and/or a secondary one.
I typically run Lenovo hardware, and on my faster machine I got a very large battery and could get 6 solid hours. And that is at a nice 3 GHz speed too.
My current machine (an i5) is far slower than the M1 and that hot-running Lenovo (an i7). It has two batteries that can yield 5 to 7 hours.
Those machines are heavier, slower, bigger, and just feel crappy compared to the M1 Air I have been using.
And that thing is crazy good. Battery life is longer, sometimes by a considerable amount. It is great hardware, fast, easy to use, light, you name it.
> it's becoming a lot more difficult to justify the power consumption of the current Intel (and, to a lesser degree, AMD) offerings, especially for sustained personal productivity and burst compilation tasks.
I don't get it.
The M2 is 15W. There are mobile Ryzen APUs with the same TDP. They're about the same speed; maybe the M2 is a little faster for single thread and the PC is a little faster for multi-thread, it's not that different. You can get a PC laptop which is under 3 lbs and has 14+ hours of battery life.
You can also get a PC laptop which is heavier and has a shorter battery life because they have no qualms about selling you what amounts to a desktop CPU which is then something like 30% faster for multi-thread. But nobody forces you to buy that one.
Can someone recommend a concrete laptop then, please? I have been putting off buying a new one for years now. (I have an old XPS 13 and a P50 with an M1000M, so... yeah, old dogs!)
I want to put Linux on it. So far I have been looking at the Legion 5 Pro, because a friend got one recently and it seems amazing, though I haven't yet got my hands on it long enough to try it with a USB stick with a recent Ubuntu (and a recent kernel) to test the power management support.
Intel and AMD sell laptop chips with comparable performance per watt to M2 laptops. If you're buying an M2 Ultra desktop computer, your primary concern is performance, period, not performance per watt.
If the chips are designed to sacrifice performance for power usage, then they have no place in a computer without a battery.
Apple should assume that desktop machines have an unlimited power source and that users want maximum speed, and that the user's time is more expensive than the electricity being fed into the computer.
Laptops, sure, sacrifice power to extend the battery. Apple does an excellent job here.
The only reason you would buy one of these desktop machines is because you want to be in the Apple ecosystem.
I find the article quite informative. Yes, the M2 and the other chips are completely different products with different goals. If one wants to say that one completely trumps the other, they will be wrong.
But here is what is visible:
The M2 core is probably in the same ballpark as a Zen 4 core, likely a tiny bit below. That gap may become very tiny if the Zen 4 core runs at a lower frequency to equalize the power. This doesn't account for the AVX512 of Zen 4.
24 M2 cores manage to beat 16 Zen 4 cores, also at lower power, but these are different products. Zen 4 scales to far more cores, 96 in an EPYC chip. AMD and Intel have far more investment in interconnects and multi-die chips to do these things.
The M2 GPU is in the same league as a $300 mid-range Nvidia card. It is not competitive at all - Apple produces the largest chip it can manufacture to go against a smaller, high-margin chip that Nvidia orders.
Again all of this doesn't mean each product is not good on its own.
Apple's GPU performance is what makes me sceptical about their gaming related advertising. Sure, you can do 1080p gaming with the highest SKU, but you're paying through the nose if you bought an M2 to play games.
It seems strange to me for Apple to advertise something they haven't exactly mastered yet on stage.
Maybe they have some kind of optimization up their sleeves that will roll out later? I can imagine Apple coming out with their own answer to DLSS and FSR 2 based on their machine learning hardware, for example. On the other hand, I would've expected them to demonstrate that in the first place when they showed off their game porting toolkit.
With CrossOver and Apple's latest release of the Game Porting Toolkit, I'm able to maintain over 120 FPS on ultra settings at native resolution in Diablo 4 with my M2 Max MBP. It was fair to be skeptical before that release this week, but there's plenty of evidence out there now that Apple silicon can handle gaming just fine. Other users are reporting 50-60 FPS with ultra settings on their 6K Studio Displays.
> Apple's GPU performance is what makes me sceptical about their gaming related advertising.
The issue is that people compare games running under emulated x86 and emulated graphics APIs when making claims about what the SoC is capable of.
There's nothing wrong with knowing how well the SoC performs when emulating games, but if you claim to be talking about what the SoC can do, then include the performance of native games as well.
>The M2 core is probably in the same ballpark as Zen 4 core, likely a tiny bit below.
The 7950X runs at 5.7 GHz when only a single thread is saturated. The M2 Ultra caps its cores at 3.5 GHz. A 62% higher clock speed, at a monster power profile, to barely beat it isn't evidence of a core advantage.
>24 M2 cores manage to beat 16 Zen 4 cores also at lower power
The M2 Ultra has 16 real cores, with 8 additional efficiency cores that are very low performance. And of course the M2 Ultra could pretty handily trounce the 7950X, because the latter has to dramatically scale back the clock speed; the power profile of all 16 cores at 5.7 GHz would melt the chip. And of course the 7950X has hyper-threading, hardware for mini-versions of 16 more cores, so in a way it has more cores than the Apple chip.
>This doesn't account for the AVX512 of Zen4.
AVX512 is used by a tiny, minuscule fraction of computers ever in their history of existence. It is the most absolute non-factor going.
I mean... in an ideal world Apple would get the GPU off the die. It limits their core and power profile, and takes up a huge amount of die space. They could then individually mega-size the GPU and the CPU. They could investigate mega interconnects like Nvidia's latest instead of trying to jam everything together.
Was Apple correct to call it the most powerful chip? Certainly not. And there is a huge price penalty. But they're hugely, ridiculously powerful machines that will never leave the user wanting.
It is true that nobody competes in the low-power, high-efficiency workstation market; or maybe such a market does not exist yet and Apple is creating it.
But also, as users, some were expecting the M series to be so good that it would take many markets by storm. And it seems that is not happening.
$300 midrange Nvidia card? Did you get stuck in 2010?
That's way below entry-level at this point. You're likely comparing it with a 1660 card or something, which is based on a chip from 2012.
I wish Apple silicon was actually competitive on performance. Nvidia needs competition or they'll likely double prices again with the next generation.
> The M2 GPU is in the same league as a 300$ mid-range nVidia card
It still has the advantage of a much larger memory pool.
I did a quick comparison exercise - I priced two workstations with similar configurations, one from Dell, the other from Apple. While there are x86 (and ARM) machines that'll blow the biggest M2 out of the water, within the range Apple covers the prices aren't much different.
https://twitter.com/0xDEADBEEFCAFE/status/166747612998729728...
If you buy anything labeled as "workstation", you're paying twice the price already.
The article describes the M2 being blown out of the water by a 4080 and a 13900KS. That's about $2000 plus RAM, motherboard, and power supply. Plus you can use the built-in GPU in your CPU for accelerating things like transcodes.
You can get a pre-built gaming PC with a 4090 for about $4000, that'll crush the M2 in compute if you use any kind of GPU acceleration.
Of course the M2 has some other advantages (the unified memory and macOS) and some other disadvantages (you're stuck with the amount of RAM you pick at checkout, macOS, you have to sacrifice system RAM for GPU RAM) so it all depends on your use case.
I think the M2 still reigns supreme for mobile devices, though AMD is getting closer and closer with their mobile chips, but if you've got a machine hooked into the wall you'll have to pay some pretty excessive electricity rates for the M2 to become competitive.
> It still has the advantage of a much larger memory pool.
I wonder if given roughly equal power to the GPUs in current gen consoles (PS5/XBSX), it'd yield some advantage in porting console games since those consoles also have a large shared pool of memory (16GB), and neither AMD nor Nvidia want to give up using VRAM as an upsell.
Pedantically, sysadmins always counted the power consumption (or, compute density) when upgrading servers, and would even do early upgrades as a cost saving measure.
Power costs (in datacenters at least) were high enough that buying the €10,000 server that sucked 200 W more was worth less than the €15,000 machine that didn't suck that extra 200 W.
So electricity prices can be more than a negligible amount on the total.
Where that line is depends on your personal situation.
I live in southern Sweden and they hide the total price of power here, but aggregated my cost is about 5 SEK/kWh (roughly €0.45).
So a worst case for me at 200 W with 24 hours of usage a day is about €800/y.
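For anyone who wants to sanity-check that against their own tariff and duty cycle, here is a quick back-of-the-envelope in Python; the 200 W delta and the roughly €0.45/kWh price are just the figures quoted above, not general assumptions.

```python
# Rough annual cost of an extra continuous load, using the figures above.
extra_watts = 200          # additional draw of the hungrier machine
hours_per_day = 24         # worst case: always on, always loaded
price_per_kwh_eur = 0.45   # ~5 SEK/kWh converted to EUR

kwh_per_year = extra_watts / 1000 * hours_per_day * 365
cost_per_year = kwh_per_year * price_per_kwh_eur
print(f"{kwh_per_year:.0f} kWh/year -> ~{cost_per_year:.0f} EUR/year")
# 1752 kWh/year -> ~788 EUR/year, i.e. roughly the 800 EUR figure above
```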
You're doing it too. You're assuming how someone would define a workstation, you're assuming that definition includes it being power hungry.
A workstation can be a single-core, 256 MB RAM, 1 W SBC. It can be a 96-core, 2 TB RAM, 1 kW beast. It can be anything in between or even outside of that range.
I'm not saying everyone cares about power consumption, but several people here seem to be saying that no one does, and that's simply not true.
When I hear "workstation" what comes to mind is a daily driver for business purposes, so rock-solid reliability is utmost: redundant components, ECC RAM, rated to run 24/7 with ample cooling. Performance per watt is also up there.
Whereas raw performance specs are typically high but ultimately dependent on use case.
If power utilization matters, then a more 'efficient' device that takes longer might actually use more power overall - especially when you factor in the lighting, aircon, etc. that might be needed by the human operator who is waiting for the workstation to finish the computation.
Task energy is a more reasonable argument than whinging about how some people care so much about power.
Everyone cares about power, at a fairly similar level - maybe 2-4x differences, but not 10x. And most of the ones who care underestimate how rarely their machine is actually fully busy.
Idle power is probably more interesting than even task power, for anything other than an unusually busy server or cloud hypervisor.
And yeah, as others point out, this is Apples to oranges. x86 desktops are great at some things, M2 Ultras are great at others, and the overlap that really matters is pretty small... Like, you have to be crazy to buy an M SoC for gaming, or buy a Nvidia GPU for workloads that won't fit in VRAM.
I tested the new Game Porting Toolkit on a Mac mini M2 and was surprised that I was able to run Cyberpunk 2077 and that it ran really well. I can imagine that more powerful M processors will be awesome, so the gaming argument might go away soon.
Maybe, but most of these comparisons are based on GPU performance (i.e. games). For other workloads like machine learning, Tensor cores on NVIDIA GPUs will blow Apple Silicon GPUs out of the water. The M2 Ultra is 27 TFLOPS. The 4080's Tensor Cores are 48.7 TFLOPS in FP32, 194.9 TFLOPS in FP16, and 389.9 TFLOPS in FP8 with FP16 accumulate. (IIRC on Apple Silicon GPUs FP16 performance is roughly the same as FP32.)
(There is the Neural Engine, which supports lower precision, but it is limited in various ways.)
Regardless, the strides that Apple has been making are impressive.
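Putting the quoted peak figures side by side, since the ratios are easy to miss when skimming; these are the theoretical peaks from the comment above, and real workloads rarely get close to them.

```python
# Ratios implied by the peak-throughput figures quoted above.
m2_ultra_tflops_fp32 = 27.0   # roughly the same at FP16 on Apple Silicon, per the comment
rtx4080_tensor_tflops = {"fp32": 48.7, "fp16": 194.9, "fp8": 389.9}

for precision, tflops in rtx4080_tensor_tflops.items():
    ratio = tflops / m2_ultra_tflops_fp32
    print(f"{precision}: 4080 Tensor cores ~{ratio:.1f}x the M2 Ultra's peak")
```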
I think the exciting thing with Apple silicon and "AI" is inference, not training. Due to their unified memory, you can potentially run enormous models locally for inference. That's potentially much more expensive on a PC with a GPU having its own memory.
Apple have an opportunity, if they 2-4x the memory on the entry level devices (not beyond the realms of possibility), to make local inference a thing available to all.
One differentiator is the amount of memory available. Apple's shared pool of memory means the GPU has potentially up to 96 GB of RAM at its disposal, where discrete GPUs are often limited to 8, 12, 16, etc. They mentioned in the keynote that the M2 can handle many tasks discrete GPUs cannot, simply because they run out of RAM.
FLOPS are one thing, but 192 GB of unified memory that can be used as VRAM is something else. That could be a big win on the inference side of things, where even an RTX 4090 GPU is limited to only 24 GB.
Then there's the power consumption difference to consider. This seems like one of those cases where benchmarks reveal only a fraction of the larger picture.
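To make the "larger memory pool" point concrete, here is a rough sketch that only counts weight bytes; the model sizes and precisions are illustrative assumptions, and real inference also needs room for activations, KV cache, and framework overhead.

```python
# Back-of-the-envelope: does a model's weight footprint fit in a given memory pool?
def weights_gb(params_billions, bytes_per_param):
    return params_billions * 1e9 * bytes_per_param / 1e9

pools_gb = {"24 GB VRAM (RTX 4090)": 24, "192 GB unified memory (M2 Ultra)": 192}
models = [("7B @ fp16", 7, 2), ("70B @ fp16", 70, 2), ("70B @ int4", 70, 0.5)]

for name, params_b, bpp in models:
    size = weights_gb(params_b, bpp)
    verdict = {pool: size < cap for pool, cap in pools_gb.items()}
    print(f"{name}: ~{size:.0f} GB of weights -> {verdict}")
# A 70B model at fp16 (~140 GB) overflows any consumer VRAM pool but fits in 192 GB.
```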
The absolute "best of class" Nvidia card is currently the 4090, which can be up to 30% faster than the 4080 in scaling workloads.
What would be more interesting is to see how Nvidia's laptop cards fare here though - they're constrained to much lower wattage (80-120w) and would make for a much fairer fight against the ~200w M2 Ultra.
> What would be more interesting is to see how Nvidia's laptop cards fare here though - they're constrained to much lower wattage (80-120w) and would make for a much fairer fight against the ~200w M2 Ultra.
Doesn't look like Apple offers Ultra in a laptop - just the Basic, Pro, and Max.
A decent 4090 GPU alone is over $3000 in Australia - that’s half the price of a Mac Studio with Ultra - just for the GPU.
Then you'd be looking for a 24-core CPU, 64GB RAM, a 1TB PCIe 5 SSD, a mainboard with 6x Thunderbolt ports, a silent cooling setup, and a high quality case that is both small and fits all the gear while running cool. And if you're stuck with MS Windows - an operating system license.
Many people do care - many Americans have subsidized, cheap power, but that's not globally true, and even if your power is free a hot CPU means you're listening to fan noise and heating up the room. In a cold winter that's not bad, but I knew people who had to do things like leave their office doors open for cooling.
I wonder how long you have to run the Apple chip to save enough on electricity to make the price difference worthwhile? Does it only drop to one human lifetime in places where energy cost is insanely high?
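A hedged sketch of that break-even arithmetic; every number here is a placeholder to be replaced with your own price premium, average power delta, duty cycle, and tariff.

```python
# Years to recoup a purchase-price premium through lower power draw.
price_premium = 3000.0   # hypothetical extra cost of the more efficient machine
watts_saved = 150.0      # hypothetical average difference in draw while in use
hours_per_day = 8.0      # realistic duty cycle; most desktops are not loaded 24/7
price_per_kwh = 0.45     # e.g. the southern-Sweden figure above; many US rates are far lower

savings_per_year = watts_saved / 1000 * hours_per_day * 365 * price_per_kwh
print(f"~{savings_per_year:.0f} saved per year, break-even in "
      f"~{price_premium / savings_per_year:.0f} years")
# At 0.45/kWh this is ~197/year (~15 years to break even); at 0.10/kWh it is closer to 70 years.
```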
The article reports that the M2 Ultra is basically even with a 4060 Ti in OpenCL compute, which they refer to as a mainstream card. It is more like a notch above entry level. The "best of class" 4090 is actually 2.5x faster. But again, this is OpenCL, so who knows how the two compare in a more relevant benchmark.
Regardless, wccftech is far from reliable. IIRC, /r/amd blocks links to the site.
Those are Nvidia's best consumer GPUs. I think the cheese grater falls into the pro segment. In that segment Nvidia has the A6000s with 48GB VRAM and 91 SP TFLOPS compared to the 4090's 24GB and 73 SP TFLOPS. But that costs as much as the Mac Pro alone. And even bigger options (segmented for server/datacenter use) are available.
Power consumption doesn't scale linearly with performance.
The absolute best-of-class NVIDIA discrete GPU offering could possibly outperform the Apple GPU at the same power level.
Or, to put it another way, to recover that remaining 50% of performance (2x), the increase in power consumption would be a lot more than 2x, more like 10x.
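For intuition on why the top of the curve is so expensive: the usual first-order model for CMOS dynamic power is P ≈ C·V²·f, and since voltage has to rise roughly in step with frequency near the top of the range, power grows roughly with the cube of frequency. A tiny sketch under that assumption (an approximation for intuition, not a measurement of any real chip):

```python
# First-order model: dynamic power ~ C * V^2 * f, with V rising roughly with f
# near the top of the range, so power scales roughly as f^3. Approximate only.
def relative_power(freq_ratio):
    return freq_ratio ** 3

for ratio in (1.0, 1.25, 1.5, 2.0):
    print(f"{ratio:.2f}x clock -> ~{relative_power(ratio):.1f}x dynamic power")
# 2.00x clock -> ~8.0x dynamic power, which is why the last 10-20% of clock costs so much.
```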
It's not just that it doesn't come with a discrete GPU. Apple Silicon doesn't support discrete GPUs to begin with.
As far as I understand it—and this is just from watching Apple's presentations on the architecture—the lack of a discrete GPU is a big part of how the Apple Silicon machines achieve good performance per watt.
Instead of having discrete RAM or a discrete GPU with its own VRAM, all of the RAM is accessible to the CPU and the GPU in a unified memory architecture. On the M2 Ultra, this allows for 800 GB/s of memory bandwidth, and also eliminates a lot of the need to copy data from RAM to VRAM, as both the GPU and the CPU can access the same memory. In return, this allows the GPU to match the performance of discrete GPUs that have a lot more cores.
Of course, the big downside is that you can't expand the RAM or install a beefier GPU. It's all baked in to the logic board.
Or memory slots, or dual/quad socket capability, or compatibility with the GPU card for previous gen Mac Pro. You get one multicore ARM SoC with 192GB of on-package L4 cache and integrated graphics, and that’s it.
While 192GB of RAM is more than I would need, people looking to use 1.5TB of RAM and a pair of NVIDIA GPUs they had from the previous model would have to go elsewhere.
Which leaves me wondering: how much engineering at Apple is happening on Macs?
I think a useful way of thinking about this is that it has something like $300-400 worth of discrete-GPU-equivalent performance, assuming the article's claims are correct (not exactly a given...), as it will get you something similar to a 3060 Ti or 4060 Ti.
My M1 Pro is about 33% as fast as a box with a 3060 in it on PyTorch and obviously it is way more efficient and doesn't require a special power supply, which is just taken for granted as something you have to do with a PC.
The best case this article can make is that if you need to play the latest game or do intense ML stuff you probably want NVIDIA, but that's the same as it ever was.
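For anyone who wants to run that kind of comparison themselves, a minimal PyTorch timing sketch follows; the matrix size and iteration count are arbitrary choices, and a dense matmul is only a loose proxy for real training or inference workloads.

```python
import time
import torch

def time_matmul(device, n=4096, iters=10):
    # Average seconds per dense n x n matmul on the given device.
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    torch.matmul(a, b).cpu()            # warm-up, and force the device to finish
    start = time.perf_counter()
    for _ in range(iters):
        c = torch.matmul(a, b)
    c.cpu()                             # sync again before stopping the clock
    return (time.perf_counter() - start) / iters

devices = ["cpu"]
if torch.cuda.is_available():
    devices.append("cuda")              # NVIDIA box
if torch.backends.mps.is_available():
    devices.append("mps")               # Apple Silicon GPU backend
for dev in devices:
    print(dev, f"{time_matmul(dev) * 1000:.1f} ms per matmul")
```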
It also costs a lot more. For the money you can get a laptop with a 4080m and reduce its power limit and probably get the same or better efficiency, but with room to the top if you need it.
They said it's not faster than the desktop parts, which is almost a false statement since it actually is faster at multithreaded workloads than both base desktop SKUs. And then they switch to providing a specific percentage when comparing to the 4080 (also based on a deceptively narrow comparison, one their other articles contradict) for dramatic effect.
I don't think it was ever meant to be the most informative article: it seems written to serve the contrarians because that's profitable from a readership perspective for a publication like this.
NVIDIA still beats Apple even in efficiency, or so I have read. If you care about wattage that much, you can just underclock and undervolt your GPU. It is free and the police can not stop you.
Second, people don't buy Macs only for performance. They also buy Macs for macOS, for integration between devices, for a system that is cool and quiet, for hardware acceleration of ProRes, for on-device privacy-preserving machine learning. Being a bit slower than competing AMD and Intel systems is acceptable, because you get so many other desirable properties in return.
I'd definitely consider a Mac Studio with an M2 Max or M2 Ultra, if I didn't want something portable. I would never buy a competing machine with an Intel or AMD chip, because I don't want to deal with Windows or desktop Linux.
Other people have another set of priorities and that is fine.
I don't know. Nowadays a top-of-the-spec Linux workstation built from bare components, without any markup on top, is around ~10k. The 5995WX in the graph wccf provided costs ~5k on its own.
The article does not say if it’s a Mac Studio or Mac Pro. I think that matters. My understanding is that there is no TDP on Apple Silicon, and that it has a lot of runway in a less constrained thermal environment like the new Mac Pro vs Mac Studio.
While I’m still surprised they didn’t put a second Ultra in the Mac Pro, I’m betting there’s a wider delta than people imagine between the two form factors.
The M2 has been a bit underwhelming in general through all of its iterations: Apple jumped into such a lead with the M1 [1], so it was disappointing when they slowly iterated while Intel and AMD have made enormous strides catching up. Everyone keeps citing TDP, but given that we're talking about desktops that just isn't a huge factor.
Having said that, the 7950X was released late February, and the 13900KS was released in mid January. Both of this year. Both are their premiere available chips right now in the segment. Referring to them like they're last year's junk is rather silly.
[1] Though fun fact with the M1: I remember super disappointing Geekbench results leaking before its release. People do know how low-trust the site is, right? The computer identifiers on the claimed "M2 Ultra" devices claim to be MacBook Pro 14" devices... which aren't getting M2 Ultras, for obvious reasons. In all likelihood someone is making guesses and posting nonsense.
> Having said that, the 7950X was released late February, and the 13900KS was released in mid January. Both of this year. Both are their premiere available chips right now in the segment. Referring to them like they're last year's junk is rather silly.
7950X was released September, 2022. It's quite literally last year's chip and given that AMD release cycle is typically about 2 years, we're roughly halfway between last release and 9000 series release.
You might be getting it confused with the non-X versions that were released earlier in 2023 -- those are basically the same chip but power limited and maybe slightly worse selections of silicon. Of those non-X versions there were the 7600, 7700, and 7900, but no non-X 7950 was released. [1]
[1] https://en.wikipedia.org/wiki/List_of_AMD_Ryzen_processors
Desktop PCs requiring a ludicrous amount of power and cooling is absolutely problematic. The number of people willing to put up with big honking machines is dwindling.
May I assume that you haven't used a desktop from some year equivalent to whatever Mac you use? Because modern desktops are far from "honking."
I have a 7950X (the high core count desktop offering). AMD requires water-cooling for it (save for one brand). That means you get big quiet fans, which are far less "honking" than the tiny loud ones that are in laptops by necessity. In fact, thanks to the nice large radiators that liquid coolers have, the fans don't spin at all the majority of the time.
When I do need power, it's on-tap.
My 6900 XT is big and honking during gaming, but you can get really quiet $300 GPUs. Or just use the integrated graphics and enjoy the quiet liquid cooling.
> Desktop pcs requiring a ludicrous amount of power and cooling is absolutely problematic.
High-end PSUs now overlap with smaller space heaters on power output. My living room is usually 3-4 degrees hotter around my desk than it is by the dining table. I'm looking to upgrade my gaming PC, and getting the power budget under control is surprisingly challenging.
AMD and Intel are jacking up the default power settings to absurd levels because it gives marketing bigger numbers to throw around.
You can drastically reduce the power you supply to desktop chips with BIOS settings. You'll generate far less heat, can use a smaller power supply and form factor, while still getting great performance.
If Intel/AMD get to a level where their chips rival M1/M2 while power throttled, things get interesting.
My machine is entirely silent in normal operation (Zen 3 5600X and RX 6700). All the fans, on both GPU and CPU, are stopped in desktop usage. And it obviously doesn't eat that much power (my monitor probably eats more).
The only moment I could hear them is when I play games. And then I have a headset on, so I can't hear them.
I really couldn't care less if it ate twice the amount of power when playing games.
IMO driving down power consumption should still be a goal even for desktop machines. It means that the overwhelming majority of users get cooler, quieter, less bulky machines, while the fringe who need raw power at all costs can overclock to the moon if they really want to.
This also pushes more "desktop like" performance to both ultraportable and reasonably portable laptops, allowing these machines to fully replace desktops for the overwhelming majority without all of the caveats that come with "desktop replacement" laptops (workstation laptops, heavy duty gaming laptops, etc). A lot of people who previously wouldn't have seen laptops as capable of being their primary machines are doing exactly that with M1/M2 Pro MBPs.
>while the fringe who need raw power at all costs can overclock to the moon if they really want to.
This is actually the case! The 7950X and 13900K come in non-X and non-K variants, which have vastly reduced power footprints and overall consumption, and you can even take your X or K variant and enforce that exact same power envelope in the BIOS, for minimal loss in performance. But the purchasers of desktop X and K SKUs are the overclocking fringe (by and large). I will admit, though, that a lot of laptops are sold with i7/i9 and Ryzen 7/9 HX variants that shouldn't be purchased, because a bigger number means an easier upsell.
> IMO driving down power consumption should still be a goal even for desktop machines. It means that the overwhelming majority of users get cooler, quieter, less bulky machines, while the fringe who need raw power at all costs can overclock to the moon if they really want to.
It is the goal (after all more efficiency also means fitting more powerful cores into the thermal envelope) but given the choice most desktop users would be fine with "just a bit bigger box" rather than sacrifice performance for the price.
With power in city downtowns (eg San Jose) at $0.50+/kWh it’s definitely becoming a huge factor. My Mac uses almost an order of magnitude less power than the PC and I keep the PC powered off.
From what I understand, the M2 was released due to TSMC struggling with yields on their N3 process. Apple had to do something with the N5 node to keep hardware available to consumers, and M2 was the result.
The results speak for themselves (Apple is slower than some) but the context does matter: Apple has been talking about performance per watt since Steve Jobs showed a keynote slide announcing the transition from PowerPC.
The cynical view is that Apple is intentionally misleading customers with their ambiguous graph axes. Another perspective is they’re simply demonstrating the metric they’ve optimized for in the first place.
I bought an m series device to replace my windows desktop mostly for efficiency reasons. It saves me hundreds a year in electricity costs at the moment (quite literally, I calculated it beforehand and checked real usage afterwards). It also means my office is now silent and cool. It’s a big QoL improvement for me.
Even if your power is free, heat means a toasty office and fan noise, not to mention being careful about air flow around your furniture. Not a deal breaker but 100% of the Apple Silicon users I know have mentioned not previously having appreciated just how much noise their old systems made.
Like I said in another comment, the results say Apple is faster than both base SKUs on multithreaded and within the margin of error on single threaded once you consider they only used one Geekbench result.
I don't think it's misleading anyone to say they're faster; when their on-SoC graphics are 4060-level, they definitely are faster overall.
CPUs are intentionally engineered for a target market. When engineering the M2, Apple had a certain market in mind, and design and performance trade-offs were made accordingly. As others have noted, power efficiency seems to be a big priority. The M2 Ultra has a TDP of 60W. The i9-13900K has a base consumption of 125W and draws up to 253W under stress. So do the math: Intel achieves 32% better single-core performance and 41% better multi-core performance (Cinebench) for 422% of Apple's power consumption. If there's something impressive here, it's that Apple is able to do so much with so little. If Apple wanted to, they could probably conjoin two M2 Ultras and soundly beat the i9-13900K by a considerable margin while still using about half the power to do so. The real question is why any consumer would need that much compute, and the target market for such a niche is probably very small, which is why Apple didn't do that.
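To make the arithmetic in that comparison explicit, using the TDP and Cinebench deltas quoted above (rough published figures, not measurements of any particular workload):

```python
# Performance per watt implied by the figures quoted above (rough numbers).
apple_watts, intel_watts = 60.0, 253.0      # M2 Ultra TDP vs i9-13900K under stress
intel_multicore_speedup = 1.41              # Intel ~41% faster multi-core (Cinebench)

power_ratio = intel_watts / apple_watts     # ~4.2x, i.e. the "422%" above
perf_per_watt_edge = power_ratio / intel_multicore_speedup
print(f"Intel draws ~{power_ratio:.1f}x the power for ~{intel_multicore_speedup:.2f}x "
      f"the multi-core throughput, so Apple comes out ~{perf_per_watt_edge:.1f}x ahead per watt.")
```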
The M2 Ultra comes on a $7000 workstation; it's not exactly consumer-level hardware. What you're saying is absolutely true - they could just double it up (like they do from the normal to the Pro, and from the Pro to the Ultra). And I hope they do, for an M3 Ultra. It'd be great if they pushed it even harder and showed Intel/AMD that they can beat them at both the low end and the high end, with plenty of power consumption to spare.
As you said though, it's a very niche thing, so even without beating Intel at the top end they're not going to be losing much. It's just a bit of a shame. The i9-13900K is only $500, it's not like it's some wildly out-of-reach thing.
The Apple processors are definitely impressive efficiency-wise (and by many other metrics).
But of course, combining any number of M2’s won’t increase their single core scores. Intel’s desktop chips are there for people who want high clock speeds and are happy to pay in power for that (and the power cost is super linearly related to clock frequency). The Intel chips are just designed for different use-cases, it doesn’t make sense to assume either company could just linearly scale things to reproduce each other’s products.
The relevance of multi-core scores depends on how parallel the workload is. If we all had perfectly parallel workloads I guess Xeon Phi’s would have sold better.
> The real question is why any consumer would need that much compute, and the target market for such a niche is probably very small which is why Apple didn't do that
This is a lazy argument. Why did they make the M1 faster than other CPUs? I would argue that most people don’t need even that performance. Why do they put M chips in iPads - they can’t even use it optimally.
The way I framed my argument, it's a matter of trade-offs. Regarding the iPad, it's definitely possible that they might have used an A15/A16 from their iPhone line rather than the M1 and still have good performance, but the M1 was probably better for marketing and for consumer appeal hence its inclusion.
> If Apple wanted to, they could probably conjoin two M2 Ultra's and soundly beat the i9-13900K by a considerable margin and still use about half the same power to do so.
You can play that in the other way too. Use multiple 13700Ts at 35W each.
Poorly written, resulting in what could look like a bias.
The article mentions that apple has focused on single core performance while the x86 processors in question are designed for multi core use cases. This reflects two different markets being addressed and the sad state (small amount) of multicore code today.
Also it's silly to claim that the M2 Ultra is so expensive; you can get the same performance from a $3K Mac Studio that you do from the >$7K Mac Pro.
I use apple for all my “terminals” (macs, iPhone, etc) but really want AMD and Intel to keep working on these multithreaded powerhouses because I depend on that on the cloud side. I don’t see this ever being in Apple’s markets. Articles that further that are needed, but this isn’t one of them.
Haven't read the article, but there are no single-threaded use cases on Apple platforms that Apple cares about, so I'd take any claim that Apple is optimizing for single-threaded performance with an enormous bag of salt.
I'm pretty sure I remember discussion from Srouji and others on this specific topic at the "apple silicon" WWDC introduction. But I can't find anything useful from Apple in a web search, so it seems my statement is just a "some guy on the Internet" assertion.
It seems pretty clear and unsurprising that Apple optimises their design for their use case (e.g. major consideration of bandwidth to screen in handheld devices, reminiscent of one of the Alto's design criteria) but how that plays out doesn't support my claim either. But Apple's intended use cases aren't the same as the threadripper's.
> but there are no single threaded use cases on Apple platforms that Apple cares about
I don’t see how that could be true. A huge amount of software tasks are basically single threaded.
Remember since Apple does everything soup-to-nuts they have a ton of performance data from their computers to know what real user workloads look like so they can optimize the hardware + the software for them.
UI latency, the entire point of a device, is a single threaded use case.
Multithreaded performance is only good when you don't care about power use, but that's never true on a battery powered phone. It's actually more often the case that you optimize software by removing accidental excess concurrency than by adding it. Junior engineers love them some unstructured concurrency.
I guess I don't understand why you would want the Ultra if you still prioritize TDP over performance.
Why do you think NVIDIA doesn't "just add" "more memory"? To its $40,000 H100s, which top out at "just" 80GB?
The answer isn't price segmentation.
So many people are making claims that power utilization doesn't matter. Perhaps not to them, but it does to many.
Energy prices are getting higher and higher in many parts of the country, heck in the world.
Devices that consume more power generate more heat. So now one is likely using more electricity to keep their home or office cool and comfortable.
Noise matters for many people using workstations. Running a system at full tilt can be irritating and distracting because of the active cooling.
Some people are just simply conscious of how much electricity they use and want to have a lower environmental impact.
And then there's the large scale matter. One workstation might not be a big deal in terms of energy usage, but millions of them absolutely is.
Saying no one cares how much power a workstation uses is disingenuous.
https://wccftech.com/m2-ultra-only-10-percent-slower-than-rt...
A 4080 is best of class?
And for the price, it should be compared to the 4090, which absolutely demolishes it.
> big
https://en.wikipedia.org/wiki/Mini-ITX
>big honking machines
Buy an all-in-one water cooler for the CPU https://pcpartpicker.com/product/2PFKHx/arctic-liquid-freeze...
It's easy to install and you have a quiet, well-cooled CPU.
Either you are not measuring power consumption correctly, or there is something very wrong with your PC.
I think you're confusing Mac14,14, which is the internal designation for the Mac Studio, with a MacBook Pro 14". The leak, if anyone's interested: https://browser.geekbench.com/v5/cpu/compare/21305974?baseli...
It's a no-brainer anyway that you get more performance from a desktop, and from non-Apple hardware, much cheaper, due to Apple's pricing.
Apple has a huge advantage price/performance-wise with the cheap M-based MacBook Air.
The comparison with a MacBook Pro, which costs 2-3k, is slightly less so.
My 7950x machine cost, excluding the enthusiast GPU, $3000. That's less than half the cost of the M2 Ultra.