We now have a cpuidle driver, which significantly lowers idle power consumption by enabling deep CPU sleep. You should also get better battery runtime both idle and during sleep, especially on M1 Pro/Max machines.
Thanks to the cpuidle driver, s2idle now works properly, which should fix timekeeping issues causing journald to crash.
Also thanks to the cpuidle driver, CPU boost states are now enabled for single- and low-threaded workloads, noticeably increasing single-core performance.
Thermal throttling is now enabled, which should keep thermals in check on fanless (Air) models. There was never a risk of overheating (as there are hard cutoffs), but the behavior should now more closely match how macOS works, and avoid things getting too toasty on your lap.
Random touchpad instability woes should now finally be gone, thanks to bugfixes in both the M1 (SPI) and M2 (MTP) touchpad drivers.
A bugfix to the audio subsystem that fixes stability issues with the headphone jack codec.
New firmware-based battery charge control, which offers a fixed 75%/80% threshold setting. To use this, you need to update your system firmware to at least version 13.0, which you can do by simply updating your macOS partition to that version or newer. This new charge control method also works in sleep mode.
U-Boot now supports the Type A USB ports (and non-TB ports on the iMac), so you can use a keyboard connected to any port to control your bootloader.
And last but not least, this kernel release includes base support for the M2 Pro/Max/Ultra SoCs! We are not enabling installs on these machines yet as we still have some loose ends to tie, but you can expect to see support for this year's new hardware soon.
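For readers who want to verify the s2idle change on their own machine: the stock kernel exposes the active suspend mode through `/sys/power/mem_sleep` (a standard Linux interface, not Asahi-specific). A small sketch:

```python
from pathlib import Path

def parse_mem_sleep(text):
    """Pick the bracketed (active) entry out of /sys/power/mem_sleep,
    e.g. "[s2idle] deep" -> "s2idle"."""
    for token in text.split():
        if token.startswith("[") and token.endswith("]"):
            return token[1:-1]
    return None

def current_suspend_mode():
    """Return the suspend mode the kernel will use, or None if unavailable."""
    try:
        return parse_mem_sleep(Path("/sys/power/mem_sleep").read_text())
    except FileNotFoundError:
        return None  # kernel built without suspend support
```

On a kernel where s2idle is selected, `current_suspend_mode()` should return `"s2idle"`.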
"New firmware-based battery charge control, which offers fixed a 75%/80% threshold setting. To use this, you need to update your system firmware to at least version 13.0, which you can do by simply updating your macOS partition to at least that version or newer. This new charge control method also works in sleep mode."
This is interesting, am I correct in thinking this is a feature implemented by Apple and now supported by the Asahi team? Does that mean that macOS supports this charge control feature?
I really hope Apple brings the same charge limiting to iPhone as well.
> This is interesting, am I correct in thinking this is a feature implemented by Apple
Yes, battery charge control is a hardware(/firmware) feature supported on other modern laptops as well, such as the Lenovo ThinkPads, but it's not a standard so it requires explicit driver and OS support.
OpenBSD recently added support for this as well for both of these implementations (Apple silicon and ThinkPads).
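On Linux this is surfaced through the standard `power_supply` sysfs attribute `charge_control_end_threshold`. The sketch below assumes the supply name `macsmc-battery` that Asahi's SMC driver registers; ThinkPads expose e.g. `BAT0` instead, so treat the exact path as an assumption:

```python
from pathlib import Path

# Assumed supply name for Apple silicon machines; varies by driver/machine.
BATTERY = Path("/sys/class/power_supply/macsmc-battery")

def pick_threshold(requested, supported=(75, 80, 100)):
    """Snap a requested end-of-charge percentage to the nearest supported
    step, since the firmware only honors fixed thresholds."""
    return min(supported, key=lambda s: abs(s - requested))

def set_charge_limit(requested):
    """Write the threshold; fails cleanly if driver/firmware support is missing."""
    attr = BATTERY / "charge_control_end_threshold"
    if not attr.exists():
        raise FileNotFoundError(f"{attr} not present; update firmware/kernel first")
    attr.write_text(str(pick_threshold(requested)))  # needs root
```

The helper is only illustrative; the driver itself may reject unsupported values with `EINVAL` instead of snapping them.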
> I really hope Apple brings the same charge limiting to iPhone as well.
This was added to iPhones in 2019.
> If your iPhone stops charging at 80%, it's most likely due to a feature Apple introduced in iOS 13 called Optimized Battery Charging. It aims to prevent over-stressing the battery and hence extend the battery life of your iPhone by limiting the charge to 80%.
Your iPhone learns your usage patterns and delays 100% charging until moments before you wake up in the morning.
> This is interesting, am I correct in thinking this is a feature implemented by Apple and now supported by the Asahi team? Does that mean that macOS supports this charge control feature?
It does, but in a weird way. You can turn on "adaptive charging" and it will randomly decide to charge to 80%.
If you want to properly control it, just install the wonderful AlDente utility ( https://apphousekitchen.com/ ). Then you can manually control the max charge percentage. Mine is permanently set to 80% because I never really use even 40% of the battery on my M2-based laptop.
> And last but not least, this kernel release includes base support for the M2 Pro/Max/Ultra SoCs! We are not enabling installs on these machines yet as we still have some loose ends to tie, but you can expect to see support for this year's new hardware soon.
> e includes base support for the M2 Pro/Max/Ultra SoC
Does this mean Apple gave them prerelease hardware? Might Apple start helping these guys more, for example by donating a five-person dev team for a few months?
It means that Apple isn't radically changing the internals of the SoC every year.
>Apple’s first iPhones ran on Samsung SoCs, and even as Apple famously announced that they were switching to their own designs, the underlying reality is that there was a slower transition away from Samsung over multiple chip generations. “Apple Silicon” chips, like any other SoC, contain IP cores licensed from many other companies; for example, the USB controller in the M1 is by Synopsys, and the same exact hardware is also in chips by Rockchip, TI, and NXP. Even as Apple switched their manufacturing from Samsung to TSMC, some Samsung-isms stayed in their chips… and the UART design remains to this day.
M2 Pro/Max were available in January. I think they needed to wait until now to be sure the M2 Ultra announcement didn't bring too many changes from the way the M1 Ultra was done. In other words, the Asahi Linux team doesn't have an M2 Ultra to test on; they are getting ready for when they can get some test results, possibly from users.
Adding a noreferrer tag is “tasteless”? What’s tasteless is jwz, for example, silently redirecting visits from this site to goatse. This just reminds me of the clueless businesses in the early days of the web asking for “license agreements” to link to their content.
It's been more than a year that I've been running Asahi on my MacBook Air, and I can't stress enough how grateful I feel for enjoying such wonderful freedom.
I don't feel like ever going back to x86, to be honest. At this point there is nothing lacking or unable to run, and when the neural engine drivers come online, now that the GPU is starting to mature, people will be able to squeeze every last bit of computation out of this machine.
For the record, I switched to the edge branch a couple of months ago, and honestly I noticed no actual difference in my day-to-day tasks, which is really telling about how powerful even the M1 is when it can handle software rendering so effortlessly alongside everything else that's running.
> at this point there is nothing lacking or unable to run
Sure there is. You just haven't run into it yourself.
Faster, cooler and more power efficient hardware is great. I just don't think that it makes up for depending on a small team of volunteers to resolve all hardware issues in an ecosystem hostile to OSS, which might break at any point Apple decides to do so.
And the incompatibilities with ARM are not negligible. If all your software runs on it, great. If not, good luck depending on yet another translation layer.
I'm sticking with my slow, hot and power-hungry x86 machines with worse build quality for the foreseeable future. The new AMD mobile chips are certainly in the ballpark of what Apple silicon can do, so I won't be missing much.
> depending on a small team of volunteers to resolve all hardware issues in an ecosystem hostile to OSS, which might break at any point Apple decides to do so
You are describing how most OSS software has been developed. I don't see how this is any different from early Linux, when no hardware manufacturers had any interest in supporting it.
A lot of the work that the Asahi team is doing is just fixing Arm issues in the Linux kernel (and, sadly, user space). That work will benefit everyone using Arm systems, not just folks running Asahi on Apple hardware.
It's good for there to be more hardware architecture competition! I'm glad I can run my server workloads on the Arm servers in AWS that are 20% cheaper than the equivalent x86 machines. I'm glad that I can run the software I like (Linux) on legitimately nice hardware (M2 Air). You can make different decisions about what architectures are best suited to your needs, but competition in the market improves the options and prices for everyone.
I've been using Asahi since the fall of 2022. When I first started using it, a lot of software was broken because of bugs in that software that had never been exposed before (specifically around page sizes larger than 4K). All of that software has now been fixed. Support for Linux on Arm will only continue to improve as more people use it.
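For anyone curious what those page-size bugs look like: any code that hardcodes 4096 quietly breaks on a 16K-page kernel, while querying the kernel makes it portable. A minimal illustration:

```python
import mmap
import os

# 4096 on most x86 kernels, 16384 on Asahi's 16K-page kernel.
PAGE = os.sysconf("SC_PAGESIZE")

def round_up_to_page(n):
    """Round a size up to a whole number of pages (ceiling division)."""
    return -(-n // PAGE) * PAGE

# Lower-level code (allocators, JITs, mappers) must align to the real page
# size; hardcoding 4096 in this math is exactly the class of bug that
# surfaced when Asahi exposed 16K pages.
buf = mmap.mmap(-1, round_up_to_page(100))  # anonymous, page-granular mapping
buf.close()
```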
I have an AMD Linux laptop I’ve been using for work.
It’s great. The battery life is great, and it’s quite fast with a lot of cores when I need to do my genetics runs (plugged in). Build quality isn’t bad, plus it’s affordable with lots of ports. After my initial transition away, I’m not missing my 2015 MacBook Pro.
Linux is the way to go. I don’t blame people with Apple hardware for wanting it. I just don’t feel the x86 side is as bad as everyone makes it out to be. We’ve come a long way since my first Linux laptop and its not-so-great battery life.
Aren't lots of things in Linux dependent on a small team of volunteers? I know the Linux Foundation owns the whole kernel, but in practice how many full-time people work on the ext4 driver or what have you?
/me glances at their full NixOS desktop that builds natively for x86_64-linux and aarch64-linux. Heck, most of it cross-compiles too. Native riscv64-linux needs more attention (upstream support is missing in places), but most of my config cross-compiles fine as well.
Every month that passes, with similar HN comments insisting it's a bad time... I wonder: "am I 'special' or just spoiled by nix(os/pks)". Or maybe, just maybe, people's expectations of their distros are shockingly low. And maybe rightly so at times.
The compute accelerator story on mainstream, non-patched Linux, with upstream software isn't that good at the moment. You're going to be waiting a while before you can do fun stuff like organize layers across the Neural Engine and GPU for ML models, something CoreML can do today. Compute using graphics APIs exists, but it isn't really the same and loses out on many features people practically want and are used to, and it moves forward much more quickly than graphics APIs e.g. Nvidia just released Heterogeneous Memory Management as stable in the open source GPU driver for x86. The Linux accelerator ecosystem in practice is just held together by Nvidia's effort, honestly.
We really need something like Mesa, but for compute accelerator APIs. I'm really hoping that IREE helps smooth out parts of the software stack and can fill in part of this, but the pieces aren't all put in place yet. You'll need the GPU for a substantial amount of accelerator work regardless of Neural Engine support.
I disagree that there is nothing lacking on these machines with Asahi, I still run into small nits all the time (from 16k page sizes biting back to software missing features). But my M2 Air is 100%, no-questions-asked usable as a daily driver and on-the-go hacking machine, it is fast as hell and quiet, it has nested virtualization and is the only modern ARM machine on the market, and I love it for that.
Is the Neural Engine/CoreML used in "normal" desktop apps on macOS, or is it limited to specialized ML centered apps? In other words, where should I expect performance improvements if there was a hypothetical Mesa for compute accelerators? Spontaneously, I can only think of image editors like Photoshop offering AI-based tools.
Has there been any ongoing work on this? It's been marked as "WIP" in GitHub for a while now, and I'd imagine it's one of the more complicated things to reverse-engineer.
For €2300 I assume you're looking at an M2 Pro model? Note that neither the M2 nor the M2 Pro Mac Mini currently has working display outputs[1], so no, you should not. Apple changed the way the display outputs work in M2, so they're now dependent on Thunderbolt/DP alt mode support, which is not yet implemented for any Apple silicon machine.
On the other hand a cheap M1 Mac Mini would make a great machine to try it out. The M1 Mac Mini is the best supported machine currently.
I don’t run Asahi on anything currently, but I do have two desktop Linux machines, an M1 Macbook, and have previously run Linux on an Intel Mac… I can see the argument for laptops based on battery life/heat/build quality, but for a desktop machine I’d need a lot of convincing to justify the price premium and risk of compatibility issues in choosing a Mac Mini over a SFF/USFF/Tiny desktop with fully supported hardware.
I’ve owned both Windows and Apple computers, quite a few of them, over the last 20 years. On average, the Apple machines last at least twice as long as the Windows machines while still being fully usable. One could argue, based on that basic math alone, that they are worth twice the price.
You can get a used M1 mini for more or less 400€.
Take a glimpse at what's going on in your local Facebook Marketplace; most likely you'll come across some nice offers.
Not the OP, but I got a 13" and a larger model Mac of the x86 variety when they were still reasonably young, and even though I eventually got all of the bits and pieces to work, it usually pays off to wait until a somewhat larger distro supports the hardware as well. That way you benefit from a much larger crowd of testers, and once they have no more issues you should be good to go.
Moneywise it was definitely worth it, both machines are still working many years later and have been pretty much trouble free after the initial bugs were ironed out.
If I was in the market for a new laptop right now I'd wait for a bit and then pull the trigger on the latest model with broad support.
I'd say getting a MacBook or a MacBook Air would be worth it, but rather than spending that much on a Mac Mini I'd probably get one of those new Ryzen mini-PCs from Beelink or Minisforum. You could get something with a 7735HS 8-core chip, terabytes of disk space, and a shitload of LPDDR5 RAM for 500€, and it's as small as the Mac Mini.
New M2 Mac Minis start at $600 (8-core CPU, 8GB RAM, 256GB SSD). You can probably find similarly specced x86 PCs for a comparable price, but this doesn't seem unreasonable. https://www.apple.com/shop/buy-mac/mac-mini
This is extremely gross 'Dang, and I'm wondering how you can possibly defend doing this. Seriously, why are you going to these lengths to evade a soft-block on links to specific sites that very clearly and explicitly are communicating that they don't want HN's traffic?
I wonder if the new Mac Pro's full PCI Express support resolves any of the limitations that prevent people from using GPUs over Thunderbolt on existing Apple Silicon hardware (this is apparently a hardware limitation).
Although the Mac Pro's PCIe extensibility makes it a pretty mystifying niche product from Apple without memory and GPU expandability, once Asahi Linux gets running on there you should be able to get the full abilities of the latest Vulkan and full OpenGL 4.6 by putting in a recent AMD card. The open source Radeon drivers should "just work" on ARM as they do in the Talos II POWER-based workstation, if they can be stably initialized, that is. Heck, Nvidia publishes a binary Linux aarch64 driver, and they sound petty enough with Apple to try to make that work.
You could have Asahi Linux running and delegate any not-yet-supported hardware to the 7 PCIe devices it supports. Would be quite a mighty ARM Linux workstation. Again though - only if Apple has the PCI Express support for it.
It seems they didn't make any massive changes and instead just put switches on the existing PCIe lanes. That probably doesn't bode well for full GPU support :(
This is great work and I commend it. But in other threads people are acting like Asahi Linux hardware support is 100% complete. My fear is that if I were to go this route and purchase the hardware, I'd be seeing a fraction of the performance and capability I would in macOS. To be honest, this blog post makes it seem like the project has a long way to go, not that it is near completion.
I just can't justify buying hardware from a company that is so hostile to developers and hackers as nice as it may be.
> I just can't justify buying hardware from a company that is so hostile to developers and hackers as nice as it may be.
I don't think it's hostile, I think they're just hands-off; they throw the hardware over the fence and say, "if you wanna make use of it, here's our software; if you don't like our software, sorry no docs but you're free to write your own". Which is exactly what's happening.
I mean it would be nice if Apple had released more documentation, but I totally understand if they don't want the burden of supporting it.
First, personally I don't care what hardware or software people use; if they are happy with the tools they are using, then that's good.
That said, Apple has been very hostile to hackers over the years imo. Hardware being hard to repair, access, upgrade, etc. I think at one point they were making it virtually impossible to replace components because they were serial locked.
As far as I am aware, the progress Apple has made has been in response to public image issues or changes in consumer laws within the EU.
Plus, Apple software is heavily indebted to open source software, so they could very easily be releasing drivers for their hardware instead of relying on the community to do reverse engineering.
There's a line somewhere, and I think it's different for every person. For a lot of people, the pricing itself is just outright hostile. 8GB of RAM in a base model Mac is not future-proofed in an age of local AI models, and paying to upgrade it gets expensive, fast. For others the OS is hostile when it unceremoniously drops support for $THING they use, or because it has the gall to show them ads. For others yet, the hardware is hostile because it's basically a black box that Apple refuses to document in order to maintain a monopoly on fixing them.
Apple is one of the few companies smart enough to deliberately do this. It is both a testament to ability to do brilliant things, and akin to being trapped in a room with a lion that has twice your SAT score. The "golden handcuffs", as they say.
> Regarding the capabilities not sure which one you miss. Do you plan to use it for development, or you want some kind of gaming/multimedia setup?
> It's the other way around. It has been usable as a daily driver for ages.
Honest questions since I haven't been paying attention to Asahi for some time now:
- Does hardware accelerated video decoding work? Including in Firefox?
- Does sleep work properly or do I get significant battery drain after leaving it sleeping during the night time? Also, does it wake up from sleep reliably? Like if you open/close the lid 100 times in a row would it crash?
- How is wifi? Does it work as fast and reliably on Linux as the Intel cards? Supports latest WiFi standard and 6ghz?
This would be my most basic questions to buy MacBook as a daily-driver Linux laptop.
I don't see why it is unreasonable to desire Apple to provide documentation and open up their hardware. Honestly I'd ask the same as any other hardware vendor. Intel for example provides very detailed technical documents on their new GPUs (A770).
You don't create a new bootloader that allows users the freedom to run an unsigned third party OS without having it degrade the system's security if and when they boot the native OS because you are "hostile to developers and hackers".
> I'd be seeing fraction of the performance and capability
You'd temporarily lose some hardware support (documented) while it's being worked on. But I'm not sure why you'd expect to lose performance? This is running native code. The same binary will run the same on both systems (+/- the LLVM version differences).
Does the CPU run at similar frequencies between Mac OS and Linux (since they're writing their own drivers this isn't guaranteed)? Is the scheduling done similarly? Are there any special hardware modes you have to activate with e.g. binary blobs?
There are a bunch of factors that could affect performance even under the same OS (try underclocking your CPU or play around with schedulers). Given the mostly non-existent documentation from Apple I'd strongly suspect that average-case performance will stay worse on the Linux side for a long time.
E.g. the performance you can get out of the GPU at the moment is a subset of what you can get out of the hardware. Or as another more generic example, until this latest release CPU boost states weren't enabled due to lack of proper cpuidle driver which resulted in regressed single thread performance.
There is nothing inherent about running Linux that requires it to be slower; in some cases it is or will be even faster. But the lack of full support for everything does actually impact performance right now. It has been getting better with time.
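As a concrete way to observe the boost-state point above, the cpufreq sysfs limits can be read directly (standard kernel paths; whether Asahi's cpufreq driver populates them depends on the kernel version you run):

```python
from pathlib import Path

def max_freq_khz(cpu=0):
    """Read the highest frequency cpufreq will grant a core, in kHz.
    Returns None when no cpufreq driver is bound to that core, which is
    the situation that cost single-thread performance on older kernels."""
    path = Path(f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/cpuinfo_max_freq")
    try:
        return int(path.read_text())
    except (FileNotFoundError, ValueError):
        return None
```

Comparing this value across efficiency and performance cores (and against macOS's `powermetrics` output) is a rough but easy sanity check.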
Specifically GPU drivers, which can dramatically impact performance. Especially if I am attempting to run any kind of ML workload from Linux. I'm assuming it's basically a non-starter at this point and one is forced to use Mac OS.
> I just can't justify buying hardware from a company that is so hostile to developers and hackers as nice as it may be.
While I understand what you’re saying, the developers of Asahi have said before that Apple appears to be going out of their way to leave things open for them.
Apple’s boot security is strong enough that they could easily prevent anyone from ever running a non-Apple-blessed OS. But Apple made it fully supported to boot directly into a non-Apple OS as your primary OS.
They’re not helping by giving code or documentation, but they’re not putting up roadblocks. Apple seems to be happy with a stance of benevolent indifference.
It’s not official support. But a number of PC vendors don’t give official support either.
A car with one seat seems 100% complete if your use cases only involve you driving it alone. Asahi Linux is absolutely in that kind of spot right now. For some people there is 0 reason to wait, for others it's not even worth booting. If you have fear it's not fully complete enough, I'd say trust those feelings. At least right now.
There have been multiple reports over the last couple of years that Apple has been informally helping the Asahi Linux team internally to make it run well on their hardware. Apple cannot come out and officially support another operating system, of course, but they are aware of the interest and are helping make it happen in an unofficial capacity.
> I just can't justify buying hardware from a company that is so hostile to developers and hackers as nice as it may be.
As opposed to what company besides those tiny ones? Almost all of them are closed-source only and drivers have been painstakingly reversed engineered over decades.
Yeah, I'm not gonna run out and buy a new Macbook just to wipe it. But these M1 machines are going to be things you just have lying around sometime soon. If my wife upgrades, for example, I know exactly what I'm doing with her M1 Air. :D
Asahi is designed to be dual-booted right now (in fact you have to go off the beaten path to not do that), and it even uses the native Apple boot UI to let you pick your OS
Thanks to the entire Asahi team, your work is truly incredible and so far beyond my pay grade that words fail me. I honestly recently tried and very much struggled to communicate why I was so amazed by your project when talking with friends.
For anyone interested in the GPU side, I can't recommend Lina's streams[1] enough.
Sample-rate shading is exceptionally rare (MSAA is rare-ish these days, but I only know of exactly one title that has shipped sample-rate shading), so requiring a basic compiler transform to handle it, especially when they can do so easily because of their tiler architecture, is pretty sane.
"Also in this update:
We now have a cpuidle driver, which significantly lowers idle power consumption by enabling deep CPU sleep. You should also get better battery runtime both idle and during sleep, especially on M1 Pro/Max machines.
Thanks to the cpuidle driver, s2idle now works properly, which should fix timekeeping issues causing journald to crash.
Also thanks to the cpuidle driver, CPU boost states are now enabled for single- and low-threaded workloads, noticeably increasing single-core performance.
Thermal throttling is now enabled, which should keep thermals in check on fanless (Air) models. There was never a risk of overheating (as there are hard cutoffs), but the behavior should now more closely match how macOS works, and avoid things getting too toasty on your lap.
Random touchpad instability woes should now finally be gone, thanks to bugfixes in both the M1 (SPI) and M2 (MTP) touchpad drivers.
A bugfix to the audio subsystem that fixes stability issues with the headphone jack codec.
New firmware-based battery charge control, which offers fixed a 75%/80% threshold setting. To use this, you need to update your system firmware to at least version 13.0, which you can do by simply updating your macOS partition to at least that version or newer. This new charge control method also works in sleep mode.
U-Boot now supports the Type A USB ports (and non-TB ports on the iMac), so you can use a keyboard connected to any port to control your bootloader.
And last but not least, this kernel release includes base support for the M2 Pro/Max/Ultra SoCs! We are not enabling installs on these machines yet as we still have some loose ends to tie, but you can expect to see support for this year's new hardware soon."
This is interesting, am I correct in thinking this a feature implemented by Apple and now supported by the Asahi team? Does that mean that macOS supports this charge control feature?
I really hope Apple brings the same charge limiting to iPhone as well.
Yes, battery charge control is a hardware(/firmware) feature supported on other modern laptops as well, such as the Lenovo ThinkPads, but it's not a standard so it requires explicit driver and OS support.
OpenBSD recently added support for this as well for both of these implementations (Apple silicon and ThinkPads).
https://marc.info/?l=openbsd-cvs&m=168436150408382&w=2
https://marc.info/?l=openbsd-cvs&m=168458409622780&w=2
https://marc.info/?l=openbsd-cvs&m=168521616605492&w=2
I know certain Android/Samsung phones support this as well, not sure about iOS/macOS.
https://www.makeuseof.com/why-your-iphone-stops-charging-at-...
I do not follow Apple's release notes so I cannot compare.
s/jacking/cracking
Amazing
https://asahilinux.org/2021/03/progress-report-january-febru...
Please consider donating if you have the means. https://asahilinux.org/support/
I find it pretty tasteless for HN to do that.
Really thank god for asahi being a thing.
This. The volunteer pool is too small. And you're supporting a shitty company.
Nothing wrong with being a late adopter. Nothing wrong with being an early adopter either.
Namely?
https://oftc.irclog.whitequark.org/asahi/search?q=ane
https://oftc.irclog.whitequark.org/asahi/search?q=neural
https://oftc.irclog.whitequark.org/asahi-dev/search?q=ane
On the other hand a cheap M1 Mac Mini would make a great machine to try it out. The M1 Mac Mini is the best supported machine currently.
[1] https://github.com/AsahiLinux/docs/wiki/Feature-Support#m2-d...
Moneywise it was definitely worth it, both machines are still working many years later and have been pretty much trouble free after the initial bugs were ironed out.
If I was in the market for a new laptop right now I'd wait for a bit and then pull the trigger on the latest model with broad support.
That’s the only thing I’m really missing currently.
Please don't forget to donate if you get value from Asahi.
This is tremendously detailed and laborious work that people are doing in their free time.
https://asahilinux.org/support/
https://news.ycombinator.com/item?id=3132752
New silent HN policy to avoid showing its users that some people don't like them.
Although the Mac Pro's PCIe extensibility makes it a pretty mystifying niche product from Apple without providing memory and GPU expandability, once Asahi Linux gets running on there you should be able to get the full abilities of the latest Vulkan and full OpenGL 4.6 by putting in a recent AMD card. The open source Radeon drivers should "just work" on ARM as they do in the Talos II POWER-based workstation, if they can be stably initialized that is. Heck, Nvidia publishes a binary Linux aarch64 driver and they sound petty enough with Apple to try to make that work.
You could have Asahi Linux running and delegate any not-yet-supported hardware to the seven PCIe slots it provides. Would be quite a mighty ARM Linux workstation. Again though - only if Apple has the PCI Express support for it.
It seems they didn't make any massive changes and instead just put switches on the existing PCIe lanes. That probably doesn't bode well for full GPU support :(
Seems designed more to support lots of low/medium bandwidth cards, not a few high bandwidth cards.
I just can't justify buying hardware from a company that is so hostile to developers and hackers as nice as it may be.
I don't think it's hostile, I think they're just hands-off; they throw the hardware over the fence and say, "if you wanna make use of it, here's our software; if you don't like our software, sorry no docs but you're free to write your own". Which is exactly what's happening.
I mean it would be nice if Apple had released more documentation, but I totally understand if they don't want the burden of supporting it.
That said, Apple has been very hostile to hackers over the years imo. Hardware being hard to repair, access, upgrade, etc. I think at one point they were making it virtually impossible to replace components because they were serial locked.
As far as I am aware, any progress Apple has made has been in response to public image issues or changes in consumer laws within the EU.
Plus Apple's software is heavily indebted to open source software, so they could very easily be releasing drivers for their hardware instead of relying on the community to reverse engineer it.
Apple is one of the few companies smart enough to deliberately do this. It is both a testament to their ability to do brilliant things, and akin to being trapped in a room with a lion that has twice your SAT score. The "golden handcuffs", as they say.
The performance is there, it has been running stuff much faster than the vast majority of Intel/AMD laptops for over a year.
Regarding the capabilities, I'm not sure which ones you miss. Do you plan to use it for development, or do you want some kind of gaming/multimedia setup?
>To be honest this blog post seems like the project has a long ways to go, not that it is nearly completion.
It's the other way around. It has been usable as a daily driver for ages.
>I just can't justify buying hardware from a company that is so hostile to developers and hackers as nice as it may be.
Then don't?
> It's the other way around. It has been usable as a daily driver for ages.
Honest questions since I haven't been paying attention to Asahi for some time now:
- Does hardware accelerated video decoding work? Including in Firefox?
- Does sleep work properly or do I get significant battery drain after leaving it sleeping during the night time? Also, does it wake up from sleep reliably? Like if you open/close the lid 100 times in a row would it crash?
- How is WiFi? Does it work as fast and reliably on Linux as the Intel cards? Does it support the latest WiFi standard and 6 GHz?
This would be my most basic questions to buy MacBook as a daily-driver Linux laptop.
You'd temporarily lose some hardware support (documented) while it's being worked on. But I'm not sure why you'd expect to lose performance. This is running native code; the same binary will run the same on both systems (+/- the LLVM version differences).
There are a bunch of factors that could affect performance even under the same OS (try underclocking your CPU or play around with schedulers). Given the mostly non-existent documentation from Apple I'd strongly suspect that average-case performance will stay worse on the Linux side for a long time.
There is nothing inherent about running Linux that will require it be slower, in some cases it will/is even faster, but the lack of everything being fully supported does actually impact performance right now. It has been getting better with time.
While I understand what you’re saying, the developers of Asahi have said before that Apple appears to be going out of their way to leave things open for them.
Apple’s boot security is strong enough that they could easily prevent anyone from ever running a non-Apple-blessed OS. But Apple made it fully supported to boot directly into a non-Apple OS as your primary OS.
They’re not helping by giving code or documentation, but they’re not putting up roadblocks. Apple seems to be happy with a stance of benevolent indifference.
It’s not official support. But a number of PC vendors don’t give official support either.
Apple officially supported running Windows on Macs for many years.
Why?
As opposed to what company besides those tiny ones? Almost all of them are closed-source only, and drivers have been painstakingly reverse engineered over decades.
For anyone interested in the GPU side, I can't recommend Lina's streams[1] enough.
[1] https://www.youtube.com/@AsahiLina/streams