Readit News
Posted by u/cedws 4 years ago
Ask HN: How are you dealing with the M1/ARM migration?
I love the M1 chips. I use a 2021 MacBook both personally and professionally. My job is DevOps work.

But the migration to ARM is proving to be quite a pain point. Not being able to just do things as I would on x86-64 is damaging my productivity and creating a necessity for horrible workarounds.

As far as I know none of our pipelines yet do multi-arch Docker builds, so everything we have is heavily x86-64 oriented. VirtualBox is out of the picture because it doesn't support ARM. That means other tools that rely on it are also out of the picture, like Molecule. My colleague wrote a sort of wrapper script that uses Multipass instead but Multipass can't do x86-on-ARM emulation.

I've been using Lima to create virtual machines which works quite well because it can do multiple architectures. I haven't tested it on Linux though, and since it claims to be geared towards macOS that worries me. We are a company using a mix of MacBooks and Linux machines so we need a tool that will work for everyone.
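For reference, Lima lets you pin the guest architecture explicitly, which is what makes the x86-64-on-ARM case work. A sketch (instance names are arbitrary; the `--arch` flag and `template://` URLs match recent limactl versions, so check `limactl start --help` on yours):

```shell
# Create an x86_64 Ubuntu VM on an Apple Silicon host (emulated via QEMU),
# and a native aarch64 one for comparison.
limactl start --name intel-vm --arch x86_64 template://ubuntu
limactl start --name arm-vm --arch aarch64 template://ubuntu

# Run a command in the emulated VM; uname should report x86_64
limactl shell intel-vm uname -m
```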

The virtualisation situation on MacBooks in general isn't great. I think Apple introduced Virtualization.framework to try and improve things but the performance is actually worse than QEMU. You can try enabling it in the Docker Desktop experimental options and you'll notice it gets more sluggish. Then there's just other annoyances, like having to run a VM in the background for Docker all the time because 'real' Docker is not possible on macOS. Sometimes I'll have three or more VMs going and everything except my browser is paying that virtualisation penalty.

Ugh. Again, I love the performance and battery life, but the fragmentation this has created is a nightmare.

How is your experience so far? Any tips/tricks?

rgovostes · 4 years ago
I got an M1 MacBook Pro from work last year, and expecting to pay the price for being an early adopter, I set up my previous Intel-based MBP nearby in case I ran into any problems or needed to run one of my existing virtual machines. (I do varied development projects ranging from compiling kernels to building web frontends.)

In reality I have hardly turned on the Intel MBP at all since I got it. At all.

Docker and VMware Fusion both have Apple Silicon support, and even in "tech preview" status they are both rock solid. Docker gets kudos for supporting emulated x86 containers, though I rarely use them.

I was able to easily rebuild almost all of my virtual machines; thanks to the Raspberry Pi, almost all of the packages I use were already available for arm64, though Ubuntu 16.04 was a little challenging to get running.

I also had to spend an afternoon updating my CI scripts to cross-compile my Docker containers, but this mostly involves switching to `docker buildx build`.
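For anyone making the same CI change, the core of it is usually just a couple of commands (registry, image name, and tag below are placeholders):

```shell
# One-time: create and select a builder that can target multiple platforms
docker buildx create --name multiarch --use

# Build for both architectures and push the multi-arch manifest
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --tag registry.example.com/myapp:latest \
  --push .
```

Note that a multi-platform result can't be loaded into the local image store; you generally either `--push` it to a registry or build one platform at a time.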

Rosetta is flawless, including for userland drivers for USB and Bluetooth devices, but virtually all of my apps were rebuilt native very quickly. (Curious to see what, if anything, is running under translation, I just discovered that WhatsApp, the #1 Social Networking app in the App Store, still ships Intel-only.)

barkingcat · 4 years ago
Rosetta is definitely not flawless. This type of jit/translation does have limits.

Oftentimes people experience different levels of difficulty using Apple Silicon precisely because my workload is not yours, and yours is different again from OP's.

So I feel this particular Ask HN is more about wondering how different everyone's workflows are, and how that impacts M1 usage.

I envision that workflow options/pathways will start converging into one "way", which is the Apple way. You're already getting shunted into relying purely on Metal for GPU acceleration, and you can see the plurality of GPGPU libraries starting to converge on only the Apple-blessed, -authorized, and -optimized version.

There are people fighting against this, for example the Linux on Apple Silicon project bringing up the GPU, but it's slow going.

Give it another few years, and people will stop using x, y, or z frameworks and only use whatever APIs Apple gives us, because that is the Apple Way.

Proceed at your own peril. The future is fast, but there is only one road.

rgovostes · 4 years ago
I use "flawless" in the sense that I have not seen a single incompatibility or even regression in any of the ordinary macOS software I have used under its translation, which is exactly what it was designed to do. It has surprisingly few documented limitations, like lacking AVX vector instructions:

https://developer.apple.com/documentation/apple-silicon/abou...

There are a handful of apps that aren't supported, but few of these are popular apps. Virtualbox is notable, but unsurprising: Rosetta is not designed for x86_64 virtual machines, and Virtualbox doesn't support arm64. (I submitted a correction to the Wine entry, since wine64 has worked under Rosetta 2 for a year.)

https://isapplesiliconready.com/for/unsupported

https://liliputing.com/2021/06/wine-6-0-1-lets-you-run-windo...

spockz · 4 years ago
Just to note: you can run `docker buildx install` to make buildx the default backend of `docker build`, which saves you from having to switch commands everywhere. I haven't figured out how to make it build multiple architectures by default, though, i.e. without having to pass the `--platform` flag.

On my M1 I see 16x performance differences in builds in favour of native over emulated. Even simple shell scripts run slowly or seem to stall when emulated.

Deleted Comment

Operyl · 4 years ago
If you open Activity Monitor you can see what processes are running in "Apple" or "Intel"!
eslaught · 4 years ago
Are there more details on "docker buildx build" that you can point us to? The command line reference doesn't seem especially helpful: https://docs.docker.com/engine/reference/commandline/buildx_...

E.g. if I wanted to start building ARM binaries on a x86 host, is that the sort of thing this would enable?

verst · 4 years ago
You can build multiplatform images for x86 and ARM (e.g. M1 Mac) like so:

docker buildx build --platform linux/amd64,linux/arm64 .

I constantly use buildx to build x86 images on my M1 so that I can run these images in my x86 Kubernetes cluster.

https://docs.docker.com/buildx/working-with-buildx/

eslaught · 4 years ago
Just to follow up on my own question (though I appreciate the siblings):

I found this article with some overview and example command lines: https://www.docker.com/blog/multi-arch-images/ . As best I can tell, you don't actually need a custom builder. You can just skip the `docker buildx create` and go straight to `docker buildx build` after one workaround below.

I needed the workaround from this comment to make it work (to install qemu? not sure): https://github.com/docker/buildx/issues/495#issuecomment-754...

Overall a very slick experience for going from zero to multiarch containers. Well done, Docker.
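The workaround in that issue generally amounts to registering QEMU's binfmt_misc handlers so the daemon can run and build foreign-architecture images. One commonly cited way (the image name is the one used in the buildx docs; verify before relying on it):

```shell
# Register QEMU emulators for foreign architectures with the kernel
docker run --privileged --rm tonistiigi/binfmt --install amd64,arm64

# Confirm which platforms the current builder now supports
docker buildx ls
```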

0x29aNull · 4 years ago
...want to get rid of your old Intel MBP? The video on my 2017 is dying.

Deleted Comment

rgovostes · 4 years ago
Unfortunately it belongs to work, and not to me. Good luck finding a replacement, though.
doublepg23 · 4 years ago
I recommend the r/AppleSwap or r/HardwareSwap subreddit.
hansor · 4 years ago
Why is your login green, mate?
kcartlidge · 4 years ago
> Curious to see what, if anything, is running under translation

There's a useful app called Silicon Info on Github (https://github.com/billycastelli/Silicon-Info) and also on the Mac App Store.

It adds a menu bar icon that switches according to the currently-focused app's architecture.

tluyben2 · 4 years ago
Hmm, Docker is very buggy for my team and myself on M1. Coming from an x220, where it was flawless, on M1 it quite often just 'dies'. When you paste the message you get from running any docker command after that into Google, you see that many people have this issue, and the fix is: restart Docker. This didn't happen under Linux. I'm not sure if it happened on x86 Macs, as I never used Docker there.
rgovostes · 4 years ago
To be clear I think the support for Apple Silicon has been solid, but Docker Desktop for Mac regardless of CPU architecture has bugs, and I've had it get stuck as you describe on both. Right now for me it seems incapable of auto-updating, which I assume is unrelated to Apple Silicon.
sally1620 · 4 years ago
Rosetta's biggest flaw is lack of AVX support.

We had to put in so much effort just to run things on Rosetta, because all of our compiled code had AVX enabled. We also needed to chase down pre-compiled binaries and re-compile them without AVX; we still haven't finished this work.

pishpash · 4 years ago
What's also nice is that iPhone/iPad ARM apps can run as desktop apps on M1, so when there was no native desktop app replacement, sometimes there was that other native app.
andrekandre · 4 years ago
vmware on m1 supports x86_64 emulation?
rgovostes · 4 years ago
No, and they've announced they do not plan to support it. However, you can use x86_64 containers with Docker, and x86_64 virtual machines through QEMU or Lima.
dev_tty01 · 4 years ago
At least some aspects of this issue are getting better as we speak. The latest macOS (in beta) not only supports virtualizing ARM Linux but also enables the ARM Linux system to use Apple's speedy Rosetta 2 x86 binary translator (with JIT support) to run x86 programs within the ARM Linux VM. Based on descriptions, the rest of the hypervisor VM framework has also matured substantially this release.

https://developer.apple.com/documentation/virtualization/run...

https://developer.apple.com/videos/play/wwdc2022/10002/

If you are not familiar, Rosetta is how Apple Silicon Macs run existing Mac x86 binaries, and it is highly performant. It does binary pre-compilation and caching, and it also works with JIT systems. They are now making that available within Linux VMs running on Macs.
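Mechanically, the guest side of this is just a virtiofs mount plus a binfmt_misc registration. A sketch (the exact ELF magic/mask strings are elided here as placeholders; Apple's Virtualization documentation gives them verbatim):

```shell
# Inside the ARM Linux guest of a Virtualization.framework VM that shares
# Rosetta via VZLinuxRosettaDirectoryShare (conventionally tagged "rosetta"):
sudo mkdir -p /media/rosetta
sudo mount -t virtiofs rosetta /media/rosetta

# Register Rosetta as the binfmt_misc interpreter for x86-64 ELF binaries.
# <elf-magic> and <mask> are placeholders; copy the real strings from
# Apple's documentation.
echo ':rosetta:M::<elf-magic>:<mask>:/media/rosetta/rosetta:F' \
  | sudo tee /proc/sys/fs/binfmt_misc/register
```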

mdp2021 · 4 years ago
> highly performant

Last thing I read, 70% of the native performance was shown by running GeekBench through Rosetta (with a few odd results noted).

If somebody has better info...

Edit: I see that Nov 2020 checks returned an 80% performance, and there was discussion on HN at (at least) https://news.ycombinator.com/item?id=25105597

dev_tty01 · 4 years ago
Here are my numbers for the original M1 (not Pro or Max) soon after release:

Geekbench single core, M1 native (ARM, macOS): 1734
Geekbench single core, Windows-on-ARM VM on M1: 1550
Geekbench single core, i9 MB Pro native (x86, macOS): 1138
Geekbench single core, x86 under Rosetta on M1: 1254

Yes, 72% x86 Rosetta vs. M1 Native. However, x86 Rosetta on M1 was faster than the previous i9 2019 Macbook Pro x86 native. I consider that to be performant for running code that was compiled for a very different architecture.

sircastor · 4 years ago
This might be an unpopular opinion, but I really think people should ignore bench scores and run the processes they need themselves. See what it feels like, and how comfortable you are with that.

Benchmarks are good for bragging rights and maybe convincing over-zealous accounting to approve a purchase (but even then that’s probably not all there is to it.)

epolanski · 4 years ago
Those are terrific numbers for emulation.

Anyway, I can say that my colleague's M1 using Rosetta is as fast as or faster than my MBP i9 2020.

watermelon0 · 4 years ago
I have benchmarked x86 on an ARM Linux VM with Rosetta, and while Geekbench 5 shows similar performance between the ARM and x86 versions (for both single and multi core), this does not translate to actual real-world use cases.

When benchmarking x86 and ARM containers, our application seems to be around ~5x slower with x86-rosetta, and similar slowdowns can be observed for mysql-server or even just `apt install`.

This is still significantly better than using qemu emulation, but it's not really usable in our case.

I've also encountered segmentation faults when running x86 `npm` inside Docker, so couldn't even install packages, but didn't dig further as to what's the cause.

(Note: I've created a simple macOS app using Virtualization framework, enabled Rosetta, and loaded Ubuntu Focal. I've installed the latest version of Docker, which automatically used `rosetta` when encountering x86 executables. Maybe this setup is not ideal.)

bjornsing · 4 years ago
> It does binary pre-compilation and cacheing. It also works with JIT systems.

Much more impressively it also leverages a custom hardware x86-like memory model unique to the M1/Apple ARM chips. That's where most of the performance really comes from, as I understand it.

saagarjha · 4 years ago
Not really; you can skip the barriers as Windows does and get mostly-decent emulation.
azinman2 · 4 years ago
My understanding is the AOT won’t be available to Linux; it’s JIT only.
runjake · 4 years ago
The WWDC video is unclear but seems to imply that it works the exact same as on macOS.

Hopefully, this is the right timestamp:

https://developer.apple.com/videos/play/wwdc2022/10002/?time...

dev_tty01 · 4 years ago
Interesting. Where did you see that? I'm still trying to get a handle on the latest changes.
porcoda · 4 years ago
I'm in a similar boat - love the performance/battery of my M1 MacBook Air, but the ecosystem is just too messy at the moment for me. I have a few tools I need to use that haven't been making official Apple Silicon releases yet, because GitHub Actions doesn't fully support Apple Silicon. The workaround involves maintaining two versions of Homebrew, one for ARM and one for x86-64, and then being super careful not to lose track of whether you're working in an ARM environment or an x86-64 one. It's too much of a pain for me to keep straight (I admit it - I lack patience and am forgetful, so this is a bit of a "me" problem versus a tech problem).

My solution was to give up using my M1 mac for development work. It sits on a desk as my email and music machine, and I moved all my dev work to an x86 Linux laptop. I'll probably drift back to my mac if the tools I need start to properly support Apple Silicon without hacky workarounds, but until GitHub actions supports it and people start doing official releases through that mechanism, I'm kinda stuck.

It is interesting how much impact GitHub has had by not having Apple Silicon support. Just look at the ticket for this issue to see the surprisingly long list of projects that are affected. (See: https://github.com/actions/virtual-environments/issues/2187)

spockz · 4 years ago
I'm having the same issue on Azure DevOps. The only way forward seems to be running your own ADO agents on ARM machines you've managed to arrange. ARM on Azure is a private beta that you have to subscribe to.

That wouldn't be too much of an issue if you could just cross-compile like you can with Go. However, GraalVM can't do this yet.

CoolCold · 4 years ago
Can you describe this case in a bit more detail? I'm trying to understand for myself whether it would hit any problems for our dev team anytime soon. So far I've heard no complaints/requests from them about Apple Silicon. Should we get prepared for it, or is this a very specific case?

In a nutshell, I don't see how having Apple Silicon locally creates the problem: if your non-local envs (dev, prod, stage) are running on x86 Linux or even ARM Linux, there shouldn't be any issue building for those architectures on your build farms anyway.

I may be missing some important part here.

brundolf · 4 years ago
> It is interesting how much impact GitHub has had by not having Apple Silicon support

Putting on my tin-foil hat for a sec: GitHub is owned by Microsoft, who would really stand to benefit from slowing down Apple Silicon adoption a bit...

michaelt · 4 years ago
Alternative theory: Apple doesn't offer an M1 server. Github doesn't offer an M1 build server because M1 servers don't exist.
jeppester · 4 years ago
There are many comments here that go: "we had to do all sorts of configuration for this to work, but it's been great and we like it".

As a primarily Linux user these feel like very familiar stories.

It's kinda refreshing to hear those stories from mac users. Maybe we are not so different after all.

d3nj4l · 4 years ago
Mac-using devs are mostly Linux-using devs who prefer a non-free distro ;)
mrtksn · 4 years ago
That's not that far from reality. It's essentially like using Linux with a really good graphical UI, ecosystem/3rd-party integrations that just work, and fully supported, stellar hardware.

After all, macOS is certified Unix and Linux is Unix-like.

jachee · 4 years ago
Some of us cut our teeth on BSD even before linux and Darwin is like a familiar friend’s cousin from across town.
nottorp · 4 years ago
> Mac-using devs are mostly Linux-using devs who prefer a non-free distro ;)

A working GUI, to be exact. Source: switched to Macs from Linux in 2013.

dmitriy9000 · 4 years ago
non-free distro and nice trackpad :)
Sakos · 4 years ago
My company gave me a choice between a Thinkpad running Windows and a Macbook. So yeah, of course I chose the MB.
colordrops · 4 years ago
Or work at a company where Linux isn't possible.
XorNot · 4 years ago
Mac-using devs will file tickets that CI/CD is not working, and don't know how to configure their systems to use GNU coreutils.

They all just use VS Code or JetBrains and that's about it, so hell if I know why they need a $3000 machine to run shell commands they don't understand.

I desperately wish one of the big boys would push enterprise Linux dev machines hard.

tonyhb · 4 years ago
1. Run ARM-based Debian using Parallels, headless, via `prlctl`. SSH in and use tmux.

2. Everything you install will be arm based. Docker will pull arm-based images locally. Most every project (that we use) now has arm support via docker manifests.

3. Use binfmt to cross-compile x86 images within prlctl, or have CI auto-build images on x86 machines.

That pretty much does it.
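Step 1 above, sketched (the VM name is hypothetical; `prlctl` ships with Parallels Desktop):

```shell
# Start the VM without opening a console window
prlctl start debian-arm

# Find its IP address, then SSH in and attach a tmux session
prlctl list --info debian-arm | grep -i 'ip '
ssh user@<vm-ip> -t 'tmux new -A -s main'
```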

aidos · 4 years ago
Yup. We have a lot of complex dependencies so a couple of us got M1s so we could charge into it headfirst to get it sorted. It wasn’t too bad. We had a couple of 3rd party things stuck on x86 so we emulated them on qemu within the vm. Slow, but ok (eventually we replaced them).

We were using UTM but have recently switched to Parallels, which is nice.

Our prod stayed on x86 but we’ve started moving to graviton3 which is better bang for buck. Suspect it’ll end up being a common story for others too.

M1s are just such nice machines that I'd go quite out of my way to stay on them now.

nonethewiser · 4 years ago
How would you handle docker images in a team where some use M1 and others use intel?
pmontra · 4 years ago
A customer of mine has two different Dockerfiles, one for Intel and one for M1. Deploys are Intel and are built on a CI server; no image gets out of our laptops.
jacquesm · 4 years ago
For now I'm ignoring it, I'm usually about two to three years behind the curve and by then the bugs have typically been ironed out. I won't be running macOS anyway, but will wait until a fully supported version of Debian is out there that uses all of the peripherals properly. They call it the bleeding edge for a reason and I see no reason to spend extra effort that isn't driven by an immediate need. I like tech, I can't stand fashion.
endorphine · 4 years ago
I'm in exactly the same boat. I'd love the quality of the hardware, but only if my current software experience (I'm on Debian) is not degraded. Let's see...
hiq · 4 years ago
> I'm usually about two to three years behind the curve

This makes so much sense now for many workflows. I no longer complain about my computers being slow, so I don't even think of upgrading, and if something's annoying it's mostly about software rather than hardware anyway, so there's no point in upgrading, although the M1 seems to have convinced a lot of people otherwise. Looking forward to adopting this new tech... in 3-5 years or so.

This also makes it cheaper to upgrade through second-hand devices; just stay one or two models behind.

nojito · 4 years ago
You will be waiting far more than 3 years.
plonk · 4 years ago
Asahi Linux has released an alpha version that supports quite a lot of features.
johnklos · 4 years ago
Simple: I never target specific CPUs to begin with ;)

I'm only half joking. I'm in the group of people who know that Docker is a security nightmare unless you're generating your Docker images yourself, so wherever I've had to support it, I insist on that. If you don't use software that's either processor-centric (and therefore buggy, IMHO) or binary-only, then this is straightforward and a win for everyone.

Run x86 and amd64 VMs on real x86 and amd64 servers, and access them remotely, like we've done since the beginning of time (teletypes predate stored program electronic computers).

Since Docker is x86 / amd64 centric, treat it like the snowflake it is, and run it on x86 / amd64.

danudey · 4 years ago
Can you elaborate on what part of docker is a security nightmare?
kccqzy · 4 years ago
Dockerd is a daemon that runs with very high privileges and does too many things. People hate using "sudo docker" so they add themselves to the docker group. Now congratulations you are effectively running as root all the time.
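A one-liner shows why that's root-equivalent: anyone who can talk to the daemon can ask it to mount the host's root filesystem into a container.

```shell
# From any account in the `docker` group: full read/write access to the
# host filesystem, no sudo involved
docker run --rm -it -v /:/host alpine chroot /host /bin/sh
```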
ThemalSpan · 4 years ago
On the whole it's been good.

I work on scientific software, so the biggest technical issue I face day-to-day is that OpenMP-based threading seems almost fundamentally incompatible with the M1.

https://developer.apple.com/forums/thread/674456

The summary of the issue is that OpenMP threaded code typically assumes that a) processors are symmetric and b) there isn't a penalty for threads yielding.

On M1 / macOS, what happens is that during the first OpenMP for loop, the performance cores finish much faster, their threads yield, and then they are forever scheduled on the efficiency cores which is doubly bad since they're not as fast and now have too many threads trying to run on them. As far as I can tell (from the linked thread and similar) there is not an API for pinning threads to a certain core type.

physicsguy · 4 years ago
Can you not do this using the CPU affinity environment variables and just ignore the efficiency cores? I was under the impression you could bind to specific cores with:

GOMP_CPU_AFFINITY="1 2 5 6"

With thread 1 bound to core 1, thread 2 to core 2, thread 3 to core 5, and thread 4 to core 6. I don't have an M1 to play around on, but I'd have assumed the cores have fixed IDs.

Aside from that, if the workload is predictable in time, using a more complex scheduling pattern might help. You could perhaps look at how METIS partitions the workload, but see if it’s modifiable by adding weights to the cores reflective of their relative performance. Generally, to get good OMP performance I always found it better to treat it almost like it’s not shared memory, because on HPC clusters, you have NUMA anyway which drags performance down once you have more threads than a single processor has cores in the machine
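On Linux with GNU libgomp, that binding would look like this (the program name is hypothetical; whether macOS honors such hints is exactly the open question here):

```shell
# Pin 4 OpenMP threads to specific core IDs via libgomp's extension
OMP_NUM_THREADS=4 GOMP_CPU_AFFINITY="1 2 5 6" ./my_openmp_app

# Portable OpenMP 4.0+ equivalent using OMP_PLACES/OMP_PROC_BIND
OMP_NUM_THREADS=4 OMP_PLACES="{1},{2},{5},{6}" OMP_PROC_BIND=close ./my_openmp_app
```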

ThemalSpan · 4 years ago
Unfortunately, the thread affinity api on m1 doesn't work that way, at least based on what I've been able to understand by reading here: https://developer.apple.com/forums/thread/703361 and more specifically this linked source file: https://github.com/apple-oss-distributions/xnu/blob/bb611c8f...

I agree with your other points though!

Deleted Comment