A lot of Windows features depend on Hyper-V. Once it is enabled, Windows is not booted directly any more: Hyper-V starts first, and the main Windows system runs in a privileged VM.
All other VMs then have to go through the Hyper-V hypervisor, because nested virtualization is not that well supported. So even VMware is effectively just a front-end for Hyper-V.
Back when I ran Windows in a KVM VM for gaming, a lot of anti-cheat systems didn't take kindly to running in a virtualized environment.
Turning on Hyper-V to go KVM -> Hyper-V -> Windows effectively 'laundered' my VM signature enough to satisfy the anti-cheats, though the overall perf hit was ~10-15%.
You are right that Windows itself runs under Hyper-V as a guest when virtualisation-based security is enabled, and it even has paravirtual devices that are not massively different from VirtIO.
I think your statement about VMware Workstation is right as of today with recent versions too, although for a long time older versions would simply refuse to start if they detected that Hyper-V was enabled, presumably because they made assumptions about the host's virtualisation support.
It's not just security features that need Hyper-V. WSL2 (Linux on Windows) and the Windows Subsystem for Android (which runs sideloaded apps or anything from the Amazon Appstore) need it too. Both of them are super useful for me, as more and more things are iOS/Android app based only. Linux should speak for itself.
Is it possible to use Hyper-V directly? Like, could I boot into Linux but switch over to Windows with just a key press? I'm guessing no, since it's not in Microsoft's interest to allow that.
That's an interesting idea: running Hyper-V completely without Windows. I think it's not possible, at least not without a substantial amount of hacking.
But it's no problem to run Linux on Hyper-V. It's a hypervisor, so of course you can start nearly any operating system as a VM. You can also give the VM access to some hardware components. But I don't think it's possible to get a fully native Linux desktop experience, with the GPU/screen, keyboard and mouse connected to the host system.
Edit: this post seems to answer your question, not sure if it's correct: https://superuser.com/a/1531799
Not with Hyper-V, but the thing to be aware of is that it makes no difference which OS you initially “boot into”, since each essentially runs at the same level.
You can install ESXi (free) to do what you are asking, though.
> A lot of Windows features depend on Hyper-V, once enabled Windows is not booted directly any more, Hyper-V is started and the main Windows system runs in a privileged VM.
Got a source for this? Not that I don't believe you but other than for the Xbox I haven't seen/can't find any details about this.
> In addition, if you have Hyper-V enabled, those latency-sensitive, high-precision applications may also have issues running in the host. This is because with virtualization enabled, the host OS also runs on top of the Hyper-V virtualization layer, just as guest operating systems do. However, unlike guests, the host OS is special in that it has direct access to all the hardware, which means that applications with special hardware requirements can still run without issues in the host OS.
From https://learn.microsoft.com/en-us/virtualization/hyper-v-on-...
https://techcommunity.microsoft.com/t5/virtualization/virtua...
What are the performance implications of that?
Wait, it's all VMs? Always has been?! That is actual one-sentence horror.
It hasn't always been that way, nor is it necessarily now. If you enable Hyper-V, it will act as the hypervisor for your machine and boot Windows on top of it by default. Applications that use it (VMware, for instance, or Microsoft ones like WSL2) will add their own guests to the hypervisor.
It is not the default configuration. And it wasn't even installed before Windows 8.
Does anyone know the state of running Windows / Linux x86-64 virtualization on Apple Silicon? This article is super interesting but dances around the most important application for VMs on Mac.
For Linux, if you only need to run CLI tools, I've been very happy with Lima [0]. It runs x86-64 and ARM VMs using QEMU, but can also run ARM VMs using vz [1] (the Apple Virtualization framework [2]), which is very performant. Together with the colima project [3] you can easily start Docker/Podman/Kubernetes instances; it has totally replaced Docker Desktop for me.
For desktop environments (Linux/Windows) I've used UTM [4] with mixed success, though it's been almost a year since I last used it, so maybe it runs better now.
There's also Parallels, and people say it's a good product, but it's around USD/EUR 100, and I haven't tested it as I don't have that need.
And there's VMWare Fusion but... who likes VMWare? ;)
I do!
A correct solution is to remote into instances on dedicated (bare-metal) servers (use ECC memory and SSH with a good cipher for your transport, even across your local or VPN/WireGuard network!), perhaps using KVM/QEMU for macOS VMs (which, yes, requires a Mac Pro to be legal) and KVM/Firecracker for Linux VMs. You could do Windows VMs in KVM/QEMU, but you'll have less friction remoting into a separate Hyper-V box for that (one running Windows-specific security products). RDP over SSH for Windows; MPEG-VNC over SSH for macOS (and Wayland).
Why? Did you check out the Privacy Policy for Parallels? The last time I checked, it allowed them to remotely take anything they want from your systems. If I wanted that, I would just use a VPS running on someone else's machine in a cage somewhere.
VMware, by the way, is now Broadcom: they reportedly replaced the staff and ripped up the perpetual licensing model (it's subscription-only now). Even before that, Fusion product development had been shifted overseas, presumably to avoid paying higher-wage software engineers in Silicon Valley (what a brilliant way for a software company to innovate). Now a company in Singapore is wearing their skin, and the C-suite are out of jobs too.
Parallels has a bad desktop user experience using Linux because of poor support for continuous scrolling. Lots of users have complained on their forums for years, but they refuse to do anything about it. I bought it for one year, and regretted the experience. It works well with Windows though.
Generally, the experience with macOS guests is mediocre, thanks to Apple and their Virtualization framework, with many basic features missing for years.
I abandoned Parallels when they crippled the perpetually licensed version. "Pro" is only available via subscription for a few years now. Even before then, their store was disgusting with forced bundling of additional hostile products, and later they became optional but were still added to your cart by default.
My personal experience is that Windows 11 for ARM runs extremely well on Parallels. It includes an emulation layer for x86 apps that's completely invisible and just works. I can even still run Cakewalk, a program originally from the 90s, on my M1 Mac to edit midi files.
With that being said, this is just my view as someone who uses simple consumer oriented programs, and I'm not sure how well it'll work for more serious purposes.
Have you tried any Windows games on Apple Silicon? What kinds of Windows apps do you tend to run? I've used the macOS version of World of Warcraft on my '20 Mac Mini (16GB RAM) and even with utilities that adjust the mouse acceleration curve, I still find game play clunky. I was hoping I could run WoW under a VM and have it be somewhat performant.
YMMV, but from my own experiments on an M1 MacBook Air, it did not work well for me. I was trying to compile an Elixir codebase on x86-64 Alpine Linux, and Elixir does not support cross-compiling. I tried it in a Docker container, and in a Linux VM using OrbStack. Both approaches fail: it just segfaults, even on the first `mix compile` of a blank project.
This problem does not exist in ARM containers or VMs, as the same project compiles perfectly in an ARM Alpine Linux container/VM.
It's definitely not plug-and-play for all scenarios. If anyone knows workarounds, let me know.
Elixir compiles to beam files, like Erlang, right?
I was pretty sure beam files are bytecode and not platform specific?
That’s a bug in the underlying QEMU, which Lima uses [1].
[1] https://gitlab.com/qemu-project/qemu/-/issues/1034
Add `ENV ERL_FLAGS="+JPperf true"` to your Dockerfile and it will build just fine cross platform. The flag just changes some things during build time and won’t affect runtime performance.
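For anyone hitting the same segfault, a minimal sketch of where that flag goes (the base image tag and project layout here are illustrative placeholders, not from the thread):

```dockerfile
# Hypothetical Elixir build image; the load-bearing line is ENV ERL_FLAGS.
FROM elixir:1.15-alpine
# Workaround for the JIT segfault under emulation; per the comment above,
# this only changes build-time behaviour, not runtime performance.
ENV ERL_FLAGS="+JPperf true"
WORKDIR /app
COPY mix.exs mix.lock ./
RUN mix deps.get
COPY . .
RUN mix compile
```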
For anything that doesn't need a UI, you're FAR better off having some remote server than trying to emulate; it's far too slow for ARM64<->x86-64 in both directions.
Many things are just so much easier with a remote server/workstation somewhere than trying to deal with VM shenanigans.
ARM64 virtualised, on the other hand (Linux works great, macOS seems good(?), haven't tried Windows), is pretty great with UTM.
I’ve been able to do this (build x86/Ubuntu-targeted Elixir) with UTM on my M1 Mac. It ain’t fast, that’s for sure. But it works. Which is interesting, because sibling responses to your Lima experience claim it’s because of a QEMU “bug”, but UTM runs QEMU as well.
I regularly use OrbStack to develop for x64 Linux (including kernel development). It works transparently as an x64 Linux command line that uses Rosetta under the hood, so performance is reasonably good.
It can also run docker containers, apparently faster than the normal docker client, although I haven't used that feature much so I'm not sure.
You can use Rosetta to run x86 Linux binaries with good performance under a virtualised ARM Linux [0], but if you want to run fully x86 Windows or Linux you’ll need to emulate, not virtualise. It’s possible, but there’s a big performance hit as you might expect.
[0] https://developer.apple.com/documentation/virtualization/run...
I do my work on Apple Silicon laptops since the first M1 came out.
I use Docker Desktop, which can run amd64 images for me as well.
I run Splunk in it (a very enterprise product, written mostly in C++), and I was shocked to see that I was able to run it on Rosetta pretty much from day 1. Splunk worked on macOS with Rosetta from day 1, but had some issues in Docker running under QEMU; now that Docker uses Rosetta for Linux, I can run Splunk for Linux in Docker as well.
I use RedHat CodeReady Containers (local OpenShift), which works great as well.
And I use Parallels to run mostly headless Linux to run Kubernetes. And sometimes Windows just to look at it.
In the first two years of the Apple Silicon architecture I definitely had to find some workarounds to make things work. Right now I rely 100% on Apple Silicon, and deliver my software to large enterprise companies who use it on amd64/arm64 architectures.
Nothing graphical or all that intensive though, just some productivity tools I can't live without.
I run full AMD64 containers using Docker Desktop, which uses Rosetta under the hood. On my M1 Pro they were a bit slow (maybe 25% slower than my work laptop, which is a 12th gen. i9), but good enough in general. I have since upgraded to an M3 Max and AMD64 VMs seem to be a lot faster, maybe even faster than my 12th gen. i9. I really hope Apple doesn’t get rid of Rosetta support in VMs, ever. It’s just too useful.
Very slow using QEMU. You can run arm64 Linux and run x86_64 apps inside it using Rosetta, if your virtual machine uses Virtualization.framework (this does not work with QEMU, AFAIK). I suppose you can do the same with arm64 Windows and Microsoft's x86_64 translation technology, but I'm not really sure.
I wish there was a good GUI-based solution for Windows emulation via Rosetta. My use case isn’t development - it’s running software with an x64-only proprietary driver! (The Oculus remote link drivers, FWIW.) Fusion and Parallels don’t have that feature, so I’m wondering whether there are technical difficulties/blockers there.
The article is about virtualization, not emulating x86-64, so I'd disagree it's dancing around that. (Also, Windows and Linux have their own x86 emulations - if you boot virtualized Windows/ARM or Linux/ARM, you can get to the native emulation functionalities)
I'm a big windows guy, pretty much windows only. Recently bought a macbook. I love windows so much that I set up my shell on the mac to be powershell and use Windows Terminal to SSH into the mac.
I'm REALLY happy with Parallels Desktop. It runs any productivity or programming app I've needed, and it makes apps feel as if they're running natively on the Mac: you can just open up some Windows app and it pops up like a Mac one. It works amazingly fast, and I can develop x64, x86 and ARM apps in Visual Studio in my VM. Games don't work because of DRM, but I just use Parsec to stream my desktop if I want to game anyway, so it doesn't affect my workflow. And any game I would actually play while traveling is on the Mac natively.
For Linux I only run Kali, and it works well. I love how the VMs pop up as a "virtual desktop" so I can side-swipe to them, but Linux VMs don't have the native integration that Windows does. Once nested virtualization is enabled, I'll probably stick it in WSL; I personally don't use Linux that much since I think it's shit.
The only downside is that some asshole at Apple won't enable nested virtualization for the VMs, even though M2 and M3 have support for it on Linux.
I was able to get a fully functional Windows 11 install using UTM on my M1 MBP. This really helped with some Windows-only android tools with USB passthrough.
I've not tried Linux.
Note: I am not associated with UTM in any way, just a satisfied user.
[1] https://mac.getutm.app/
Yes, this is the best way to do it if possible, in my experience. I use some fairly heavy x86_64 apps in ARM Windows in Parallels, using Windows’ translation system (the Rosetta 2 equivalent), and it’s been quite good.
Trying to emulate the whole x86_64 version of an OS (I tried some Docker images that only came in x86 before finding instructions to rebuild them on the ARM base OS) has been super slow on the other hand. This is on a quite decent M2 Pro.
I wish every OS user logged into their own isolated VM of the OS. This way, Adobe could install all their bloatware and take control of their user, and I could keep ownership of my Apple’s computer.
That's a funny slip...
What's sad is that processes are already virtual machines; they just need a better permissions system. What's really sad is that for the most part those better permissions systems have been built (namespaces/cgroups on Linux, Gatekeeper on macOS), but nobody figured out how to expose them to end users before the business people figured out that there were trillions of dollars available if you charged rent to centrally manage it.
We were so close. Sigh.
Is this not essentially what docker did with cgroups? It’s incredibly tricky securing containers, I’m not at all confident process only sandboxes would be adequate.
Essentially shipping an entire OS with every app looks horribly inefficient to me. Especially if the only thing you need is sandboxing.
Containers would be a more appropriate solution, and even containers would be somewhat overkill. Simply using UNIX-style permissions and an application-specific UID could do. I think it is how it is done in Android.
Qubes does allow creating a VM for just about any program or service. But, in my experience, it suffers from latency. So, while fine for web browsing, it wasn't too keen on playing videos. YMMV of course, but Adobe products are already hogs without the emu layer.
I wanted to use a MacOS VM with Parallels for development. It is very easy to install and runs fast, but it's impossible to sign in with an Apple ID, which severely limits its use.
Apple are very weird about MacOS VMs.
It works fine.
One major point which this article failed to mention is that macOS only lets you run two VMs at the same time.
So if you were thinking of getting a mac mini to use as a build server, just buy the absolutely cheapest model.
Unlike on Linux systems, you're not allowed to have enough VMs to actually fully utilise the hardware. So buying headroom (resource wise) is a complete waste.
Sounds like we could potentially get some Windows ARM64 builds happening as well then. Might be a project for a future weekend. :)
My desktop PC is using libvirt+qemu (on an Arch host. I use Arch, btw) to PCI passthru my RTX 4090 GPU to a Windows guest. I installed the guest initially with emulated SATA for the main drive. Once Windows was up and running, I installed virtio-win and the guest is now using virtIO accelerated drivers for the network interface + main disk. I'm also sharing some filesystems using virtio-fs.
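For anyone wanting to replicate this setup, a sketch of the relevant libvirt domain XML fragments (the PCI address, image path and cache settings are illustrative; match the address to your own `lspci` output):

```xml
<!-- GPU handed to the guest via VFIO; address must match the host's lspci output -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>

<!-- Main disk on the virtio bus, valid once virtio-win drivers are installed -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none' io='native'/>
  <source file='/var/lib/libvirt/images/win11.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```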
Did you have to use any hacks to get a regular GTX/RTX card to pass through? Last time I tried this with ESXi, it was insanely difficult and poorly documented to get non-Quadro cards to do pass thru (admittedly on a Windows guest).
https://pve.proxmox.com/wiki/Windows_VirtIO_Drivers
> Running older versions of macOS in a VM enables users to run Intel-only apps long after Rosetta 2 support is dropped from the current macOS
Now if they'd offer that for x86 Windows guests... I mean, games are the obvious thing but I guess the architectural differences between Apple's PowerVR-family GPU and NV/AMD are just too large, but there's a ton of software that only has Windows binaries available and which I still need either an Intel macOS device or an outright Windows device to run.
Yes I know UTM exists but it's unusably slow and the Windows virtio drivers it ships are outright broken.
Even if you could get Windows working, what good would ARM Windows do?
Honestly, running virtualized x86_64 Steam (using something like FEX) under Asahi Linux and using Proton seems like the most fruitful way to play Windows games on Apple Silicon hardware (at least once the GPU drivers mature).
ARM Windows probably already does better than a future Asahi+Proton+FEX stack, in that it includes a Rosetta 2/FEX-like layer of its own, is otherwise native Windows without needing to fake that interface, and e.g. Parallels already has DX11 working through Metal without needing a future version of the Asahi drivers combined with the layer in Proton.
The downside to either approach is anticheats. Games without them can run great today, games with them can't run at all because they are kernel level x86 code and emulating the kernel architecture is too slow for games. It looks like Windows is doing another ARM push with higher end chips and less vendor exclusivity this time around - maybe that'll finally get enough market penetration to make this less of an issue going forward, at which point virtualized ARM Windows could be nearly fully viable.
There’s one obscure use case that won’t work, sadly - people who have to use proprietary binary only drivers! I’ve been through hell trying to get Oculus Link to work.
> > Running older versions of macOS in a VM enables users to run Intel-only apps long after Rosetta 2 support is dropped from the current macOS
> Now if they'd offer that for x86 Windows guests...
Hmm, the way I read it, they're running older ARM versions of macOS in the VMs, not x86 versions. The virtualization infrastructure doesn't do architecture translation; that is done in software by the OS running inside the VM.
As for x86 games... they run pretty well with CrossOver emulating x86 Windows, which is then translated by Rosetta 2 to ARM... is your head spinning yet?
If the OP's premise is partly about what happens when Rosetta 2 is no longer supported: at least an older ARM-based macOS VM could still run Apple Intel apps using the then-current, now obsolete Rosetta 2.
Never thought of that, but the same thing happened with PowerPC apps...
Tbh they should keep it: unlike PowerPC, which not many people used, Intel-based apps are everywhere. With both Intel and ARM covered, only one upcoming platform is missing. But supporting a translator, as implied in another post, is hard: new Intel/AMD CPU instructions may appear, and that's ignoring all the AMD and Nvidia GPU code, which is mostly not supported anyway.
My great confusion is why docker --platform linux/amd64 is so much faster (almost native performance) than x86 UTM VMs. Can Docker somehow leverage Rosetta?
Yes, Docker can leverage Rosetta.
I haven't used Docker Desktop in a bit (b/c I end up doing my work in a VM on Azure since I work on Azure), but not too long ago there was an option to enable it in the settings panel, not sure if it's default or not these days.
Any Linux VM can use Rosetta [1]; you just need to enable it when booting the VM.
This creates a shared directory in the VM that you need to mount, and then you register Rosetta with binfmt_misc (the same way Docker uses QEMU).
[1] https://developer.apple.com/documentation/virtualization/run...
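Concretely, the two steps look something like this. A hedged sketch: the `rosetta` mount tag and mount point are the conventions from Apple's docs, and the magic/mask are the standard x86-64 ELF values that qemu-user also registers; verify against Apple's documentation before relying on it.

```shell
# Mount the virtiofs share that Virtualization.framework exposes as "rosetta"
sudo mkdir -p /media/rosetta
sudo mount -t virtiofs rosetta /media/rosetta

# Tell binfmt_misc to hand x86-64 ELF binaries to the Rosetta runtime
echo ':rosetta:M::\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x3e\x00:\xff\xff\xff\xff\xff\xfe\xfe\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/media/rosetta/rosetta:CF' \
  | sudo tee /proc/sys/fs/binfmt_misc/register
```

After that, running an x86-64 binary inside the VM should go through Rosetta transparently.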
> Any Linux VM hosted under Virtualization.framework can use Apple's Rosetta 2 for Linux translation binary via binfmt_misc
FTFY
You don't need the mount; it just exposes the Linux binary stored on macOS, which you can copy over. The binary does check that it's being run under Virtualization.framework, although some folks managed to hack that away (IIRC to run it on AWS or whatever).
If you use NixOS you can simply enable https://search.nixos.org/options?channel=23.11&show=virtuali...
Docker runs an ARM kernel and uses qemu in user mode on the individual binary level. Anything CPU-bound is emulated, but as soon as you do a system call, you’re back in native land, so I/O bound stuff should run decently.
Note that UTM also supports rosetta. Boot up an aarch64 image with Rosetta support and then load the mounted binfmt handler. Now you can run x86 binaries on your aarch64 UTM VM. Works flawlessly.
macOS on Apple Silicon does not allow whole-VM Rosetta.
You must run arm64 macOS or Linux VMs, and those VMs can then run x86_64 binaries via Rosetta. Apple has documented this.
Running an x86_64 virtual machine on macOS requires software emulation, which is why it is so slow. Docker sets things up correctly: the Linux VM it uses is arm64, but the binaries in the containers are x86_64, so Rosetta can be used on those binaries.