Everyone is talking about running a separate Linux dev environment, but I'd actually love to run a separate macOS dev environment in a clean way (i.e. without messing with partitions for dualboot, especially on the M1).
I get my stuff from Homebrew, just Rails + MySQL, no Docker, no fancy stuff. I'd love to have a fast macOS VM to run software I don't trust, like Zoom or Skype, and a second one to run my work projects so they don't spill over into my personal stuff. But AFAIK the virtualization story is still pretty incomplete on the M1 (or is there a way to run an arm64 macOS VM without a gigantic performance hit?).
If I recall correctly, VMware Fusion and Parallels can run macOS VMs with little or no speed hit under Big Sur. Big Sur added GPU virtualization support, so VMs aren't slow; previous macOS versions lacked this.
Just tested the latest Parallels beta on an M1 and it couldn't find an OS on a Big Sur ISO that I'd just created. It doesn't appear to support virtualizing macOS on M1 yet.
It's not smooth enough for me to consider using regularly for most work, but you can set up macOS VMs with Parallels/Fusion inside macOS, since Apple allows it on their hardware (...or a hackintosh).
It's not as easy to get working as a Windows VM, because I believe it uses the OS recovery image, but it's not that hard to set up and it works well. I've used it in the past to test my dotfiles setup on a perfectly clean install.
I still haven't tried it, but what if you downloaded the iOS apps on the M1? Wouldn't it be more contained to run those apps through the phone "emulator"?
You're absolutely right, but I would assume Zoom has disallowed its iOS client from being installed on macOS. And since Apple closed the loophole that allowed sideloading, we're out of luck on that front.
I think running your dev environment in a VM is the future on all platforms.
As developers we trust so many different libraries. And it is important that they are safe when used in production code.
But we shouldn't have to worry about accidentally installing a library which uploads our emails or our browser data. By working from a VM we can prevent that.
The worst a malicious library can do from a VM is upload our SSH keys or source code (which is still bad).
I hear what you're saying, but the last time I worked through a Linux VM on my Mac laptop, everything got much slower. Our Node.js service took 2-3x longer to start up or to build; I think the issue was filesystem overhead for some reason. The future might involve working from a VM when we have CPU cycles to spare, but right now I need every iota of speed my computer has for my Rust compiler.
I'd much rather we solve the security problems with better local sandboxing for software, like how it works on our phones. That would help end users as well, and it would stop crypto ransomware and all sorts of other attacks. Or, alternatively, run my dev tools from a Solaris zone or a FreeBSD jail or something, both of which have no performance impact.
> the last time I worked through a Linux VM on my Mac laptop, everything got much slower. Our Node.js service took 2-3x longer to start up or to build. I think the issue was filesystem overhead for some reason.
Docker on Mac has a lot of IO overhead. This is mentioned in the article:
> The way I made it work is by having a full “dev environment” running virtually. That means that I check out repositories on the virtual disk and run everything from there. This has the slight inconvenience that I can’t easily access those files with Finder, but the upside is that there is no noticeable IO latency issues like when running Docker for Mac.
If you just want improved performance with bind mounts instead, you can use the :cached or :delegated flags.
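For example (the image and paths here are made up; the flags are specific to Docker for Mac's bind mounts):

```
# "delegated": the container's view of the mount is authoritative and the
# host may briefly lag behind, which cuts a lot of the write-latency overhead.
# "cached" is the reverse trade-off (host authoritative, faster reads).
docker run -v "$(pwd)":/usr/src/app:delegated node:14 npm run build
```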
Fedora Toolbox is quite interesting from this perspective. It does not provide a proper security boundary, since it shares the home directory, but it is really nice to make ad-hoc development environments. Since it is built on podman/buildah, it does not require the Docker daemon and can be used by regular users.
A properly secured unprivileged Linux container is not much worse than a VM from a security point of view, but its performance impact is minimal. The drawback is that one cannot use Mac or Windows as a host, but as long as one is OK with running Linux on the machine and accessing, say, Windows occasionally via a VM, this is a very nice setup.
My first attempt at this configuration was using a podman container (my laptop runs Fedora).
It worked well, except that I occasionally want to run a container within my dev-environment. Running a container inside a container is possible, but not easy.
I settled on running a Vagrant Libvirt VM.
I do not know what the performance impact is, but I have not noticed either my laptop or the VM being slow.
I'm just now learning about the container technologies that are alternatives to Docker. Specifically LXC, in case that's what you had in mind in your comment.
Do you know of a set of settings or tips to follow for making containers as safe as a VM?
I believe the real problem is that devs install random binaries or modules without any real concern for what malicious activities, even unintentional ones, these artifacts may perform. Never mind that it's on the dev's machine; they move this stuff into production without proper security vetting.
Many devs just blindly install things from npm, copy random snippets from GitHub gists, execute remote shell scripts via curl that they just copy and paste into the terminal because they read it in some blog post, and on and on.
Microsoft tried to push for a better model with Windows Store apps, but the world reacted with apathy. Now they seem to be leveraging their built-in hypervisor instead, for example with Sandbox [1] for temporary VMs with minimal overhead.
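For the curious, a Sandbox instance can be configured with a small .wsb XML file. A fragment along these lines (host folder path is illustrative) maps a folder in read-only and disables networking for untrusted code:

```
<Configuration>
  <!-- Map a host folder into the sandbox, read-only -->
  <MappedFolders>
    <MappedFolder>
      <HostFolder>C:\Users\me\Downloads</HostFolder>
      <ReadOnly>true</ReadOnly>
    </MappedFolder>
  </MappedFolders>
  <!-- No network access inside the sandbox -->
  <Networking>Disable</Networking>
</Configuration>
```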
I believe VMs are not just for security; they might be the only way to run a completely different operating system (like macOS on Linux, or Linux on Windows, etc.).
Additionally, VMs might be a great help to complicated debugging. You can crash a VM without taking down your desktop. I'm not sure about things like kernel debugging, but it might be easier in a VM.
Actually, I think VMs for dev are evil. I hope it's a stopgap measure until tooling like Nix catches up enough. Running an entire separate OS just for development is completely bonkers to me.
If you think that's bonkers, I'll give you an even worse idea: a dedicated desktop machine. I have a Dell OptiPlex with an Intel i3-2100, 8GB RAM and a 1TB Samsung 860 Evo SSD. No display. Fedora runs like a dream with dnf-automatic. Use Cockpit if you'd like.
You can use your Windows or Mac laptop with Visual Studio Code to do web development. `ng serve` works as if you ran it from your laptop.
The biggest drawback I can think of is if you ever needed to leave the house. Thankfully, I have no life so I don't have this problem.
Running desktop applications inside a web browser is bonkers too. And the cloud. It's better to embrace the madness. I recommend watching Dr Strangelove.
I have it easy, I think. Apart from some RPi projects, my Linux development is strictly limited to high-performance business servers. I use C++ with a couple of multiplatform libraries for such development. So I simply develop and do the main testing on Windows using Microsoft Visual Studio C++ (the big one, not VS Code). When the time comes for a release, I just check out the project on Linux, run the build and tests, and that is it. I do have a Linux VM on my development computer (VMware, full Linux OS with the GUI and dev environments) in case Linux tests fail due to some compiler differences or whatever. During the last 3 years I've had to use it maybe twice for that purpose. I mostly use VMs to test my server setup-from-scratch scripts.
Nix has been something I’ve played with for a while, but increasingly I’ve been using it. I would love to see Nix working on Windows - there was a project to make that happen but I don’t know what the current status is.
I've been reluctant to contribute to nodejs (e.g. electron) projects for some time, because I just don't want to run npm on a computer with any kind of remotely private data.
Lately there were just too many itches to scratch, so I went for a VM replicating my normal setup (dotfiles etc.), and I just use x2go, locally. Quick and dirty setup which is good enough when used infrequently.
I've been playing with this stuff on and off for the past few weekends, getting an environment set up on my M1 air.
I ended up going with a VM, as it still allows (IIRC) the thin client to be thinner. There are definitely bits of containers I miss; for example, I'm back to keeping notes in a markdown file for commands that would otherwise go into a Dockerfile. But I'm also not having to do weird things to get something like my shell history to persist.
> I think running your dev environment in a VM is the future on all platforms.
We probably have a long way to go before we get there, and it comes with its own set of challenges and usability quirks even if the technical implementation is good.
For example, 8 years ago I used to run Windows 7 with Xubuntu running in a graphical VMware VM using Unity mode[0], basically a way to seamlessly run graphical Linux apps in Windows. Each GUI app you launched from the Linux VM would have its own floating window that you could move around like any other Windows window. As an aside, this feature has been removed from VMware for years now when it comes to Linux guests.
It worked well enough back then; I spent 99% of my time in that VM (browser, code editor, everything) and only used Windows for recording / editing videos and playing games, to avoid having to dual boot.
But even with VMware's really good disk performance there were performance issues at times, and you're splitting your memory between your main system and your VM, which isn't that efficient. Then there are little quirks, like your main OS not fully being able to integrate with files and apps from the VM: you have to do hacky things to get apps to launch from a taskbar, search doesn't work because your stuff is in a VM, etc. Plus you always feel like you're split between 2 worlds, the main OS and the VM. It doesn't feel nice and cohesive.
To a lesser extent nowadays we have WSL 2 on Windows which is what I use. It solves a lot of the VM problems from above and running an X-server lets you run graphical apps very well, but you still feel like you're running 2 operating systems and the user experience suffers because you don't feel like you're running 1 streamlined OS.
A prime example is having to put all of your files in WSL 2's file system to get good performance, but having certain types of files there is an inconvenience, or you may not want to because it doesn't make sense to put 100GB of storage files on your SSD. That happened to me: I have a podcast site which is mostly code, except for a ton of raw WAV recordings + MP3s. Instead of just having a git-ignored directory in my project, I had to create a second directory outside of WSL to store these files. There are many other examples like this too.
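One symlink variant of that workaround, sketched with made-up paths (not what the author did, who used a fully separate directory): keep the heavy media on the Windows side, where WSL 2 can still reach it under /mnt/c, and link it into the project tree so tooling sees one directory.

```
# Bulk media lives on the Windows filesystem (slow from WSL, but rarely read)
mkdir -p /mnt/c/podcast-media

# Link it into the fast, WSL-native project tree
ln -s /mnt/c/podcast-media ~/src/podcast-site/media

# Keep git from tracking it
echo "media" >> ~/src/podcast-site/.gitignore
```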
I don't know what the Mac story is like, but I would imagine at minimum you're dealing with files being split into 2 worlds and will experience the unfriendly split OS feeling overall. Does Parallels let you seamlessly run floating windows across your VM and macOS?
My new work laptop runs Windows, so I'm interested in WSL 2. I gather that it has good integration with Windows (e.g. you can type `notepad` and Notepad will open), which is convenient but removes any security boundary.
> Does Parallels let you seamlessly run floating windows across your VM and macOS?
I agree that not everything is as convenient.
My papercuts were not being able to type `code .` to open a new VSCode window, and not being able to use the new-tab shortcut in my terminal to open a new tab in the current directory.
I've made a little project to solve both those issues for me https://github.com/ccouzens/ssh-nicety
It works, but it's fairly bespoke to my setup. It uses Unix domain sockets (which probably excludes Windows). The only new terminal it can launch at the moment is gnome-terminal.
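For anyone curious about the socket part, here's a minimal sketch of the idea: a listener on the host side accepts one-line commands over a Unix domain socket, so a shell inside the VM can ask the host to do things like open an editor window. Both ends run in one process here purely for illustration, and the socket name and wire format are made up, not taken from the actual project.

```python
import os
import socket
import tempfile
import threading

# Socket file shared between "host" and "guest" (path is illustrative).
sock_path = os.path.join(tempfile.mkdtemp(), "nicety.sock")

# Host side: bind and listen before the guest tries to connect.
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(sock_path)
srv.listen(1)

received = []

def host_side():
    conn, _ = srv.accept()
    with conn:
        # A real tool would validate this and spawn e.g. `code <dir>` locally.
        received.append(conn.recv(4096).decode())

t = threading.Thread(target=host_side)
t.start()

# "Guest" side: request that the host open an editor in a project directory.
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as cli:
    cli.connect(sock_path)
    cli.sendall(b"code /home/dev/project")

t.join()
srv.close()
print(received[0])
```

The interesting design point is that the socket file can be shared into the VM (e.g. via a shared folder or SSH socket forwarding), which is what makes the "guest asks host" direction work without opening a network port.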
I agree. We need something like Firecracker with quick boot times, and something like Bottlerocket (an immutable OS) as the host. That would help my workflows very much.
Personally, pushing the edit-compile-run cycle time as low as possible has always been the reason I've stayed away from dev VMs. For 99% of computer uses a VM is fast enough, but unfortunately for many programming tasks it is not.
I’ve been doing this with minor variations for a while now (from my iPad, from my Mac, my netbook, etc.) towards VMs in various places (your favorite flavor of cloud, my favorite closet, etc.).
It has become remarkably seamless and trivial to switch any of the local/remote pairs over time, and definitely cleaner than managing various app runtimes on my local machines (I have cloud-config templates to bootstrap fresh Go, Java and Node boxes as required).
edit: forgot to mention I'm posting this from another of those combos, a Windows VM I remote to from my iPad whenever I need a desktop browser
Can you explain more about your setup, in particular the iPad part? What apps are you running on your iPad to facilitate dev work? And when you say remote instances, do you mean something like a droplet/VPS?
My dream is to code web apps from my iPad from the couch/bed as I sit at a big desk and monitor all day for work and I just want to chill in the evenings on something smaller and more comfortable.
Sure. I use Jump Desktop for RDP/VNC over SSH (with a Citrix X1 mouse) and Blink/Prompt for tmux sessions. A typical setup of mine has a remote container with Xrdp, Firefox, VS Code and barely enough window management to do full-screen windows and workspaces (typically blackbox).
Remotes can be anything: I have a KVM host at home (that I remote to from my Mac for Docker dev) and plenty of Azure VMs.
Not OP but I use an iPad occasionally to remote in to a linux box and work on projects.
I use Blink for the terminal app and connect using mosh instead of ssh. I found that mosh handles the connection (and reconnection) way better, since iPadOS is pretty aggressive about killing the terminal app if I switch to a different app. I also use tmux on the server and just detach when I'm done, in case I want to pick the session up on my laptop or desktop. Overall it works great; my only issue is that the 10.9" iPad screen is a _little_ bit too small for my liking, so I don't work like this that much. If I had the 12.9" iPad it would probably be something I use daily.
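If it helps anyone replicate this, the flow is roughly as follows (host and session names are placeholders):

```
# From the iPad (Blink): mosh survives IP changes and app suspensions;
# tmux keeps the actual session alive on the server.
# "new-session -A" attaches to "main" if it exists, creates it otherwise.
mosh dev@devbox -- tmux new-session -A -s main

# Later, from a laptop or desktop, pick the same session back up:
ssh dev@devbox
tmux attach -t main
```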
I wrote pojde[1] a while back to solve that exact usecase, using my iPad (or any device with a web browser) to code from anywhere. It creates multiple isolated code-server instances on a remote server and handles toolchain installs, updates, authentication and TLS for you :)
To give an additional data point for people who are interested how a setup like this performs for daily use:
I am currently running Parallels Tech Preview on the MacBook Air M1 and primarily use PyCharm (remote interpreter and deployment to the VM). The whole thing works better than expected considering it’s still a preview release. Battery lasts around 12 hours, sometimes an hour or so more depending on what else I run.
I am currently working on a Django app. When I save while the debug server is running, I can Cmd-Tab to my REST client and make an API request, and the change is already deployed and the server restarted. Despite dealing with a VM, the whole thing is just fast.
If you don't absolutely need a local VM, I've found it much nicer to have a beefy EC2 instance be the Linux VM that you connect to in order to work on Linux on x86.
Recently I’ve been doing this with VSCode which has a remote dev mode that works amazingly well. Before that I was just using ssh and tmux/screen which, as we know, also works and has worked for decades.
In our basic testing on M1 performance this week we’ve found that an arm vm on the M1 runs about 2x as fast as a c6g.2xlarge graviton2 instance. So you’re probably looking at about $0.50 / hr to compete with the Mac.
I am curious what you find "much nicer" about using an EC2 instance compared to a local VM.
When running remote VMs I usually run them on my ESXi box in the basement and VPN back home when traveling. This is especially nice when a project needs more resources than whatever device I’m working on has to offer. But beside this very specific use case I haven’t personally found any advantage of this setup.
> I had two requirements for developing that I wanted to achieve: macOS UI, Linux-based dev environment
What exactly is meant by a Linux-based dev environment? Seems like the idea is to run the whole dev environment on a virtual disk in a VM. I'm puzzled, but OK. It then goes on to set up Ubuntu Server in this VM, which is then used to host the dev environment.
Wouldn't simply running a Docker instance be less cumbersome, far more resource-efficient, and quicker to iterate on than literally installing an OS on a virtual disk?
---
To summarize, unless I missed something, this post explains that it is possible to run a VM on macOS. Add "M1" and it's the top post on Hacker News? What's going on here?
That this is 'M1' is relevant for me personally, as my current Mac is dead old, I 'need' a Mac for iOS development, the Intel ones are a dead end and seem to run hot often, and it is unclear if all my dev needs will be met by the M1.
Any piece of information that untangles this mess is helpful to me. Of course this may not be the same for others, but it could be 'what's going on here'.
We’re going through the transition at the moment - tried to find info online but decided the only route was to try it on a real machine to see what works / doesn’t work.
I've built an Ubuntu 18 environment running through UTM (https://getutm.app/ - running QEMU). It took a bit of tomfoolery: I had to install using a console-only display and then flick back to full graphics to get the machine to boot.
I use port forwarding to talk to the machine; I haven't figured out a way to do bridged mode like I can with VirtualBox.
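For anyone trying the same thing: I believe UTM's emulated-VLAN networking is QEMU user-mode networking underneath, so host port forwards look like this at the QEMU level (other flags abbreviated; the ports are examples):

```
# Forward host port 2222 to guest SSH, and 3000 to a dev server.
qemu-system-aarch64 ... \
  -netdev user,id=net0,hostfwd=tcp::2222-:22,hostfwd=tcp::3000-:3000 \
  -device virtio-net-pci,netdev=net0

# Then, from the host:
ssh -p 2222 user@localhost
```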
We’ve got a bunch of crazy dependencies that I’m in the process of rebuilding. Most seem to be ok. There’s a third party one we’re a bit trapped with but we’re running that emulated x86 within the vm. You can also use docker the same way within the vm.
Performance wise the arm vm is blazing fast. Seems to be about 20x faster than an x86 vm on my old MacBook 2015. It’s about 2x the speed of an 8 core graviton2. When running emulated x86 code, the speed drops to about 1/4 the speed of my old 2015 MacBook. Not ideal but in our case we’ve only got a single non arm dependency and it’s not used often, so it works for us.
It’s a bit of a leap, and if you have more crazy binary dependencies like we do, you will have do to a little work to get things running.
Having said that, the machine itself is _amazing_. It’s a real joy to be on a machine with this level of responsiveness.
> Wouldn't simply running a Docker instance be less cumbersome, far more resource-efficient, and quicker to iterate on than literally installing an OS on a virtual disk?
On Mac, Docker works by installing a VM, so the two aren't so different.
I (not the author) prefer using a VM as a development environment, because at some point I'll want to run a container and nested containers are tedious.
> Seems like the idea is to run the whole dev environment is in a virtual disk in a VM
Pretty much.
I had bad experiences running Docker directly on macOS. The IO latency was unbearable. I know they are working hard on it, so maybe it's better now, but this setup works well for me.
I've been doing something like this for about 3 months with very good success. This is also pretty much the only "complete" solution I could come up with that doesn't involve duct taping 3-4 different things and keeping them all in my head.
A simpler solution I had: one Linux VM, the SSH connection plugin in VSCode, and a simple 4-line SSH config file (~/.ssh/config) does magic.
The LocalForwards are key to setting up any tunnels I need working locally; you can tunnel as many ports as you need.
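For reference, a hypothetical ~/.ssh/config entry along those lines (the host name, address, and ports are all illustrative, not the actual 4-line file):

```
Host devvm
    HostName 192.168.64.2
    User dev
    # Each LocalForward makes a service in the VM reachable on localhost:
    LocalForward 3000 localhost:3000
    LocalForward 5432 localhost:5432
```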
I use the terminal inside VSCode, which means I can manage docker(-compose), microk8s, etc., and anything I spin up is reachable from my local host during testing.
I am looking for a quiet and fast machine for development. I've been trying to find a reasonable AMD laptop, but all are out of stock, and I think those will still have fans buzzing under heavier load. I personally hate Apple's practices and I never clicked with macOS (I was forced to work with it for many years), but if I could install Linux on the M1, it would be hard to swallow, but I might consider using it. My Intel laptop has fans buzzing now even when it is idle. It drives me crazy.
Do you absolutely have to have a laptop? I realized years ago that I spend 99% of my time in a single place and have since built custom desktop systems for my primary development machine. They are faster than any laptop I ever owned, quieter and much easier to live with. And because I can put together a system to my own specifications I can end up with something that works perfectly with Linux. I haven't bought a Mac in years and even with their new ARM hardware I don't see enough of a benefit to go back.
I'm in a similar situation. I want a new x86 laptop for development, but it's not super urgent.
Some laptop built around the new Ryzen 9 5900HS CPU [0] seemed like an obvious choice. But although AMD seems to have released it, I'm having trouble finding any actual laptops that offer it. Maybe I'm just not looking hard enough?
I got an HP Omen 15 for $1200 and added some memory: 32GB with 8 cores / 16 threads and a nice 144Hz IPS screen. I run VirtualBox Ubuntu 20.04 VMs with Docker inside them and connect with VS Code SSH, and I have no performance complaints.
Not sure about Parallels.
https://code.visualstudio.com/docs/remote/remote-overview
Virtualized Linux on x86, AND being able to run VSCode on your Mac for great fonts, WiFi, screen, etc., without having to fuss with drivers.
Of course ssh and tmux also work well for this purpose.
If so, why do you need the Rust compiler in the first place? Maybe you don't? Maybe it's the wrong compiler?
A better solution would be a desktop OS with proper application sandboxing. macOS is taking many steps in that direction.
Linux, as usual, has a multitude of solutions, all of them problematic (AppArmor, SELinux, Firejail, Snap, Flatpak).
Anyway, just my worthless < 2 cents.
1: https://docs.microsoft.com/en-us/windows/security/threat-pro...
Does it solve the security problem by preventing access to the majority of your home directory?
Or solve it by only allowing you to install vetted libraries?
My ideal setup would probably be closer to https://blog.jessfraz.com/post/docker-containers-on-the-desk..., but it's more setup than I could be bothered with at the time. Maybe one day.
Are containers a hack that will go away as VMs become lightweight or will containers replace VMs?
I run proxmox, and when I first set things up I used VMs, but over time I moved most server kinds of things to containers.
EDIT: Docker is a special thing - creating an entire environment from one Dockerfile is pretty powerful.
[0]: Here's a really old video of that setup https://nickjanetakis.com/blog/create-an-awesome-linux-devel...
> Does Parallels let you seamlessly run floating windows across your VM and macOS?
I don't use Parallels, or Mac. But I believe so- they call it Coherence https://kb.parallels.com/4670
[1] https://github.com/pojntfx/pojde
I am currently running Parallels Tech Preview on the MacBook Air M1 and primarily use PyCharm (remote interpreter and deployment to the VM). The whole thing works better than expected considering it’s still a preview release. Battery lasts around 12 hours, sometimes an hour or so more depending on what else I run.
I am currently working on a Django app. When I save while the debug server is running, I can command-tab to my REST client and make an API request, and the change has already been deployed and the server restarted. Despite going through a VM, the whole thing is just fast.
Recently I’ve been doing this with VSCode which has a remote dev mode that works amazingly well. Before that I was just using ssh and tmux/screen which, as we know, also works and has worked for decades.
When running remote VMs I usually run them on my ESXi box in the basement and VPN back home when traveling. This is especially nice when a project needs more resources than whatever device I'm working on has to offer. But besides this very specific use case I haven't personally found any advantage to this setup.
> I had two requirements for developing that I wanted to achieve: macOS UI, Linux-based dev environment
What exactly is meant by a Linux-based dev environment? Seems like the idea is to run the whole dev environment in a virtual disk in a VM. I'm puzzled, but ok. It then goes on to set up Ubuntu Server in this VM, which is then used to host the dev environment.
Wouldn't simply running a Docker container be less cumbersome, far more resource-efficient, and quicker to iterate with than literally installing an OS onto a virtual disk?
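For comparison, the Docker version of that would be roughly this (a sketch, assuming Docker is installed and the project lives in the current directory):

```shell
# Mount the current directory into a stock Ubuntu container and open a shell,
# instead of installing a whole OS onto a virtual disk.
docker run --rm -it -v "$PWD":/work -w /work ubuntu:20.04 bash
```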
---
To summarize, unless I missed something, this looks to explain that it is possible to run a VM on macOS. Add "M1" and it's the top post on Hacker News? What's going on here?
Any piece of information that untangles this mess is helpful to me. Of course this may not be the same for others, but it could be 'what's going on here'.
I've built an Ubuntu 18 environment running through UTM (https://getutm.app/ - running qemu). It took a bit of tomfoolery: I had to install using a console-only display and then flick back to full graphics to get the machine to boot.
I use port forwarding to talk to the machine; I haven't figured out a way to do bridged mode like I can with VirtualBox.
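For reference, UTM sits on top of qemu, whose user-mode networking supports host-port forwards. An illustrative sketch (the ports are hypothetical and the machine-specific flags are elided):

```shell
# Forward host port 2222 to the guest's SSH port 22 (user-mode networking).
qemu-system-aarch64 ... \
  -netdev user,id=net0,hostfwd=tcp::2222-:22 \
  -device virtio-net-pci,netdev=net0

# Then talk to the guest through the forwarded port:
ssh -p 2222 user@127.0.0.1
```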
We’ve got a bunch of crazy dependencies that I’m in the process of rebuilding. Most seem to be ok. There’s a third-party one we’re a bit trapped with, but we’re running that as emulated x86 within the VM. You can also use Docker the same way within the VM.
Performance-wise, the ARM VM is blazing fast. It seems to be about 20x faster than an x86 VM on my old 2015 MacBook, and about 2x the speed of an 8-core Graviton2. When running emulated x86 code, the speed drops to about 1/4 that of my old 2015 MacBook. Not ideal, but in our case we’ve only got a single non-ARM dependency and it’s not used often, so it works for us.
It’s a bit of a leap, and if you have more crazy binary dependencies like we do, you will have to do a little work to get things running.
Having said that, the machine itself is _amazing_. It’s a real joy to be on a machine with this level of responsiveness.
As an alternative, could you use React Native and borrow an Apple product on compile days?
On Mac, Docker works by installing a VM, so the two aren't so different.
I (not the author) prefer using a VM as a development environment, because at some point I'll want to run a container and nested containers are tedious.
All the Docker pseudo-ports just run a Linux VM on the host OS and set up Docker inside it, for you.
Pretty much.
I had bad experiences running Docker directly on macOS. The IO latency was unbearable. I know they are working hard on it, so maybe it's better now, but this setup works well for me.
What exactly are you guys running that causes it to be "unbearable"?
Docker on Mac can be dog slow. This is appealing to me.
A simpler solution I had: one Linux VM, the SSH connection plugin in VSCode, and a simple 4-line SSH config file (~/.ssh/config) does the magic.
Here's my config file -
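Along these lines (the host name, address, and port here are placeholders):

```
Host devbox
    HostName 192.168.64.2
    User dev
    LocalForward 3000 localhost:3000
```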
The LocalForwards are key to setting up any tunnels I need locally; you can tunnel as many ports as you need. I use the terminal inside VSCode, which means I can manage docker(-compose), microk8s, etc., and anything I spin up is accessible from my local host during testing.
https://news.ycombinator.com/item?id=26746983
Some laptop built around the new Ryzen 9 5900HS cpu [0] seemed like an obvious choice. But although it seems like AMD has released it, I'm having trouble finding any actual laptops that have it as an option. Maybe I'm just not looking hard enough?
[0] https://www.amd.com/en/products/apu/amd-ryzen-9-5900hs
UPDATE: Maybe I just needed to wait a little longer: [1]
[1] https://www.ultrabookreview.com/35985-amd-ryzen-9-laptops/
1. Low-end Chromebook (good battery life), remote server, VPN.
2. High-end Chromebook (there are a few i3 and i5 models with 8GB RAM), Linux environment.
Are you often in locations where you don't have Internet access?