I used to do all my development in VMs but found they were too heavyweight, too slow, and too difficult to keep "clean". Docker doesn't give you the same isolation at the resource level, which is a major benefit of VMs, but I also found this was more of a hindrance than a help. For me, sharing the artifacts of development or the dependencies (i.e. building Docker images or using containers for parts of the system not under development) is where the value is, vs a completely isolated, reproducible software/hardware environment. A baseline VM is still great for reproducing production issues, and avoids the "Works on my machine" badge.
I followed the given instructions on macOS 10.15.7, after brew installs of Vagrant and VirtualBox, but running vagrant up resulted in
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
==> default: Forwarding ports...
default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
default: Warning: Remote connection disconnect. Retrying...
...
Timed out while waiting for the machine to boot. This means that Vagrant was unable to communicate with the guest machine within the configured ("config.vm.boot_timeout" value) time period.
I cannot find any config.vm.boot_timeout in the Vagrantfile. Running vagrant up again prints
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'debian/bookworm64' version '12.20240212.1' is up to date...
==> default: Machine 'default' has a post `vagrant up` message. This is a message
==> default: from the creator of the Vagrantfile, and not from Vagrant itself:
==> default:
==> default: Vanilla Debian box. See https://app.vagrantup.com/debian for help and bug reports
almost immediately. Looks like timing out was not the real issue. I can run vagrant ssh now.
Realizing there's no cc and no git, I try
$ apt-get update
Reading package lists... Done
E: List directory /var/lib/apt/lists/partial is missing. - Acquire (13: Permission denied)
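Presumably this is just a privilege problem: apt needs root, and the Debian Vagrant boxes give the vagrant user passwordless sudo. Something like the following should work, with build-essential being my guess at the package that supplies cc:

$ sudo apt-get update
$ sudo apt-get install -y build-essential git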
I was a heavy Vagrant user many years ago, but I thought the project was mostly dead due to changes in business model or focus from HashiCorp, IIRC, and of course the popularity of Docker.
Hmm, still some challenges ahead... Also, isn’t Vagrant dependent on VirtualBox? Because the latter doesn’t run on ARM Macs.
I used Vagrant and VMs heavily for customer projects between 2013 and 2016.
The added benefit was mainly some sort of workspace isolation, since you could simply “vagrant in” to your different projects via WebStorm/PhpStorm in parallel.
I was given a MacBook Pro 13” with a then-massive 16 GB exactly for this reason: to handle VMs via Vagrant.
Before that, everything was kind of tricky and fragile, mostly relying on a LAMP stack without isolation.
Vagrant was fantastic at the time for the job.
What kind of sucks about ARM Macs is that the pretty darn good x86 emulation seems to be hidden behind a proprietary wall, at least as far as I know.
It sure would be nice if Apple provided a nice interface into their emulation for various projects like VirtualBox / Bochs / PCem / QEMU / DOSBox / VMware / WINE to plug into.
It also kind of sucks that the entire overall body of PC / DOS / Windows / x86 emulation and virtualization is locked in all these silos despite the open source nature. The problem probably is that there are so many gotchas to document and cross-annotate across the projects that it basically is impossible without some dedicated team of very talented technical documenters.
You can do it; Asahi Linux has access to a binary blob of Rosetta, or something like that, IIRC.
The thing is, I don’t really care. I was worried I’d need to run Windows or x86 Linux. Turns out, I don’t. I only had to run Windows once, for a few moments, in the 3 years since I got an Apple Silicon Mac. Surprising, actually.
Added a footnote suggesting a workaround with Parallels for any confused ARM Mac users out there, searching desperately for this fabled VirtualBox program. Appreciate the catch.
I'd say Nix is the more deterministic way to get all, and only, the things you absolutely need. Docker has the same issue as Vagrant: you'll get a different Debian depending on the day you init the VM, run apt, and so on.
I deem that you are correct! I have investigated NixOS in a Vagrant + VirtualBox combo very similar to what I did in this post, and yes, it is a much more deterministic solution to this idea.
If the people you are writing a tutorial for can be expected to be familiar with Nix, this is probably a much more future-proofed solution. That's still a small camp of people at the moment, but it does include the creator of Vagrant himself, Mitchell Hashimoto. I maintain that a specific Debian version is a better "lingua franca" for a general audience - that may change in coming years, of course.
I have begun writing a new tutorial called "NixOS: Three Big Ideas" to let people like myself, who haven't given Nix the time of day, explore its concepts in about 20 minutes or so.
It depends. Depends on a whole lot of things, and Nix isn't special in this:
Do you re-use an existing artefact or do you build a new one?
You can put Nix in Docker and in Vagrant and get the same result every time. You can also not use Nix, create a Docker image or a Vagrant box, and whenever you instantiate one of those you get the same result every time. You can also involve Nix and not pin to a specific release, and now you get a different result every time. That would be the same as creating a Dockerfile with a mutable tag or a Vagrantfile with a dynamic setup.
Usually you don't really want "the more deterministic way to get all, and only, the things you absolutely need". That is something that is probably only really useful in CI, when you need reproducible builds. For everything else you probably want to be tracking a stable release for the version you target. Especially since most people aren't working to get a deterministic result, but a 'close enough' result to get the job done.
In Nix you can pin your environment down to a commit in nixpkgs, from which whenever you build your system (i.e., artefact), the result will be the same. How do you achieve this in Docker or Vagrant?
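To make that concrete, a minimal sketch of such a pin: point nix-shell at a fixed nixpkgs tarball. The commit hash below is a placeholder, and git/gcc stand in for whatever you actually need:

$ nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/<commit-sha>.tar.gz -p git gcc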
> Usually you don't really want "the more deterministic way to get all, and only, the things you absolutely need."
IMO, you do, as otherwise you'll need to fix additional issues every time your CI runs, no?
>create a Docker image or a Vagrant box, and whenever you instantiate one of those
I'm pressing X to doubt.
This is just odd analysis. Nix is strictly superior to Docker, except the learning curve. You have to really work at it to make any modern Nix setup non-reproducible.
You can avoid that unpredictability if you don't use the latest tag, or, better, create images or start containers from SHA256 digests. Those are guaranteed to be immutable.
I.e., instead of:
docker run --rm debian:trixie
you'd do
docker run --rm debian:trixie@sha256:0ee9224bb31d3622e84d15260cf759bf37f08d652cc0411ca7b1f785f87ac19c
The only disadvantage of the digest approach is that you would need to manually resolve the digest that's correct for your processor arch. Using bare tags like "debian:trixie" can resolve to a manifest list (if so configured) that has Docker automatically find the right digest for your arch.
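One way to do that lookup, assuming a Docker install with buildx available, is to inspect the manifest list, which prints one digest per platform:

$ docker buildx imagetools inspect debian:trixie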
Re Docker, chroot, etc: the boundary around a VM is both easier to understand and more hermetic than around a container.
The learner already has a pretty good idea of where a computer stops. They can pretty much transfer that knowledge to a VM and not have any problems for a long time. Whereas with Docker, what's a filesystem? What's a process namespace?
And, well, I wouldn't do anything in a container that I couldn't afford to leak out into my host system. In a chroot especially, the files are just sitting there, waiting for a newbie to touch them from the wrong context.
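To make that last point concrete, a sketch with a hypothetical chroot at /srv/rootfs:

$ sudo chroot /srv/rootfs /bin/bash
# inside, / is really /srv/rootfs on the host; from the host side,
# the "guest" files are just /srv/rootfs/..., one careless rm or
# edit away from being touched from the wrong context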
Exactly! The key thing to realize here is that tutorials have to be aimed at newbies. If they know containers, great. If they don't, and later want to learn, then writing and running a Dockerfile based on the instructions for the cool thing they successfully ran once ("Here's our VM, here are the commands we run, and here's the program running in action") is a great little practice problem.
Docker is a lot better on Linux; everywhere else you must also spin up a VM of sorts. It makes a difference if you’re running multiple instances, since you only pay for the virtualized kernel once.
Vagrant is also great for filing bug reports in software. It allows you to give the developers all the commands to reproduce the bug in a clean environment.
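For instance, a bug report can ship a small self-contained Vagrantfile along these lines, where the package and reproduction command are hypothetical placeholders:

Vagrant.configure("2") do |config|
  config.vm.box = "debian/bookworm64"
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y some-package    # hypothetical dependency
    some-command --with-buggy-flag     # hypothetical reproduction step
  SHELL
end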
So, just set config.vm.boot_timeout to 900 in the Vagrantfile, and see what happens.
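Something like this, inside the existing configure block (the value is in seconds; Vagrant's default is 300):

Vagrant.configure("2") do |config|
  config.vm.box = "debian/bookworm64"
  config.vm.boot_timeout = 900   # wait up to 15 minutes for SSH
end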
So much life-force is wasted in trying to make Docker on another OS behave like Linux.
Ironically, it is easier to run Docker and Linux inside something like a VirtualBox VM and enjoy a near-native experience.
It does appear to be in some kind of beta:
https://www.virtualbox.org/wiki/Testbuilds
https://web.archive.org/web/20240331100215/https://hiandrewq...