I have been exploring Nix for the past few months, and my experience has been simultaneously exhilarating and frustrating. On one hand, I find it hard to imagine not using Nix now; on the other, I hesitate to recommend it to colleagues due to its steep learning curve, UX issues, and potential for footguns.
I sincerely hope the Nix community improves the UX to make it more accessible to new users. Still, for those willing to invest time in learning it, Nix is extremely useful and highly recommended.
There are two sides to this problem: the first is to improve the UX, but the second is to clearly describe a compelling reason for people to adopt. It is very tempting to blame only the first, but I think we also need to tell a better story and highlight the values in a better way. That would give people a reason to push past the UX issues in the hope of achieving those desired values.
For example, people seem to have accepted the difficulties of using Terraform - learning its language, or needing to hire specialists - because the mantra of "infrastructure as code" is enough to drive adoption. What is our mantra? We need to accept that "reproducibility" isn't quite working, and that we need either a clearer message or a better explanation of the one we have.
> but the second is to clearly describe a compelling reason for people to adopt.
Build source straight from Git:

    nix run github:someuser/someproject

Need a different version?

    nix run github:someuser/someproject?ref=v1.0.0

Wanna replace some dependency?

    nix run \
      --override-input somelib github:someotheruser/somelib \
      github:someuser/someproject

Wanna fork?

    git clone https://github.com/someuser/someproject
    # do your changes
    nix run ./someproject
And the best part is, it's conceptually very simple: it's mostly just a bunch of symlinks and environment variables behind the scenes. If you wanna inspect what's in a package, just `cd /nix/store/yourpackage-HASH` and look around.
NixOS just feels like a distribution built from the ground up for Free Software. The "reproducibility" in everyday use just means that stuff won't randomly break for no reason. And if you don't wanna go the full NixOS route, you can just install the Nix package manager itself on any other distribution.
That said, the part where Nix gets painful is when it has to interact with the rest of the software world. Software that wants to auto-update itself, for example, really does not fit into the Nix ecosystem at all and can be rather annoying to get working.
There are of course numerous other pain points, missing features and all that. But being able to flip between versions, fork, compile and all that feels just so much better than anything else.
I disagree, I think the value proposition for reproducibility is clear, it's just that the learning curve "is too damn high!" I'm highly motivated to learn and use Nix (or Guix for that matter) but I've bounced off of it three or four times now, and I'm the kind of weirdo who learns new PLs for fun.
Someone once said that you don't learn Nix, you reverse engineer it.
Two of the major differences between Terraform and Nix that I see are: 1. it’s possible to muddle through in Terraform, and 2. Hashicorp has put a non-trivial amount of effort into documentation for all levels of users. I’ve taken a stab at using Nix for Rust projects and could not even get to a point where I had something that functioned. I found plenty of material online, but was it out of date, idiosyncratic, did it use flakes or not, etc.? I suppose I could have contorted my existing project to match the examples I found in various GitHub repos, but my stuff is bog-standard Rust, so I don’t know that I’d be willing to. As for documentation, what should I, as a new and invested user, be looking for? Are flakes the future? Are they a distraction? Why are all the suggested docs I could find several-year-old guides on blogs? There’s 20 years of information floating around, and as for the official project documentation, well, I don’t know who the audience is, but it’s not learners.
It’s a shame. The promise of Nix/NixOS is really interesting — being able to deterministically create VMs with a custom user land is desirable to me — but in practice I can’t even get a simplistic project to compile, let alone something elaborate. Terraform is jank but it’s not a whole language that needs to be learned, seemingly, before the official docs start to become coherent in their underlying context.
> We need to accept that "reproducibility" isn't quite working...
I guess it doesn't sell Nix as strongly as it could.
But it's hardly for lack of enthusiasm on Nix users' part. Rather, I've seen a few "what's Nix good for anyway" comments, and they result in many lengthy replies extolling Nix.
Agreed, reproducibility is only one aspect of Nix and doesn't quite capture the whole picture. That's why so many newcomers see Nix as nothing more than a Docker replacement. There are also too many misconceptions about Nix the language that scare people off.
I'd like to see more being discussed about:
* Its unique ability to treat packages as programmable data (i.e., derivations)
* Its use case as a building block for deployment systems that know about and integrate with packages
* Its JSON-like simplicity
They're all central to the Nix experience, and yet they're often overlooked in Nix discussions.
I wonder if the corporate backing behind Docker has anything to do with Nix being adopted less. There’s a lot of overlap between Nix and Docker, and Docker had major corporate guns behind it from early on. Reproducibility, as you mentioned, is easily achieved with Docker, and there is no need to learn the Nix ecosystem.
Personally, I want to learn Nix, but I’ve never forced myself to do it because it’s so much easier to make a Docker image do what I want. Nix is the pinnacle of a reproducible environment that doesn’t randomly break, but Docker is 80% of the way there and much easier.
It’s like comparing trucks (Docker) with planes (Nix) for logistics. I can pay out the wazoo for a plane to get my package there over night, or I can pay a small fraction to wait a few days.
What I see as a barrier to entry for Nix/NixOS is not the UX but the available documentation, or lack thereof. One may consider the docs part of the UX, though. I am in the process of writing a book about NixOS; you may track the progress here:
https://drakerossman.com/blog/practical-nixos-the-book
The emphasis is on how to make it as practical as possible, plus cover the topics which may apply to Linux in general, and in great detail.
My main concern is that it puts another layer of abstraction atop an already complex (and at times leaky) abstraction.
I’d love to see clearer docs about what devenv is actually doing under the covers, and how to escape-hatch into Nix land when I inevitably need to tweak something.
Also, similarly, how do I map Nix docs (often just a set of example expressions) into equivalent devenv incantations?
(It’s been a few months since I last looked so maybe things have come along since then.)
I expect my main hurdle will be stuff that isn’t in Nixpkgs. Especially for dev stuff, I’m going to want to pull in low-profile GitHub stuff or Python packages. What’s the escape hatch to bring those into the dev environment?
I'm currently working on a graphical UX, at https://mynixos.com, where you can create and share flakes without writing Nix code.
The site makes it easy to browse indexed flakes and to configure flakes via options and packages. Hopefully the structure provided by the UI makes it easier to get started with Nix flakes :)
A few antipatterns/annoyances I've come across over the years:
Importing paths based on environment variables:
There is built-in support for this, e.g. setting the env var `NIX_PATH` to `a=/foo:b=/bar`, then the Nix expressions `<a>` and `<b>` will evaluate to the paths `/foo` and `/bar`, respectively. By default, the Nix installer sets `NIX_PATH` to contain a copy of the Nixpkgs repo, so expressions can do `import <nixpkgs>` to access definitions from Nixpkgs.
The reason this is bad is that env vars vary between machines, and over time, so we don't actually know what will be imported.
These days I completely avoid this by explicitly un-setting the `NIX_PATH` env var. I only reference relative paths within a project, or else reference other projects via explicit git revisions (e.g. I import Nixpkgs by pointing the `fetchTarball` function at a GitHub archive URL).
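For instance, a pinned import looks something like this (a sketch only - the revision and sha256 are placeholders to fill in, e.g. via `nix-prefetch-url --unpack`):

```nix
import (fetchTarball {
  # Placeholder revision and hash; substitute a real Nixpkgs commit
  url    = "https://github.com/NixOS/nixpkgs/archive/REV.tar.gz";
  sha256 = "REPLACE-WITH-REAL-HASH";
}) {}
```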
Channels:
These always confused me. They're used to update the copy of Nixpkgs that the default `NIX_PATH` points to, and can also be used to manage other "updatable" things. It's all very imperative, so I don't bother (I just alter the specific git revision I'm fetching, e.g. https://hackage.haskell.org/package/update-nix-fetchgit helps to automate such updating).
Nixpkgs depends on $HOME:
The top-level API exposed by the Nixpkgs repository is a function, which can be called with various arguments to set/override things; e.g. when I'm on macOS, it will default to providing macOS packages; I can override that by calling it with `system = "x86_64-linux"`. All well and good.
The problem is that some of its default values will check for files like ~/.nixpkgs/config.nix, ~/.config/nixpkgs/overlays.nix, etc. This causes the same sort of "works on my machine" headaches that Nix was meant to solve. See https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/...
I avoid this by importing Nixpkgs via a wrapper, which defaults to calling Nixpkgs with empty values to avoid its impure defaults; but still allows me to pass along my own explicit overrides if needed.
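A wrapper along these lines (a sketch; the pin is again a placeholder) keeps the defaults pure while still accepting explicit overrides:

```nix
# nixpkgs.nix: import Nixpkgs with its impure, $HOME-dependent defaults disabled
args:
  import (fetchTarball {
    url    = "https://github.com/NixOS/nixpkgs/archive/REV.tar.gz";  # placeholder
    sha256 = "REPLACE-WITH-REAL-HASH";
  }) ({ config = {}; overlays = []; } // args)
```

Callers can then write `import ./nixpkgs.nix {}` for the pure defaults, or pass e.g. `{ system = "x86_64-linux"; }` to override.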
The imperative nix-env command:
Nix provides a command called 'nix-env' which manages a symlink called ~/.nix-profile. We can run commands to "install packages", "update packages", "remove packages", etc., which work by building different "profiles" (Nix store paths containing symlinks to a bunch of other Nix store paths).
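For reference, the imperative usage looks something like this (package names are just examples):

```
nix-env -iA nixpkgs.firefox   # "install" firefox into the current profile
nix-env -u                    # "update" everything in the profile
nix-env -e firefox            # "remove" firefox from the profile
```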
This is bad, since it's imperative and hard to reproduce (e.g. depending on what channels were pointing to when those commands were run, etc.). A much better approach is to write down such a "profile" explicitly, in a git-controlled text file, e.g. using the `pkgs.buildEnv` function; then use nix-env to just manage that single 'meta-package'.
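Such a declarative 'meta-package' can be as simple as this sketch (package choices are just examples, and ideally the Nixpkgs import would be pinned rather than using `<nixpkgs>`):

```nix
# profile.nix: a whole user environment as one derivation, e.g.
# installed with `nix-env --install --remove-all --file profile.nix`
let
  pkgs = import <nixpkgs> {};  # or, better, a pinned fetchTarball import
in pkgs.buildEnv {
  name  = "my-profile";
  paths = [ pkgs.git pkgs.ripgrep pkgs.jq ];
}
```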
Tools which treat Nix like Apt/Yum/etc.:
This isn't something I've personally done, but I've seen it happen in a few tools that try to integrate with Nix, and it just cripples their usefulness.
Package managers like Apt have a global database, which maps manually-written "names" to a bunch of metadata (versions, installed or not, names of dependencies, names of conflicting packages, etc.). In that world names are unique and global: if two packages have the name "foo", they are the same package; clashes must be resolved by inventing new names. Such names are also fetchable/realisable: we just plug the name and "version number" (another manually-written name) into a certain pattern, and do an HTTP GET on one of our mirrors.
In Nix, all the above features apply to "store paths", which are not manually written: they contain hashes, like /nix/store/wbkgl57gvwm1qbfjx0ah6kgs4fzz571x-python3-3.9.6, which can be verified against their contents and/or build script (AKA 'derivation'). Store paths are not designed to be managed manually. Instead, the Nix language gives us a rich, composable way to describe the desired file/directory; and those descriptions are evaluated to find their associated store paths.
Nixpkgs provides an attribute set (AKA JSON object) containing tens of thousands of derivations; and often the thing we want can be described as 'the "foo" attribute of Nixpkgs', e.g. '(import <nixpkgs> {}).foo'
Some tooling that builds-on/interacts-with Nix has unfortunately limited itself to only such descriptions; e.g. accepting a list of strings, and looking each one up in the system's default Nixpkgs attribute set (this misunderstanding may come from using the 'nix-env' tool, like 'nix-env -iA firefox'; but nix-env also allows arbitrary Nix expressions too!). That's incredibly limiting, since (a) it doesn't let us dig into the structure inside those attributes (e.g. 'nixpkgs.python3Packages.pylint'); (b) it doesn't let us use the override functions that Nixpkgs provides (e.g. 'nixpkgs.maven.override { jre = nixpkgs.jdk11_headless; }'); (c) it doesn't let us specify anything outside of the 'import <nixpkgs> {}' set (e.g. in my case, I want to avoid NIX_PATH and <nixpkgs> altogether!)
Referencing non-store paths:
The Nix language treats paths and strings in different ways: strings are always passed around verbatim, but certain operations will replace paths by a 'snapshot' copied into the Nix store. For example, say we had this file saved to /home/chriswarbo/default.nix:
    # Define some constants
    with {
      # Import some particular revision of Nixpkgs
      nixpkgs = import (fetchTarball {...}) {};

      # A path value, pointing to /home/chriswarbo/defs.sh
      defs = ./defs.sh;

      # A path value, pointing to /home/chriswarbo/cmd.sh
      cmd = ./cmd.sh;
    };

    # Return a derivation which builds a text file
    nixpkgs.writeScript "my-super-duper-script" ''
      #!${nixpkgs.bash}/bin/bash
      source ${nixpkgs.lib.escapeShellArg defs}
      ${cmd} foo bar baz
    ''
Notice that the resulting script has three values spliced into it via ${...}:
- The script interpreter `nixpkgs.bash`. This is a Nix derivation, so its "output path" will be spliced into the script (e.g. /nix/store/gpbk3inlgs24a7hsgap395yvfb4l37wf-bash-5.1-p16 ). This is fine.
- The path `cmd`. Nix spots that we're splicing a path, so it copies that file into the Nix store, and that store path will be spliced into the script (e.g. /nix/store/2h3airm07gp55rn9qlax4ak35s94rpim-cmd.sh ). This is fine.
- The string `nixpkgs.lib.escapeShellArg defs`, which evaluates to the string `'/home/chriswarbo/defs.sh'`, and that will be spliced into the script. That's bad, since the result contains a reference to my home folder! The reason this happens is that paths can often be used as strings, getting implicitly converted. In this case, the function `nixpkgs.lib.escapeShellArg` transforms strings (see https://nixos.org/manual/nixpkgs/stable/#function-library-li... ), so:
- The path `./defs.sh` is implicitly converted to the string `/home/chriswarbo/defs.sh`, for input to `nixpkgs.lib.escapeShellArg` (NOTE: you can use the function `builtins.toString` to do the same thing explicitly)
- The function `nixpkgs.lib.escapeShellArg` returns the same string, but wrapped in apostrophes (it also adds escaping with backslashes, but our path doesn't need any)
- That return value is spliced as-is into the resulting script
To avoid this, we should instead splice the path into a string before escaping; giving us nested splices like this:
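A minimal sketch of the fix (the inner `${...}` converts the path into its store path, copying the file into the store, before `escapeShellArg` ever sees a string):

```nix
source ${nixpkgs.lib.escapeShellArg "${defs}"}
```

Now the escaped result contains a /nix/store/... path rather than a reference to my home folder.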
The problem with Nix is that I still have to start with a Linux system--so I still need Docker, Terraform, something to give me a stable base for Nix to work against.
At that point--why should I add Nix to the mess since I still need those other things anyway?
With Linux, the only stable base required for Nix to function is the kernel. Nix packages all the required dependencies right down to glibc. Since the Linux kernel famously "doesn't break userspace," any sufficiently new kernel will suffice. Until recently, I was able to get the latest Nix packages working on an ancient Linux 2.6 kernel. And even the kernel can be managed with Nix if you use NixOS. But Docker can't manage the kernel, so it's no use here.
As for Terraform, I don't see how it's relevant to this discussion. Nix-based SSH deployment tools can replace some of its functionality, so perhaps that's what you're talking about?
It feels so painful to go back to ‘regular’ Linux now; I immediately worry about the config-file entropy and version incompatibilities that Nix has solved for me. I'm happy I took the Nix Pill and completely skipped over Docker and its often-unnecessary overhead. The Nix store is a better solution for reproducible builds, and the syntax is a lot better than the LISP of Guix or whatever Skylark is trying to be.
Currently I'm setting up a second machine to distribute builds and share a cache on my local network. Overriding C flags Gentoo-style for better optimization is supported, but it can take a while to build, especially with LTO; Hydra only builds for generic x86_64, so sharing optimized kernels and other software is great. I successfully got a shared znver3 LTO-optimized Linux 6.1.19 kernel with ZFS support yesterday! I just wish I could have built the kernel in parallel on the faster PC and the ZFS stuff on the slower one, then resynced the build input derivations once finished after running `nixos-rebuild switch --flake ..`.
For the future, I hope distributed Nix caches become the norm like BitTorrent and we can all share optimized builds.
I’d like to learn more about the project history, who started it (early contributions), and the context under which the early work was initiated. Wikipedia has exactly two sentences on the history. Is there a better version of the Nix story available?
I'm definitely going to give that a try. I had issues with the Bash script on macOS and was annoyed that it left me with a system that I had to manually clean up.
It absolutely works and is the best thing since sliced bread within package managers — many UNIX tools are just simply broken on Mac (not always on the packaging side), and in case of more niche tools it is simply not a priority.
Hard to answer your question without info on what the problem is, but lots of people use it on MacOS. Maybe ask about your problem on /r/nixos or https://discourse.nixos.org/.
I’ve never had an issue with Nix itself on macOS but there are occasionally packages that are broken on macOS but work on Linux - despite upstream supporting macOS.
What I would like is to replace both Docker and docker compose for prod images and local dev, respectively for my team. Is this possible with nix today? It’s mostly a macOS box team.
I use nix flakes to manage my own configuration, but last time I played with building docker images on macOS I had to stand up a builder image on qemu or inside docker.
Further, I’ve historically run into friction between other package managers and Nix. The poetry2nix and pnpm2nix kinds of tools have a lot of friction (for example, private registry support for poetry is poor). My current project has a dependency on xmlsec, and it’s a bit cumbersome to build non-wheel packages on M1.
(The compile time of Nix itself is unpleasant, but not exactly exceptional among programs written in the modern C++ style. The eval time for Nixpkgs, even on a ten-year-old i5, is annoying but not a terrible problem the way it’s used now; on a recent Android device, though, it’s admittedly measured in minutes.)
With the Hydra binary cache I can quite comfortably update my Avoton C2550-based router or Raspberry Pi 4 AirPlay receiver within a few minutes. It’s only if I need to build a non-trivial package that I have to make sure I build on my desktop or in a VM on my MacBook.
I like the principle of Nix that one can simultaneously install different versions of the same software and make layered choices of what version to use with what or depending on the use case. Nix has spearheaded that principle and that's great.
That being said, that fine-grained layering selection is done via symlinks in Nix, AFAIK, whereas a couple of newer packaging systems (e.g. OCI containers or Flatpak) can do such layering with bind mounts, namespaces, and sandboxing (and I don't just mean sandboxing at build time, but at run time), thus increasing security by selectively choosing what a package is supposed to have access to. I wonder how fast Nix will adapt to these new possibilities. I think it should do so quickly (e.g. switch to OCI as the underlying layering system; I hear the Tvix project is experimenting with that?), as that could establish Nix as the dominant system/distribution in that field; otherwise it risks being overtaken and left behind by whichever OCI-container-based distribution comes out dominant.
There is currently (temporarily) a unique window of opportunity in that:
* Docker is totally ruining their position in the OCI world, and has never really put effort into building a comprehensive, quality, curated distribution. That is: their registry may be "comprehensive" as in offering a large choice, but apart from a small set of base images, it's mostly a hotchpotch of low-quality, uncurated images with uncertain security, often found to be severely lacking in that domain.
* Red Hat has a much too closed policy for their OCI registries and has made the mistake of restricting their OCI stuff to the server side, while Fedora pushes Flatpak/Flathub, which is too restricted to the desktop. That artificial chasm between a server-only and a desktop-only system sucks.
* Ubuntu has completely borked their attempts at new sandboxed/layered package formats; snap sucks. And Debian and the other remaining big distros have nothing in that category.
Nix has the advantage of already having a large, comprehensive and curated set of packages. All it needs is to adopt OCI as its underlying layering system (instead of symlinks), make its large package base trivially accessible to OCI, and make an effort on UX (a little more accessible and easier) and it could come out as the dominant distribution.
Treating packaging boundaries and runtime isolation as the same thing is exactly the problem with Docker and similar solutions. Just because some package didn't require another package at build time doesn't mean we don't ever want to use them together at runtime. Yet Docker conflates the two, introducing all sorts of unnecessary friction all over the place.
This is why something as simple as getting more than one process to work with each other on Docker is such an overcomplicated mess. The runtime isolation boundary set by Docker doesn't represent any sort of logical component or security boundary in your system. It merely reflects how the underlying image was built.
This is a classic anti-pattern of mixing up policy with implementation. Runtime isolation policy should be independent of build time implementation. Nix gets this right with better design and composable packages. It's trivial to create a container that includes only the packages you want, with dependencies handled automatically by Nix. Docker, on the other hand, leaves you with a binary blob (i.e., Docker image) that's neither composable nor customizable.
> Just because some package didn't require another package at build time doesn't mean we don't ever want to use them together at runtime. Yet Docker conflates the two, introducing all sorts of unnecessary friction all over the place.
But Docker in fact doesn't have compiletime dependencies; it needs you to specify runtime deps only. If you want to build something in Docker, you use two-stage builds, and the runtime deps of the first stage become your compiletime deps.
> This is why something as simple as getting more than one process to work with each other on Docker is such an overcomplicated mess
I don't get this, why is it considered an overcomplicated mess? If you want to run several processes in one container, you just launch it with a lightweight process manager, if you want to run it in separate containers -- well, that's even easier, just launch separate containers and configure the communication between them with a network.
> Nix gets this right with better design and composable packages
Nix actually implements it worse than Docker in some sense. Particularly, the exact problem that you described:
> Just because some package didn't require another package at build time doesn't mean we don't ever want to use them together at runtime
is not solved in Nix, runtime deps must be a subset of compiletime deps.
> that's neither composable nor customizable
Compositionality is a completely different issue on which I do have problems with Docker. DAG-oriented environment building is strictly better than inheritance-oriented, but that's all orthogonal with compiletime-runtime separation.
OCI is based on Linux namespaces, which provide a way to create isolated filesystem trees for processes. You probably want that, and I agree with you, but probably forcing this into OCI itself is a lost cause, since its tooling is too based on layers.
The choice of topology of the package and layering system - be it a tree as in OCI or a more general graph topology as in Nix - is only a very small part of either of these systems. I agree that the general graph topology is superior in some points.
My point though was that at the implementational level, the old symlink-based way of implementing it in Nix is severely lacking the isolation and more general security capabilities of the bind-mount, namespaces and cgroup-based approach of OCI and other newer packaging systems. And Nix needs to implement that.
My impression is that the OCI package spec isn't what would stand in the way of implementing a system combining the isolation and security of bind mounts, namespaces, and cgroups with a graph topology like Nix's, and that there would thus be an opportunity to combine the two, which would help Nix take the dominant position in that space. If OCI turned out to be impossible to use with Nix's more graph-based approach, that would mean much more implementation work; not that it couldn't be done, but it would still need to be done. Either way, Nix cannot stay with its old symlink-based layering: failing to implement the security features now expected from modern packaging systems (isolation, bind mounts, cgroups, namespaces, etc.) is a surefire way to maneuver progressively into irrelevance.
Nix has a window of opportunity here due to the current weakness of the big players in the field. But it can't afford to let that slip.
> The emphasis is on how to make it as practical as possible, plus cover the topics which may apply to Linux in general, and in great detail.
It's primarily targeted at improving DX and on-boarding users quickly.
> I’d love to see clearer docs about what devenv is actually doing under the covers, and how to escape-hatch into Nix land when I inevitably need to tweak something.
devenv.sh is quite nice, and I have already started using it.
I expect my main hurdle will be stuff that isn’t in NicPkgs. Especially for dev stuff, I’m going to want to pull in low-profile GitHub stuff or Python packages. What’s the escape hatch to bring those into the dev environment?
The site makes it easy to browse indexed flakes and configure flakes via options and packages. Hopefully the structure provided by the UI can makes it easier to get started with Nix flakes :)
Importing paths based on environment variables:
There is built-in support for this, e.g. setting the env var `NIX_PATH` to `a=/foo:b=/bar`, then the Nix expressions `<a>` and `<b>` will evaluate to the paths `/foo` and `/bar`, respectively. By default, the Nix installer sets `NIX_PATH` to contain a copy of the Nixpkgs repo, so expressions can do `import <nixpkgs>` to access definitions from Nixpkgs.
The reason this is bad is that env vars vary between machines, and over time, so we don't actually know what will be imported.
These days I completely avoid this by explicitly un-setting the `NIX_PATH` env var. I only reference relative paths within a project, or else reference other projects via explicit git revisions (e.g. I import Nixpkgs by pointing the `fetchTarball` function at a github archive URL)
Channels:
These always confused me. They're used to update the copy of Nixpkgs that the default `NIX_PATH` points to, and can also be used to manage other "updatable" things. It's all very imperative, so I don't bother (I just alter the specific git revision I'm fetching, e.g. https://hackage.haskell.org/package/update-nix-fetchgit helps to automate such updating).
Nixpkgs depends on $HOME:
The top-level API exposed by the Nixpkgs repository is a function, which can be called with various arguments to set/override things; e.g. when I'm on macOS, it will default to providing macOS packages; I can override that by calling it with `system = "x86_64-linux"`. All well and good.
The problem is that some of its default values will check for files like ~/.nixpkgs/config.nix, ~/.config/nixpkgs/overlays.nix, etc. This causes the same sort of "works on my machine" headaches that Nix was meant to solve. See https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/...
I avoid this by importing Nixpkgs via a wrapper, which defaults to calling Nixpkgs with empty values to avoid its impure defaults; but still allows me to pass along my own explicit overrides if needed.
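Such a wrapper might look roughly like this (a sketch; `nixpkgsSrc` stands for however the pinned Nixpkgs checkout is obtained):

```nix
# wrapper.nix: call Nixpkgs with empty config/overlays so that
# ~/.config/nixpkgs/* is never consulted; other args still pass through.
{ nixpkgsSrc, ... } @ args:
  import nixpkgsSrc ({
    config   = {};   # don't read ~/.config/nixpkgs/config.nix
    overlays = [];   # don't read ~/.config/nixpkgs/overlays.nix
  } // builtins.removeAttrs args [ "nixpkgsSrc" ])
```

Explicit overrides (e.g. `system = "x86_64-linux";`) can still be passed in, since they are merged over the empty defaults.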
The imperative nix-env command:
Nix provides a command called 'nix-env' which manages a symlink called ~/.nix-profile. We can run commands to "install packages", "update packages", "remove packages", etc. which work by building different "profiles" (Nix store paths containing symlinks to a bunch of other Nix store paths).
This is bad, since it's imperative and hard to reproduce (e.g. depending on what channels were pointing to when those commands were run, etc.). A much better approach is to write down such a "profile" explicitly, in a git-controlled text file, e.g. using the `pkgs.buildEnv` function; then use nix-env to just manage that single 'meta-package'.
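A sketch of such a declarative profile (the package choices here are just illustrative):

```nix
# profile.nix: a single 'meta-package' describing everything I want installed.
let
  pkgs = import <nixpkgs> {};  # better: a pinned import via fetchTarball
in
  pkgs.buildEnv {
    name  = "my-profile";
    paths = [ pkgs.git pkgs.ripgrep pkgs.jq ];
  }
```

Then something like `nix-env -if profile.nix` swaps in the whole profile at once, so the git-controlled file is the single source of truth rather than a history of imperative install/remove commands.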
Tools which treat Nix like Apt/Yum/etc.
This isn't something I've personally done, but I've seen it happen in a few tools that try to integrate with Nix, and it just cripples their usefulness.
Package managers like Apt have a global database, which maps manually-written "names" to a bunch of metadata (versions, installed or not, names of dependencies, names of conflicting packages, etc.). In that world names are unique and global: if two packages have the name "foo", they are the same package; clashes must be resolved by inventing new names. Such names are also fetchable/realisable: we just plug the name and "version number" (another manually-written name) into a certain pattern, and do a HTTP GET on one of our mirrors.
In Nix, all the above features apply to "store paths", which are not manually written: they contain hashes, like /nix/store/wbkgl57gvwm1qbfjx0ah6kgs4fzz571x-python3-3.9.6, which can be verified against their contents and/or build script (AKA 'derivation'). Store paths are not designed to be managed manually. Instead, the Nix language gives us a rich, composable way to describe the desired file/directory; and those descriptions are evaluated to find their associated store paths.
Nixpkgs provides an attribute set (AKA JSON object) containing tens of thousands of derivations; and often the thing we want can be described as 'the "foo" attribute of Nixpkgs', e.g. '(import <nixpkgs> {}).foo'
Some tooling that builds-on/interacts-with Nix has unfortunately limited itself to only such descriptions; e.g. accepting a list of strings, and looking each one up in the system's default Nixpkgs attribute set (this misunderstanding may come from using the 'nix-env' tool, like 'nix-env -iA firefox'; but nix-env also allows arbitrary Nix expressions too!). That's incredibly limiting, since (a) it doesn't let us dig into the structure inside those attributes (e.g. 'nixpkgs.python3Packages.pylint'); (b) it doesn't let us use the override functions that Nixpkgs provides (e.g. 'nixpkgs.maven.override { jre = nixpkgs.jdk11_headless; }'); (c) it doesn't let us specify anything outside of the 'import <nixpkgs> {}' set (e.g. in my case, I want to avoid NIX_PATH and <nixpkgs> altogether!)
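For illustration, points (a) and (b) only work with real Nix expressions, not bare name strings (the package names here are the ones from the examples above):

```nix
let
  pkgs = import <nixpkgs> {};
in {
  # (a) digging into nested attribute sets
  linter  = pkgs.python3Packages.pylint;
  # (b) using the override functions that Nixpkgs provides
  maven11 = pkgs.maven.override { jre = pkgs.jdk11_headless; };
}
```

A tool that only accepts a flat list of attribute names can express neither of these, let alone (c), expressions that don't come from Nixpkgs at all.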
Referencing non-store paths:
The Nix language treats paths and strings in different ways: strings are always passed around verbatim, but certain operations will replace paths by a 'snapshot' copied into the Nix store. For example, say we had this file saved to /home/chriswarbo/default.nix:
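The snippet itself appears to have been lost in transit; based on the description that follows, it was roughly of this shape (a hypothetical reconstruction, the original may have differed in detail):

```nix
# Hypothetical reconstruction of /home/chriswarbo/default.nix,
# inferred from the three splices described below.
with rec {
  nixpkgs = import <nixpkgs> {};
  cmd  = ./cmd.sh;
  defs = ./defs.sh;
};
nixpkgs.writeScript "go.sh" ''
  #!${nixpkgs.bash}/bin/bash
  source ${cmd}
  source ${nixpkgs.lib.escapeShellArg defs}
''
```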
Notice that the resulting script has three values spliced into it via ${...}:
- The script interpreter `nixpkgs.bash`. This is a Nix derivation, so its "output path" will be spliced into the script (e.g. /nix/store/gpbk3inlgs24a7hsgap395yvfb4l37wf-bash-5.1-p16 ). This is fine.
- The path `cmd`. Nix spots that we're splicing a path, so it copies that file into the Nix store, and that store path will be spliced into the script (e.g. /nix/store/2h3airm07gp55rn9qlax4ak35s94rpim-cmd.sh ). This is fine.
- The string `nixpkgs.lib.escapeShellArg defs`, which evaluates to the string `'/home/chriswarbo/defs.sh'`, and that will be spliced into the script. That's bad, since the result contains a reference to my home folder! The reason this happens is that paths can often be used as strings, getting implicitly converted. In this case, the function `nixpkgs.lib.escapeShellArg` transforms strings (see https://nixos.org/manual/nixpkgs/stable/#function-library-li... ), so:
- The path `./defs.sh` is implicitly converted to the string `/home/chriswarbo/defs.sh`, for input to `nixpkgs.lib.escapeShellArg` (NOTE: you can use the function `builtins.toString` to do the same thing explicitly)
- The function `nixpkgs.lib.escapeShellArg` returns the same string, but wrapped in apostrophes (it also adds escaping with backslashes, but our path doesn't need any)
- That return value is spliced as-is into the resulting script
To avoid this, we should instead splice the path into a string before escaping; giving us nested splices like this:
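For example (a sketch; `defs` is the binding holding the `./defs.sh` path from the description above):

```nix
# Splice the path into a string first: "${defs}" forces the file to be
# copied into the Nix store, so escapeShellArg only ever sees the
# resulting store path, never /home/chriswarbo/defs.sh.
source ${nixpkgs.lib.escapeShellArg "${defs}"}
```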
At that point, why should I add Nix to the mess, since I still need those other things anyway?
With Linux, the only stable base required for Nix to function is the kernel. Nix packages all the required dependencies right down to glibc. Since the Linux kernel famously "doesn't break userspace," any sufficiently new kernel will suffice. Until recently, I was able to get the latest Nix packages working on an ancient Linux 2.6 kernel. And even the kernel can be managed with Nix if you use NixOS. But Docker can't, so it's no use here.
As for Terraform, I don't see how it's relevant to this discussion. Nix-based SSH deployment tools can replace some of its functionality, so perhaps that's what you're talking about?
Nix + LLM.
Especially if using GPT-4 for quality. You get all of the usefulness of Nix, but created or edited using plain English.
And then of course at the end you output the nix as an artifact to live in version control.
for what specifically?
Currently I'm setting up a second machine to distribute builds and share a cache on my local network. Overriding C flags Gentoo-style for better optimization is supported, but builds can take a while (especially with LTO), since Hydra only builds for generic x86_64, so sharing optimized kernels and other software is great. I successfully got a shared znver3, LTO-optimized Linux 6.1.19 kernel with ZFS support yesterday! I just wish I could have built the kernel on the faster PC and the ZFS stuff on the slower one in parallel, then resynced the build input derivations once `nixos-rebuild switch --flake ..` had finished.
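For reference, the NixOS side of such a distributed-build setup looks roughly like this (the hostname, user, key path and job count are made up for illustration):

```nix
# configuration.nix fragment: offload builds to a second machine over SSH.
{
  nix.distributedBuilds = true;
  nix.buildMachines = [{
    hostName = "fast-pc";              # hypothetical builder host
    system   = "x86_64-linux";
    sshUser  = "builder";
    sshKey   = "/root/.ssh/id_builder";
    maxJobs  = 8;
  }];
}
```

With this in place, Nix transparently sends eligible derivations to the listed machine and copies the results back.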
For the future, I hope distributed Nix caches become the norm like BitTorrent and we can all share optimized builds.
Seriously, how is anything so hard to use still around after 20 years!
I use nix flakes to manage my own configuration, but last time I played with building docker images on macOS I had to stand up a builder image on qemu or inside docker.
Further, I’ve historically run into friction between other package managers and Nix. The poetry2nix and pnpm2nix kinds of tools have a lot of friction (for example, private registry support for poetry is poor). My current project has a dependency on xmlsec, and it’s a bit cumbersome to handle building non-wheel packages on M1.
(The compile time of Nix itself is unpleasant, but not exactly exceptional among programs written in the modern C++ style. The eval time for Nixpkgs, even on a ten-year-old i5, is annoying but not a terrible problem the way it’s used now, though even on a recent Android device it’s admittedly measured in minutes.)
(yeah sorry couldn't resist, I know this is probably misleading)
E.g. take a look at https://devenv.sh/ or https://www.jetpack.io/devbox/docs/installing_devbox/ , which aim to leverage Nix without requiring Nix knowledge.
That being said, that fine-grained layering selection is done via symlinks in Nix afaik, whereas a couple of newer packaging systems (e.g. OCI containers or Flatpak) can do such layering with newer mechanisms like bind mounts and namespaces plus sandboxing (and I don't just mean sandboxing at build time but at run time), and thus increase security by selectively choosing what a package is supposed to have access to. I wonder how fast Nix will adapt to such new possibilities. I think it should do so quickly (e.g. switch to OCI as the underlying layering system; I hear the Tvix project is experimenting with that?), as that could establish Nix as the dominant system/distribution in that field; otherwise it would be overtaken and left behind by whatever OCI-container-based distribution comes out as the dominant one.
There is currently (temporarily) a unique window of opportunity in that:
* Docker is totally ruining their position in the OCI world, and has never really put effort into building a comprehensive, quality, curated distribution. That is: their registry may be "comprehensive" as in large choice, but apart from a small set of base images, it's mostly a hotchpotch of low-quality, uncurated images that are often found to be severely lacking in the security domain.
* Red Hat has a much too closed policy for its OCI registries and has made the mistake of restricting its OCI stuff to the server side, while Fedora pushes Flatpak/Flathub, which is too restricted to the desktop. That artificial chasm between a server-only and a desktop-only system sucks.
* Ubuntu has completely borked its attempts at new sandboxed/layered package formats; snap sucks. And Debian and the other remaining big distros have nothing in that category.
Nix has the advantage of already having a large, comprehensive and curated set of packages. All it needs is to adopt OCI as its underlying layering system (instead of symlinks), make its large package base trivially accessible to OCI, and put some effort into UX (a little more accessible and easier), and it could come out as the dominant distribution.
This is why something as simple as getting more than one process to work with each other on Docker is such an overcomplicated mess. The runtime isolation boundary set by Docker doesn't represent any sort of logical component or security boundary in your system. It merely reflects how the underlying image was built.
This is a classic anti-pattern of mixing up policy with implementation. Runtime isolation policy should be independent of build time implementation. Nix gets this right with better design and composable packages. It's trivial to create a container that includes only the packages you want, with dependencies handled automatically by Nix. Docker, on the other hand, leaves you with a binary blob (i.e., Docker image) that's neither composable nor customizable.
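For example, Nixpkgs' `dockerTools` can build such a minimal image declaratively (a sketch; the image name and package choice are arbitrary):

```nix
# Build an OCI/Docker image containing only one package and its
# runtime closure, with dependencies resolved by Nix.
let
  pkgs = import <nixpkgs> {};
in
  pkgs.dockerTools.buildImage {
    name = "hello";
    # only 'hello' and its transitive runtime deps end up in the image
    config.Cmd = [ "${pkgs.hello}/bin/hello" ];
  }
```

The resulting image contains exactly the closure of the packages referenced, nothing inherited from an opaque base image.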
But Docker in fact doesn't have compile-time dependencies; it only needs you to specify runtime deps. If you want to build something in Docker, you use two-stage builds, and the runtime deps of the first stage become your compile-time deps.
> This is why something as simple as getting more than one process to work with each other on Docker is such an overcomplicated mess
I don't get this; why is it considered an overcomplicated mess? If you want to run several processes in one container, you just launch it with a lightweight process manager; if you want to run them in separate containers -- well, that's even easier: just launch separate containers and configure the communication between them over a network.
> Nix gets this right with better design and composable packages
Nix actually implements it worse than Docker in some sense. Particularly, the exact problem that you described:
> Just because some package didn't require another package at build time doesn't mean we don't ever want to use them together at runtime
is not solved in Nix: runtime deps must be a subset of compile-time deps.
> that's neither composable nor customizable
Composability is a completely different issue, and there I do have problems with Docker. DAG-oriented environment building is strictly better than inheritance-oriented building, but that's all orthogonal to the compile-time/runtime separation.
OCI should adopt a Nix-based approach.
My point though was that at the implementation level, the old symlink-based way of implementing it in Nix severely lacks the isolation and more general security capabilities of the bind-mount, namespace and cgroup-based approach of OCI and other newer packaging systems. And Nix needs to implement that.
My impression is that the OCI package spec isn't what would stand in the way of implementing a system combining the isolation and security of bind mounts, namespaces and cgroups (as in OCI) with a graph topology like Nix's, so there is an opportunity to combine the two, which would help Nix take the dominant position in that space. If OCI turned out to be impossible to use with Nix's more graph-based approach, that would mean much more implementation work -- not that it couldn't be done, but it would still need to be done. Either way, Nix cannot stay with its old symlink-based layering: failing to implement the security features now expected from modern packaging systems (isolation, bind mounts, cgroups, namespaces, etc.) is a surefire way to progressively maneuver itself into irrelevance.
Nix has a window of opportunity here due to the current weakness of the big players in the field. But it can't afford to let that slip.