I would like a comparison with runit, which is a very minimal but almost full-fledged init system. I see many similarities: control directories, no declarative dependencies, a similar set of scripts, the same approach to logging. The page mentions runit in passing, and even suggests using the chpst utility from it.
One contrasting feature is parametrized services: several similar processes (like agetty) can be controlled by one service directory; I find it neat.
Another difference is the ability to initiate reboot or shutdown as an action of the same binary (nitroctl).
Also, it's a single binary; runit has several.
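For anyone who hasn't seen the parametrized setup, a sketch (paths illustrative; exactly how the parameter reaches the script is my assumption from memory, so check the README):

    #!/bin/sh
    # /etc/nitro/getty@/run -- one template directory for every instance
    exec getty 38400 "$1"    # "$1" standing in for the instance name, e.g. tty1

Then getty@tty1, getty@tty2, ... are all controlled through nitroctl while sharing that one directory.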
Last year I decommissioned our last couple of servers that ran processes configured using runit. It was a sad day. I first learned to write runit services probably about 15 years ago, and it was very cool and very understandable; I kind of just thought that's how services worked on Linux.
Then I left Linux for about 5 years and, by the time I got back, systemd had taken over. I heard a few bad things about it, but eventually learned to recognise that so many of those arguments were in such bad faith that I don't even know what the real ones are any more. Currently I run a couple of services on Pi Zeros streaming camera and temperature data from the vivarium of our bearded dragon, and it was so very easy to set them up using systemd. And I could use it to run emacsd on my main openSUSE desktop. And a Google Drive FUSE solution on my work laptop. "Having something standard is good, actually", I guess.
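To illustrate the "so very easy" part, one of those Pi services is more or less just this (names and paths made up, but the shape is right):

    # /etc/systemd/system/vivarium-cam.service
    [Unit]
    Description=Vivarium camera stream
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/stream-camera.sh
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

Then systemctl enable --now vivarium-cam and it's done.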
I made a process supervisor, probably less simple than nitro but much simpler (and more focused) than systemd.
Aside from the overreach, I think there are some legitimate issues with systemd:
- It's really hard to make services reliable. There are all sorts of events in systemd which will cause something to turn off and then just stay off.
- It doesn't really help that the things you tell it to do (start/stop this service) flip the same internal state bits as when some dependency turns something on, so manual intent and automatic activation get conflated.
- All the commands have custom, nonstandard output, mostly for human consumption. This makes it really hard to interface with reliably if you need to write tooling around systemd (example below). INI files are not standardized, especially systemd's dialect.
- The two-way (Requires=, RequiredBy=) dependencies make it really hard to get a big picture of the control graph.
FWIW here's mine, where I wrote a bit more about the issues: https://github.com/andrewbaxter/puteron/
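On the machine-readable point, the closest thing I've found is systemctl show, which at least emits key=value pairs instead of the status output meant for humans, though even that format feels underspecified:

    # prints one Property=Value line per requested property
    systemctl show -p ActiveState,SubState nginx.service
    # ActiveState=active
    # SubState=running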
The backlash against systemd was twofold. On one hand, when released and thrust upon distros via GNOME, it was quite rough around the edges, which caused both real problems and understandable irritation. Fifteen years on, the kinks have been ironed out, but it took a long time. (Btrfs, released at about the same time, took even longer to stop being imprudent to use in production.)
On the other hand, systemd replaces Unix (sort of like Hurd, but differently). It grabs system init, logging, authentication, DNS, session management, cron, daemon monitoring, socket activation, running containers, etc. In an ideal Red Hat world, I suppose, a bare-metal box would contain a kernel, systemd, podman, IP tools, and maybe sshd and busybox. This is a very anti-Unix, mainframe-like approach, but for a big consulting firm like Red Hat / IBM, it is very attractive.
One of the main issues with systemd (as someone who uses it everywhere) is IMO that even experienced people can have a hard time understanding in which context a command runs.
E.g. if you "just" want to automate a script that you were running from a terminal as a user, there can be a ton of problems and it is hard to figure them out ahead of time.
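A quick way to see the gap is to compare your shell's environment with what a unit actually gets:

    env | sort                     # your login shell: PATH tweaks, DISPLAY, ssh-agent, dotfile exports
    systemd-run --user --pty env   # a transient unit: a near-empty environment

Anything the script silently relied on shows up in the first list and is missing from the second.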
The thing I don't like about systemd is the inexplicable need to have multiple unit files for one logical service. Why can't they all be declared in a single file?
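The classic case is a scheduled job: a .service for what to run and a .timer for when, tied together only by sharing a name (file names illustrative):

    # ~/.config/systemd/user/backup.service -> [Service] ExecStart=...
    # ~/.config/systemd/user/backup.timer   -> [Timer] OnCalendar=daily
    systemctl --user enable --now backup.timer

Two files for one logical thing.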
Leah Neukirchen is an active member of the Void Linux community, so I expect a lot of cross-pollination here. It would be really great if she could write up how to use it with Void.
What I got from looking at that comparison is that runit starts a separate supervisor process for each process started. I like the cleaner process tree of nitro, but I wonder what the tradeoffs are for each.
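Roughly, sketched from the docs rather than a live box:

    # runit: one runsv(8) per service, all under runsvdir(8)
    runsvdir-+-runsv---myapp
             `-runsv---sshd
    # nitro: a single supervisor parents every service directly
    nitro-+-myapp
          `-sshd

The per-service runsv means one crashed supervisor doesn't affect the rest; a single supervisor means fewer processes but a bigger blast radius if it dies.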
I've gotten used to runit via Void Linux, and while it does the job of an init system, its UI and documentation leave something to be desired. The way logging is configured in particular was an exercise in frustration the last time I tried to set it up for a service.
I wouldn't mind trying something else that is as simple, but has sane defaults, better documentation, and a more intuitive UI.
I like using systemd, but it doesn't have great documentation either. I often find myself unable to grok things by reading only the official documentation, and I have to resort to forum posts, other people's blog posts, or Stack Overflow. To me, documentation isn't good enough until it needs no third-party material.
Logging in runit seems simple (I don't remember running into problems), but indeed, the documentation leaves much to be desired. Could be a good thing to contribute to the Void Handbook.
* https://jdebp.uk/FGA/daemontools-family.html#Logging
* https://jdebp.uk/Softwares/nosh/guide/logging.html
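For reference, the usual shape is a log/ subdirectory next to run; a minimal sketch (the log directory has to exist and be writable by the logging user):

    #!/bin/sh
    # /etc/sv/myapp/log/run -- runit pipes the service's stdout into this process
    exec svlogd -tt /var/log/myapp

It's one extra script, but you have to already know this shape, which is exactly the documentation gap.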
Because it's stupid easy? I just have to execute shell one-liners and set environment variables; no need to read lengthy docs and do stuff the systemd way.
We use runit to supervise our services. It has been 100% reliable for us, as opposed to systemd, which sometimes fails in mysterious ways.
I'm always torn when I see anything mentioning running an init system in a container. On one hand, I guess it's good that it's designed with that use case in mind. Mainly, though, I've just seen too many overly complicated things attempted (on greenfield even) inside a single container when they should have instead been designed for kubernetes/cloud/whatever-they-run-on directly and more properly decoupled.
It's probably just one of those "people are going to do it anyway" things. But I'm not sure if it's better to "do it better" and risk spreading the problem, or leave people with older solutions that fail harder.
Yes, application containers should stick to the Unix philosophy of "do one thing and do it well." But if the thing in your docker container forks for _any_ reason, you should have a real init on PID 1.
Got a handy list of those? My colleagues use supervisord and it kinda bugs me. Would love to know if it makes the list.
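For plain Docker, the low-effort item on that list is built in: --init injects a minimal init (tini) as PID 1 to reap zombies and forward signals:

    docker run --init myimage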
There's nothing inherently wrong with containers in the abstract: virtualization is a critical tool in computer science (some might say it's difficult to define computer science without a virtual machine). There's not even anything wrong with this "less than a new kernel, more than a new libc" neighborhood.
The broken, ugly, malignant thing is this one godawful implementation, Docker, and its attic-dwelling Quasimodo cousin, docker-compose.yml.
It's trivial to slot namespaces (or jails on BSD, if you like the finer things) into a sane init system, process-id regime, network-interface regime: it's an exercise in choosing good defaults for all the unshare-adjacent parameters.
But a whole generation of SWEs memorized docker jank instead of Unix, and so now people are emotionally invested in it. You run compose to run docker to get Alpine and a node built on musl.
You can just link node to musl. And if you want a chroot or a new tuntap scope? man unshare.
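Concretely, the "exercise" is roughly one flag per namespace (these are real unshare(1) options; the curation is in choosing defaults):

    # new PID, mount, and network namespaces; remount /proc so
    # ps(1) inside sees only the new PID namespace
    unshare --pid --fork --mount-proc --net -- /bin/sh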
From my experience in the robotics space, a lot of containers start life as "this used to be a bare metal thing and then we moved it into a container", and with a lot of unstructured RPC going on between processes, there's little benefit in breaking up the processes into separate containers.
Supervisor, runit, systemd, even a tmux session are all popular options for how to run a bunch of stuff in a monolithic "app" container.
My experience in the robotics space is that containers are a way to avoid knowing how to put a system together properly. It's the quick equivalent of "I install it on my Ubuntu, then I clone my whole system into a .iso and call that a distribution", most of the time distributed without any consideration for the open-source licences of what's inside.
> Supervisor, runit, systemd, even a tmux session are all popular options for how to run a bunch of stuff in a monolithic "app" container.
Did docker+systemd get fixed at some point? I would be surprised to hear that it was popular given the hoops you had to jump through the last time I looked at it.
Nitro does not declaratively handle service dependencies, so you cannot get a neat graph of them in one command.
You can still request other services to start in your setup script, and expect nitro to wait and retry starting your service until the service it depends on is running. To get a nice graph, you can write a simple script using grep. OTOH, it's easy to forget to require the shutdown of dependent services when your service goes down, and there's no way to discover that using a nitro utility.
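A sketch of that pattern, assuming a postgres service to depend on (the nitroctl subcommand name and paths are from memory, so double-check against the docs):

    #!/bin/sh
    # /etc/nitro/myapp/setup -- runs before the run script
    nitroctl start postgres     # ask the supervisor for the dependency
    pg_isready -q || exit 1     # not usable yet? exit nonzero and nitro retries us

And the "nice graph" via grep, listing who starts what:

    grep -r "nitroctl start" /etc/nitro/*/setup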
How does this compare to s6? I recently used it to set up an init system in docker containers and was wondering if nitro would be a good alternative (there are a lot of files I had to set up via s6-overlay, which wasn't as intuitive as I'd hoped).
Thanks! Reading some of your other comments, it seems like runit or nitro may not have been a good choice for my use case? (I'm using dependencies between services so that a specific order is enforced, and also logging for 3 different services.)
You seem to know quite a bit about init systems - for containers in particular, do you have some heuristics on which init system would work best for specific use cases?
Yeah, we only recently broke it out as a standalone repo/binary, as everyone historically vendored it, so the docs will get some love soon. It will be part of the next stagex release, built and signed deterministically by multiple parties, as stagex/user-nit.
To run it, all you need to know is: put it in your filesystem as "/init", then add this to your kernel command line, naming the binary you want nit to pivot to after bringing the system up:
nit.target=/path/to/binary
That's it. Minimum viable init for single-application appliance/embedded Linux use cases.
nit and your target binary are the only things you actually need to have in your CPIO root filesystem. It can be empty otherwise.
It's <500 lines and uses only the Rust standard library, to make auditing easy.
https://git.distrust.co/public/nit
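If you want to poke at it without hardware, something like this should work (file names illustrative):

    qemu-system-x86_64 -nographic \
        -kernel bzImage \
        -initrd rootfs.cpio \
        -append "console=ttyS0 nit.target=/usr/bin/myapp"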
I wrote my own init system in C from scratch some 13 years ago. It was more work than anticipated by me and the manager who approved it. It served its purpose: bringing up a Linux GUI and some backend for it on not-so-capable hardware in n seconds (I don't remember n, but it was impressive).
It was a nice programming exercise. I wouldn't be surprised if even back then something like that already existed and the whole effort just demonstrated a lack of insight into what was readily available.
Probably the code still exists on some backup I should not have. I have not looked back and don't know... The company that owned the rights has gone out of business.
Edit: After typing this, it came to mind that a colleague of mine wrote yet another init at the same company. Mine had no dependencies except libc and not many features. The new one was built around libevent, probably a bit more advanced.
"If runsvdir receives a TERM signal, it exits with 0 immediately"
Is that a selling point? Could you explain why?
I've heard plenty of reasons why people find systemd distasteful as an init, but I've not heard much criticism of a declarative design.
I often find myself wanting to run more than one process in a container for pricing reasons.
This will be a game changer for porting NixOS to new init systems, and even new kernels.
So it's a good time to be experimenting with things like Nitro here!
Giving the readme a brief scan, it doesn't look like it currently handles service dependencies?
https://docs.aws.amazon.com/whitepapers/latest/security-desi...