I share the author's sentiment completely. At my day job, I manage multiple Kubernetes clusters running dozens of microservices with relative ease. However, for my hobby projects—which generate no revenue and thus have minimal budgets—I find myself in a frustrating position: desperately wanting to use Kubernetes but unable to due to its resource requirements. Kubernetes is simply too resource-intensive to run on a $10/month VPS with just 1 shared vCPU and 2GB of RAM.
This limitation creates numerous headaches. Instead of Deployments, I'm stuck with manual docker compose up/down commands over SSH. Rather than using Ingress, I have to rely on Traefik's container discovery functionality. Recently, I even wrote a small script to manage crontab idempotently because I can't use CronJobs. I'm constantly reinventing solutions to problems that Kubernetes already solves—just less efficiently.
What I really wish for is a lightweight alternative offering a Kubernetes-compatible API that runs well on inexpensive VPS instances. The gap between enterprise-grade container orchestration and affordable hobby hosting remains frustratingly wide.
> What I really wish for is a lightweight alternative offering a Kubernetes-compatible API that runs well on inexpensive VPS instances. The gap between enterprise-grade container orchestration and affordable hobby hosting remains frustratingly wide.
Depending on how much of the Kube API you need, Podman is that. It can generate containers and pods from Kubernetes manifests [0]. Kind of works like docker compose but with Kubernetes manifests.
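To make that concrete, here is a minimal sketch (the file name and image are placeholders, not anything from the article): you write an ordinary Kubernetes manifest and hand it to podman kube play.

  # app.yaml -- a plain Kubernetes Deployment
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: whoami
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: whoami
    template:
      metadata:
        labels:
          app: whoami
      spec:
        containers:
        - name: whoami
          image: docker.io/traefik/whoami:latest
          ports:
          - containerPort: 80

  podman kube play app.yaml   # create the pod(s) from the manifest
  podman kube down app.yaml   # tear them down again

The same manifest then works unchanged if you ever do move it to a real cluster.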
This even works with systemd units, similar to how it's outlined in the article.
Podman also supports most (all?) of the Docker API, so docker compose works too, and you can also connect to remote sockets over SSH etc. to do things.
[0] https://docs.podman.io/en/latest/markdown/podman-kube-play.1...
[1] https://docs.podman.io/en/latest/markdown/podman-systemd.uni...
The docs don't make it clear: can it do "zero downtime" deployments? Meaning it first creates the new pod, waits for it to be healthy using the defined health checks, and only then removes the old one? Somehow integrating this with the service/ingress/whatever so network traffic only goes to the healthy one?
I tried k3s, but even on an immutable system, dealing with charts and all the other Kubernetes stuff adds a new layer of mutability, and hence maintenance, updates, and manual management steps that only really make sense on a cluster, not a single server.
If you're planning to eventually move to a cluster or you're trying to learn k8s, maybe, but if you're just hosting a single node project it's a massive effort, just because that's not what k8s is for.
I use k3s. With more than one master node, it's still a resource hog, and when one master node goes down, all of them tend to follow. 2GB of RAM is not enough, especially if you also use Longhorn for distributed storage. A single master node is fine and I haven't had it crash on me yet. In terms of scale, I'm able to use Raspberry Pis and such as agents, so I only have to rent a single €4/month VPS.
I'm laughing because I clicked your link thinking I agreed and had posted similar things and it's my comment.
Still on k3s, still love it.
My cluster is currently hosting 94 pods across 55 deployments. Using 500m CPU (half a core) average, spiking to 3 cores under moderate load, and 25GB of RAM. Biggest RAM hog is Jellyfin (which appears to have a slow leak, and gets restarted when it hits 16GB, although it's currently streaming to 5 family members).
The cluster is exclusively recycled old hardware (4 machines), mostly old gaming machines. The most recent is 5 years old, the oldest is nearing 15 years old.
The nodes are bare Arch linux installs - which are wonderfully slim, easy to configure, and light on resources.
It burns 450 watts on average, which is higher than I'd like, but mostly because I have Jellyfin and whisper/willow (self-hosted home automation via voice control) as GPU-accelerated loads - so I'm running an old nvidia 1060 and 2080.
Everything is plain old yaml, I explicitly avoid absolutely anything more complicated (including things like helm and kustomize - with very few exceptions) and it's... wonderful.
It's by far the least amount of "dev-ops" I've had to do for self hosting. Things work, it's simple, spinning up new service is a new folder and 3 new yaml files (0-namespace.yaml, 1-deployment.yaml, 2-ingress.yaml) which are just copied and edited each time.
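For illustration, those three files tend to take roughly this shape (names, image, and host are placeholders, not the setup described above; the Service is folded into the deployment file here):

  # 0-namespace.yaml
  apiVersion: v1
  kind: Namespace
  metadata:
    name: myapp
  ---
  # 1-deployment.yaml (Deployment plus its Service)
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: myapp
    namespace: myapp
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: myapp
    template:
      metadata:
        labels:
          app: myapp
      spec:
        containers:
        - name: myapp
          image: registry.example.com/myapp:latest
          ports:
          - containerPort: 8080
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: myapp
    namespace: myapp
  spec:
    selector:
      app: myapp
    ports:
    - port: 8080
  ---
  # 2-ingress.yaml
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: myapp
    namespace: myapp
  spec:
    rules:
    - host: myapp.example.com
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: myapp
              port:
                number: 8080

A `kubectl apply -f ./myapp/` then picks up the whole folder.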
Any three machines can go down and the cluster stays up (metalLB is really, really cool - ARP/NDP announcements mean any machine can announce as the primary load balancer and take the configured IP). Sometimes services take a minute to reallocate (and jellyfin gets priority over willow if I lose a gpu, and can also deploy with cpu-only transcoding as a fallback), and I haven't tried to be clever getting 100% uptime because I mostly don't care. If I'm down for 3 minutes, it's not the end of the world. I have a couple of commercial services in there, but it's free hosting for family businesses, they can also afford to be down an hour or two a year.
Overall - I'm not going back. It's great. Strongly, STRONGLY recommend k3s over microk8s. Definitely don't want to go back to single machine wrangling. The learning curve is steeper for this... but man do I spend very little time thinking about it at this point.
I've streamed video from it as far away as literally the other side of the world (GA, USA -> Taiwan). Amazon/Google/Microsoft have everyone convinced you can't host things yourself. Even for tiny projects people default to VPS's on a cloud. It's a ripoff. Put an old laptop in your basement - faster machine for free. At GCP prices... I have 30k/year worth of cloud compute in my basement, because GCP is a god damned rip off. My costs are $32/month in power, and a network connection I already have to have, and it's replaced hundreds of dollars/month in subscription costs.
For personal use-cases... basement cloud is where it's at.
Or microk8s. I'm curious what it is about k8s that is sucking up all these resources. Surely the control plane is mostly idle when you aren't doing things with it?
> Kubernetes is simply too resource-intensive to run on a $10/month VPS with just 1 shared vCPU and 2GB of RAM
I hate sounding like an Oracle shill, but Oracle Cloud's Free Tier is hands-down the most generous. It can support running quite a bit, including a small k8s cluster[1]. Their k8s backplane service is also free.
They'll give you 4 x ARM64 cores and 24GB of ram for free. You can split this into 1-4 nodes, depending on what you want.
[1] https://www.oracle.com/cloud/free/
One thing to watch out for is that you pick your "home region" when you create your account. This cannot be changed later, and your "Always Free" instances can only be created in your home region (the non-free tier doesn't have that restriction).
So choose your home region carefully. Also, note that some regions have multiple availability domains (OCI-speak for availability zones) but some only have one AD. Though if you're only running one free instance then ADs don't really matter.
There are tons of horror stories about OCI's free tier (check r/oraclecloud on reddit, tl;dr: your account may get terminated at any moment and you will lose access to all data with no recovery options). I wouldn't suggest putting anything serious on it.
I recently wrote a guide on how to create a free 3-node cluster in Oracle Cloud: https://macgain.net/posts/free-k8-cluster
The guide currently uses kubeadm to create a 3-node (1 control plane, 2 worker) cluster.
Just do it like the olden days, use ansible or similar.
I have a couple dedicated servers I fully manage with ansible. It's docker compose on steroids. Use traefik and labeling to handle reverse proxy and tls certs in a generic way, with authelia as simple auth provider. There's a lot of example projects on github.
A weekend of setup and you have a pretty easy to manage system.
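For anyone who hasn't seen the labeling trick, a hypothetical compose service wired into Traefik looks roughly like this (the router name, host, and cert resolver are placeholders that have to match your Traefik configuration):

  services:
    whoami:
      image: traefik/whoami
      restart: unless-stopped
      labels:
        - traefik.enable=true
        - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
        - traefik.http.routers.whoami.entrypoints=websecure
        - traefik.http.routers.whoami.tls.certresolver=letsencrypt
        # assumes an "authelia" forwardAuth middleware is defined elsewhere
        - traefik.http.routers.whoami.middlewares=authelia@docker

Traefik watches the Docker socket and picks the container up automatically, so adding a service is just adding labels.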
> I'm constantly reinventing solutions to problems that Kubernetes already solves—just less efficiently.
But you've already said yourself that the cost of using K8s is too high. In one sense, you're solving those problems more efficiently; it just depends on the axis you use to measure things.
That picture with the almost-empty truck seems to be the situation that he describes. He wants the 18 wheeler truck, but it is too expensive for just a suitcase.
> Kubernetes is simply too resource-intensive to run on a $10/month VPS with just 1 shared vCPU and 2GB of RAM.
That's more than what I'm paying, for far fewer resources than Hetzner offers. I'm paying about $8 a month for 4 vCPUs and 8GB of RAM: https://www.hetzner.com/cloud
Note that the really affordable ARM servers are Germany-only, so if you're in the US you'll have to deal with higher latency to save that money, but I think it's worth it.
I recently set up an arm64 VPS at netcup: https://www.netcup.com/en/server/arm-server
Got it with no location fee (and 2x storage) during the easter sale but normally US is the cheapest.
I've been using Docker swarm for internal & lightweight production workloads for 5+ years with zero issues. FD: it's a single node cluster on a reasonably powerful machine, but if anything, it's over-specced for what it does.
Which I guess makes it more than good enough for hobby stuff - I'm playing with a multi-node cluster in my homelab and it's also working fine.
I think Docker Swarm makes a lot of sense for situations where K8s is too heavyweight. "Heavyweight" either in resource consumption, or just being too complex for a simple use case.
Podman is a fairly nice bridge. If you are familiar with Kubernetes yaml, it is relatively easy to do docker-compose like things except using more familiar (for me) K8s yaml.
In terms of the cloud, I think Digital Ocean costs about $12 / month for their control plane + a small instance.
I found k3s to be a happy medium. It feels very lean and works well even on a Pi, and scales ok to a few node cluster if needed. You can even host the database on a remote mysql server, if local sqlite is too much IO.
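If anyone wants to try that, the external datastore is a single flag on the server; a sketch with made-up credentials and host:

  curl -sfL https://get.k3s.io | sh -s - server \
    --datastore-endpoint="mysql://k3s:secret@tcp(db.example.com:3306)/k3s"

Without the flag it falls back to the embedded sqlite, which is the sensible default for a single node.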
NixOS works really well for me. I used to write these kinds of idempotent scripts too but they are usually irrelevant in NixOS where that's the default behavior.
> Particularly with GitOps and Flux, making changes was a breeze.
I'm writing comin [1], which is GitOps for NixOS machines: you Git push your changes and your machines fetch and deploy them automatically.
[1] https://github.com/nlewo/comin
This is exactly why I built https://canine.sh -- basically for indie hackers to have the full experience of Heroku with the power and portability of Kubernetes.
For single server setups, it uses k3s, which takes up ~200MB of memory on your host machine. It's not ideal, but the pain of trying to wrangle docker deployments, and the cheapness of Hetzner, made it worth it.
I run my private stuff on a hosted vultr k8s cluster with 1 node for $10-$20 a month. All my hobby stuff is running on that "personal cluster" and it is that perfect sweet spot for me that you're talking about.
I don't use ingresses or loadbalancers because those cost extra, and either have the services exposed through tailscale (with tailscale operator) for stuff I only use myself, or through cloudflare argo tunnels for stuff I want internet accessible
(Once a project graduates and becomes more serious, I migrate the container off this cluster and into a proper container runner)
It’s been a couple of years since I’ve last used it, but if you want container orchestration with a relatively small footprint, maybe Hashicorp Nomad (perhaps in conjunction with Consul and Traefik) is still an option. These were all single binary tools. I did not personally run them on 2G mem VPSes, but it might still be worthwhile for you to take a look.
It looks like Nomad has a driver to run software via isolated fork/exec, as well, in addition to Docker containers.
Yeah, unless you're doing k8s for the purpose of learning job skills, it's way overkill. Just run a container with docker, or a web server outside a container if it's a website. Way easier and it will work just fine.
Let it not be idempotent. Let it crash sometimes. We lived without k8s for years and the web was OK. Your users will survive.
Out of curiosity, what is so bad about this for smaller projects?
I’ve been using https://www.coolify.io/ self hosted. It’s a good middle ground between full blown k8s and systemd services. I have a home lab where I host most of my hobby projects though. So take that into account. You can also use their cloud offering to connect to VPSs
Just go with a cloud provider that offers free control plane and shove a bunch of side projects into 1 node. I end up around $50 a month on GCP (was a bit cheaper at DO) once you include things like private docker registry etc.
The marginal cost of an additional project on the cluster is essentially $0
I've run K3s on a couple of Raspberry Pis as a homelab in the past. It's lightweight and ran nicely for a few years, but even so, one Pi was always dedicated as the controller, which seemed like a waste.
Recently I switched my entire setup (few Pi's, NAS and VM's) to NixOS. With Colmena[0] I can manage/update all hosts from one directory with a single command.
Kubernetes was a lot of fun, especially the declarative nature of it. But for small setups, where you are still managing the plumbing (OS, networking, firewall, hardening, etc.) yourself, you still need some configuration management. Might as well put the rest of your stuff in there also.
[0] https://colmena.cli.rs/unstable/
They also have regular promotions that offer e.g. double the disk space.
There you get
  6 vCore (ARM64)
  8 GB RAM
  512 GB NVMe
for $6/month, traffic inclusive: https://www.netcup.com/en/server/vps. You can choose between "6 vCore ARM64, 8 GB RAM" and "4 vCore x86, 8 GB ECC RAM" for the same price. And much more, of course.
I'm a cheapskate too, but at some point, the time you spend researching cheap hosting, signing up and getting deployed is not worth the hassle of paying a few more $ on bigger boxes.
I am curious why your no-revenue projects need the complexity, features and benefits of something like Kubernetes. Why can't you just do it the archaic way: compile your app, copy the files to a folder, run it there, and never touch it for the next 5 years? If it is a dev environment with many changes, it's on a local computer, not on a VPS, I guess. Just curious by nature, I am.
The thing is, most of those enterprise-grade container orchestration setups probably don't need k8s either.
The more I look into it, the more I think of k8s as a way to "move to micro services" without actually moving to micro services. Loosely coupled micro services shouldn't need that level of coordination if they're truly loosely coupled.
> Kubernetes is simply too resource-intensive to run on a $10/month VPS with just 1 shared vCPU and 2GB of RAM
To put this in perspective, that’s less compute than a phone released in 2013, 12 years ago, the Samsung Galaxy S4. To find this level of performance in a computer, we have to go to
The main issue is that Kubernetes has created good API and primitives for managing cloud stuff, and managing a single server is still kinda crap despite decades of effort.
I had K3S on my server, but replaced with docker + Traefik + Portainer - it’s not great, but less idle CPU use and fewer moving parts
I believe that Kubernetes is something you want to use if you have 1+ full-time SRE on your team. I actually got tired of the complexity of Kubernetes, AWS ECS and Docker as well, and just built a tool to deploy apps natively on the host. What's wrong with using Linux native primitives - systemd, crontab, postgresql or redis native packages? Those should work as intended; you don't need them in a container.
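The plain-systemd shape of such a deployment is pretty small; a sketch with made-up names and paths (not the tool mentioned above):

  # /etc/systemd/system/myapp.service
  [Unit]
  Description=My app
  Wants=network-online.target
  After=network-online.target postgresql.service

  [Service]
  User=myapp
  WorkingDirectory=/opt/myapp
  ExecStart=/opt/myapp/bin/server --port 8080
  Restart=on-failure

  [Install]
  WantedBy=multi-user.target

  # then:
  systemctl daemon-reload && systemctl enable --now myapp.service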
I use Caprover to run about 26 services for personal projects on a Hetzner box. I like its simplicity. Worth it just for the one-click https cert management.
> I'm constantly reinventing solutions to problems that Kubernetes already solves
Another way to look at this is the Kubernetes created solutions to problems that were already solved at a lower scale level. Crontabs, http proxies, etc… were already solved at the individual server level. If you’re used to running large coordinated clusters, then yes — it can seem like you’re reinventing the wheel.
Systemd gets a lot of hate but it really solves a lot of problems. People really shouldn't dismiss it. I think the hate really happened because, when systemd started appearing on distros by default, people were upset they had to change.
Here's some cool stuff:
- containers
- machinectl: used for controlling:
  - nspawn: a more powerful chroot. This is often a better solution than docker. Super lightweight. Shares kernel
  - vmspawn: when nspawn isn't enough and you need full virtualization
- importctl: download, import, export your machines. Get the download features in {vm,n}spawn like we have with docker. There's a hub, but it's not very active
- homed/homectl: extends user management to make it easier to do things like encrypting home directories (different mounts), better control of permissions, and more
- mounts: forget fstab. Make it easy to auto mount and dismount drives or partitions. Can be access based, time, triggered by another unit (eg a spawn), sockets, or whatever
- boot: you can not only control boot but this is really what gives you access to starting and stopping services in the boot sequence.
- timers: forget cron. Cron can't wake your machine. Cron can't tell that a service didn't run because your machine was off. Cron won't give you fuzzy timing, or do more complicated things like wait for X minutes after boot if it's the third Sunday of the month and only if Y.service is running. Idk why you'd do that, but you can! (There's a minimal timer sketch after this list.)
- service units: these are your jobs. You can really control them in their capabilities. Lock them down so they can only do what they are meant to do.
- overrides: use `systemctl edit` to edit your configs. Creates an override config and you don't need to destroy the original. No longer that annoying task of finding the original config and for some reason you can't get it back even if reinstalling! Same with when the original config changes in an install, your override doesn't get touched!!
It's got a lot of stuff and it's (almost) all there already on your system! It's a bit annoying to learn, but it really isn't too bad if you really don't want to do anything too complicated. But in that case, it's not like there's a tool that doesn't require docs but allows you to do super complicated things.
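To make the timers bullet concrete, a minimal pair of units that replaces a nightly crontab entry (names and script path are placeholders):

  # backup.service
  [Unit]
  Description=Nightly backup

  [Service]
  Type=oneshot
  ExecStart=/usr/local/bin/backup.sh

  # backup.timer
  [Unit]
  Description=Run the backup every night

  [Timer]
  OnCalendar=*-*-* 03:00:00
  RandomizedDelaySec=15min   # fuzzy timing
  Persistent=true            # run at next boot if the machine was off at 03:00

  [Install]
  WantedBy=timers.target

  # enable with: systemctl enable --now backup.timer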
> Systemd gets a lot of hate but it really solves a lot of problems.
From my perspective, it got a lot of hate in its first few years (decade?), not because the project itself was bad -- on the contrary, it succeeded in spite of having loads of other issues, because it was so superior. The problem was the maintainer's attitude of wantonly breaking things that used to work just fine, without offering any suitable fixes.
I have an old comment somewhere with a big list. If you never felt the pain of systemd, it's either because you came late to the party, or because your needs always happened to overlap with the core maintainer's needs.
https://news.ycombinator.com/item?id=21897993
It didn't win on being superior [1] but because it was either systemd or you don't get to use GNOME 3.8. On more than one distro that was the reason for switching to systemd.
I will fully admit though that upstart was worse (which is an achievement), but the solution space was not at all settled.
[1] The systemd project tackles a lot of important problems, but the quality of the implementation and the experience of using it and working with it are not really good, especially the further you get from the simplest cookie-cutter services: systemd's handling of defaults is borked, the documentation for those cases maybe makes sense to its author, and whoever is the bright soul behind systemctl should kindly never make CLIs again (the worst example probably being systemctl show this-service-does-not-exist).
The only issue I'm having with systemd is that it's taking over the role of PID 1, with a binary produced from an uncountable SLOC, then doing even more song and dance to exec itself in-place on upgrades. Here's a PID 1 program that does 100% of all of its duties correctly, and nothing else:
#define _XOPEN_SOURCE 700
#include <signal.h>
#include <sys/wait.h>   /* for wait() */
#include <unistd.h>

int main() {
    sigset_t set;
    int status;

    /* Only meaningful as PID 1. */
    if (getpid() != 1) return 1;

    /* Block all signals; the parent then does nothing but reap children forever. */
    sigfillset(&set);
    sigprocmask(SIG_BLOCK, &set, 0);
    if (fork()) for (;;) wait(&status);

    /* Child: restore signals, start a new session, and hand off to /etc/rc. */
    sigprocmask(SIG_UNBLOCK, &set, 0);
    setsid();
    setpgid(0, 0);
    return execve("/etc/rc", (char *[]){ "rc", 0 }, (char *[]){ 0 });
}
If your init crashes, wouldn't this just start a loop where you can't do anything other than watch it loop? How would this be better than just panicking?
You can spawn systemd from there, and in case anything goes wrong with it, you won't get an instant kernel panic.
Systemd wants PID 1. Don't know if there are forks to disable that.
Timers are so much better than cron it's not even funny. Having managed Unix machines for decades, with tens of thousands of vital cron entries across thousands of machines, the things that can and do go wrong are painful, especially when you include more esoteric systems. The fact that timers can be synced up, backed up, and updated as individual files is alone a massive advantage.
Sure. It worked for _50 years_ just fine, but obviously it is very wrong and should be replaced with - of course - systemd.
Some of these things that "worked for 50 years" have also actually sucked for 50 years. Look at C strings and C error handling. They've "worked", until you hold them slightly wrong and cause the entire world to start leaking sensitive data in a lesser-used code path.
I'd say the systemd interface is worse¹, but cron was never really good, and people tended to replace it very often.
1 - Really, what are the people upthread gloating about? That's the bare minimum all of the cron alternatives did. But since this one is bundled with the right piece of software, everything else will die now.
As the memes would say: the future is now, old man.
never have your filesystem mounted at the right time, because their automount rules are convoluted and sometimes just plain don't work despite being 1:1 according to the documentation.
I have this server running a docker container with a specific application. And it writes to a specific filesystem (properly bind-mounted inside the container, of course).
Sometimes docker starts before the filesystem is mounted.
I know systemd can be taught about this, but I haven't bothered, because every time I have to do something in systemd, I have to read some nasty obscure doc. I need to know how and where the config should go.
I did manage to disable journalctl at least, because grepping through simple rotated log files is a billion times faster than journalctl. See my comment and the whole thread https://github.com/systemd/systemd/issues/2460#issuecomment-...
I like the concept of systemd. Not the implementation and its leader.
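For what it's worth, the mount-ordering part is usually a two-line drop-in rather than a rewrite; a sketch with a placeholder path:

  # systemctl edit docker.service   (creates an override; the original unit stays untouched)
  [Unit]
  RequiresMountsFor=/srv/appdata

That tells systemd to order docker.service after the mount unit for that path and pull it in, which is exactly the "docker starts before the filesystem" case.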
Systemd is great if your use case is Linux on a modern Desktop or Server, or something which resembles that. If you want to do anything else that doesn't fit into the project view of what you should be doing, you will be met with scorn and resistance (ask the musl team...).
What isn't great, and where the hate comes from, is that it makes the life of a distribution or upstream super easy, at the expense of adding a (slowly growing) complexity at the lowest levels of your system that--depending your perspective--does not follow the "unix way": journalctl, timedatectl, dependencies on/replacing dbus, etc. etc. It's also somehow been conflated with Poettering (he can be grating in his correctness), as well as the other projects Poettering works on (Avahi, Pulse Audio).
If all you want to do is coordinate some processes and ensure they run in the right order with automatic activation, etc. it's certainly capable and, I'd argue, the right level of tool as compared to something like k8s or docker.
My only bugbear with it is that there's no equivalent to the old timeout default you could set (note that doas explicitly said they won't implement this too). The workaround is to run it in `sudo -i` fashion and not put a command afterwards which is reasonable enough even though it worked hard against my muscle memory + copypaste commands when switching over.
> Systemd gets a lot of hate
I'd argue it doesn't and is simply another victim of loud internet minority syndrome.
It's just a generic name at this point, basically all associated with init and service units and none of the other stuff.
I was dismayed at having to go from simple clean linear BSD 4.3 / SunOS 4.1.3 era /etc/rc /etc/rc.local init scripts to that tangled rat king abomination of symbolic links and rc.d sub-directories and run levels that is the SysV / Solaris Rube Goldberg device. So people who want to go back to the "good old days" of that AT&T claptrap sound insane to me. Even Slowlaris moved on to SMF.
Oh yes, please add more! I'd love to see what others do because frankly, sometimes it feels like we're talking about forbidden magic or something lol
And honestly, I think the one thing systemd is really missing is... people talking about it. That's realistically the best way to get more documentation and spread all the cool tricks that everyone finds.
> I'd argue it doesn't
I definitely agree on loud minority, but they're visible enough that anytime systemd is brought up you can't avoid them. But then again, lots of people have much more passion about their opinions than passion about understanding the thing they opine about.
Of course. We suffered with sudo for a couple of decades already! Obviously it's wrong and outdated and has to be replaced with whatever LP says is the new norm.
https://man.archlinux.org/man/run0.1.en
> homed/homectl: extends user management to make it
impossible to have a clear picture of what's up with your home dir, where it is now located, how to get access to it, or whether it will suddenly disappear. Obviously, plain /home worked for like five decades and therefore absolutely has to be replaced.
> Obviously, plain /home worked for like five decades and therefore absolutely has to be replaced.
Five decades ago, people didn't have laptops that they want to put on sleep and can get stolen. Actually, five decades ago, the rare people using a computer logged into remote, shared computers. Five decades ago, you didn't get hacked from the internet.
Today, people mostly each have their computer, and one session for themselves in it (when they have a computer at all)
I have not looked into homed yet, needs are very different from before. "It worked five decades ago" just isn't very convincing.
It'd be better to understand what homed tries to address, and argue why it does it wrong or why the concerns are not right.
You might not like it but there usually are legitimate reasons why systemd changes things, they don't do it because they like breaking stuff.
Learning curve is not the annoying part. It is kind of expected and fine.
systemd is annoying in parts that are so well described all over the internet that it makes zero sense to repeat them here. I am just venting, and that comes from experience.
never boot into the network reliably, because under systemd you have no control over the sequence.
BTW, I think that's one of the main pros and one of the strongest features of systemd, but it is also what makes it unreliable and boot unreproducible if you live outside of the very default Ubuntu instance and such.
It has a 600s timeout. You can reduce that if you want it to fail faster. But that doesn't seem like a problem with systemd, that seems like a problem with your network connection.
I use Arch btw
What does this mean? Your machine boots and sometimes doesn't have network? If your boot is unreliable, isn't it because some service you try to boot has a dependency that's not declared in its unit file?
> If you live outside of the very default Ubuntu instance and such.
If you want that bare bones of a system I'd suggest using a minimal distribution. But honestly, I'm happy that I can wrap up servers and services into chroot jails with nspawn. Even when I'm not doing much, it makes it much easier to import, export, and limit capabilities
Simple example is I can have a duplicate of the "machine" running my server and spin it up (or have it already spun up) and take over if something goes wrong. Makes for a much more seamless experience.
It's a bit tricky at first and there aren't a lot of good docs, but honestly I've been really liking it. I dropped docker in favor of it. Gives me a lot better control and flexibility.
I've run my homelab with podman-systemd (quadlet) for a while and every time I investigate a new k8s variant it just isn't worth the extra hassle. As part of my ancient Ansible playbook I just pre-pull images and drop unit files in the right place.
I even run my entire Voron 3D printer stack with podman-systemd so I can update and rollback all the components at once, although I'm looking at switching to mkosi and systemd-sysupdate and just update/rollback the entire disk image at once.
The main issues are:
1. A lot of people just distribute docker-compose files, so you have to convert them to systemd units.
2. A lot of docker images have a variety of complexities around user/privilege setup that you don't need with podman. Sometimes you need to do annoying userns idmapping, especially if a container refuses to run as root and/or switches to another user.
Overall, though, it's way less complicated than any k8s (or k8s variant) setup. It's also nice to have everything integrated into systemd and journald instead of being split in two places.
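For readers who haven't seen a Quadlet file, a typical converted compose service ends up roughly like this (image, port, and path are just examples):

  # /etc/containers/systemd/myapp.container
  [Unit]
  Description=My app container

  [Container]
  Image=docker.io/library/nginx:stable
  PublishPort=8080:80
  Volume=/srv/myapp:/usr/share/nginx/html:Z
  AutoUpdate=registry

  [Service]
  Restart=always

  [Install]
  WantedBy=multi-user.target

After a systemctl daemon-reload, systemd generates a myapp.service you can start, stop, and follow in journald like any other unit.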
Nice! I’ve been using a similar approach for years with my own setup: https://github.com/Mati365/hetzner-podman-bunjs-deploy. It’s built around Podman and systemd, and honestly, nothing has broken in all that time. Super stable, super simple. Just drop your units and go. Rock solid.
It works pretty well. I've also found that some AI models are pretty decent at it too. Obviously need to fix up some of the output but the tooling for conversion is much better than when I started.
Just a single (or bunch of independent) 'node'(s) though right?
To me podman/systems/quadlet could just as well be an implementation detail of how a k8s node runs a container (the.. CRI I suppose, in the lingo?) - it's not replacing the orchestration/scheduling abstraction over nodes that k8s provides. The 'here are my machines capable of running podman-systemd files, here is the spec I want to run, go'.
My servers are pets not cattle. They are heterogeneous and collected over the years. If I used k8s I'd end up having to mostly pin services to a specific machine anyway. I don't even have a rack: it's just a variety of box shapes stacked on a wire shelf.
At some point I do want to create a purpose built rack for my network equipment and maybe setup some homogenous servers for running k8s or whatever, but it's not a high priority.
I like the idea of podman-systemd being an impl detail of some higher level orchestration. Recent versions of podman support template units now, so in theory you wouldn't even need to create duplicate units to run more than one service.
Same experience. My workflow is to run the container from a podman run command, check it runs correctly, use podlet to create a base container file, then edit the container file (notably with volumes and networks in other quadlet files) and done (theoretically).
I believe the podman-compose project is still actively maintained and could be a nice alternative to docker-compose. But podman's interface with systemd is so enjoyable.
I don't know if podman-compose is actively developed, but it is unfortunately not a good alternative for docker-compose. It doesn't handle the full feature set of the compose spec and it tends to catch you by surprise sometimes. But the good news is, the new docker-compose (V2) can talk to podman just fine.
This is the way! Quadlets are such a nice way to run containers, really a set-and-forget experience. No need to install extra packages, at least on Fedora or Rocky Linux. I should do a write-up of this some time...
Yep! My experience on Ubuntu 24.04 LTS was that I needed to create a system user to reserve the subuids / subgids for Podman (defaults to looking for a `containers` user):
useradd --comment "Helper user to reserve subuids and subgids for Podman" \
    --no-create-home \
    --shell /usr/sbin/nologin \
    containers
I also found this blog post about the different `UserNS` options https://www.redhat.com/en/blog/rootless-podman-user-namespac... very helpful. In the end it seems that using `UserNS=auto` for rootful containers (with appropriate system security settings like private devices, etc) is easier and more secure than trying to get rootless containers running in a systemd user slice (Dan Walsh said it on a GitHub issue but I can't find it now).
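In Quadlet terms, that rootful-plus-UserNS=auto setup is just a few lines; a fragment of a .container file with a placeholder image:

  [Container]
  Image=ghcr.io/example/app:latest
  UserNS=auto

  [Service]
  PrivateDevices=true
  NoNewPrivileges=true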
This was touched on at the end of the article, but the author hadn't yet explored it. Thanks for the link.
> Of course, as my luck would have it, Podman integration with systemd appears to be deprecated already and they're now talking about defining containers in "Quadlet" files, whatever those are. I guess that will be something to learn some other time.
I came to the comments to make sure someone mentioned quadlets. Just last week, I migrated my home server from docker compose to rootless podman quadlets. The transition was challenging, but I am very happy with the result.
Seems very cool but can it do all one can do with compose? In other words, declare networks, multiple services, volumes, config(maps) and labels for e.g. traefik all in one single file?
To me that's why compose is neat. It's simple. Works well with rootless podman also.
I created skate (https://github.com/skateco/skate) to be basically this but multihost and support k8s manifests. Under the hood it’s podman and systemd
This is a great approach which resonates with me a lot. It's really frustrating that there is no simple way to run a multi-host Docker/Podman (Docker Swarm is abandonware since 2019 unfortunately).
However, in my opinion K8s has the worst API and UX possible. I find Docker Compose spec much more user friendly. So I'm experimenting with a multi-host docker-compose at the moment: https://github.com/psviderski/uncloud
Wouldn’t argue with you about the k8s UX. Since it has all the ground concepts (service, cronjob, etc.), it required less effort than rolling yet another syntax.
We went back to just packaging debs and running them directly on ec2 instances with systemd. no more containers. Put the instances in an autoscaling group with an ALB. A simple ansible-pull installs the debs on-boot.
really raw-dogging it here but I got tired of endless json-inside-yaml-inside-hcl. ansible yaml is about all I want to deal with at this point.
I also really like in this approach that if there is a bug in a common library that I use, all I have to do is `apt full-upgrade` and restart my running processes, and I am protected. No rebuilding anything, or figuring out how to update some library buried deep in a container that I may (or may not) have created.
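The on-boot step is essentially one command (the repo URL and playbook name here are made up):

  # checks out the repo locally and runs the playbook against localhost
  ansible-pull -U https://git.example.com/infra.git -i localhost, site.yml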
Yes, I also have gone this route for a very simple application. Systemd was actually delightful: using a system-assigned user account to run the service with the least amount of privileges is pretty cool. Also, cgroup support really makes it nice to run many different services on one VPS.
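The least-privilege part maps onto a handful of [Service] directives; a sketch (the service name is hypothetical):

  [Service]
  DynamicUser=yes          # or User=myapp with a pre-created system account
  NoNewPrivileges=true
  ProtectSystem=strict
  ProtectHome=true
  PrivateTmp=true
  StateDirectory=myapp     # gives the service a writable /var/lib/myapp
  MemoryMax=512M           # cgroup limit, per the point above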
The article is more than a year old; systemd now even has a specialized, officially supported OS distro for the immutable workflow, namely ParticleOS [1][2].
[1] ParticleOS: https://github.com/systemd/particleos
[2] Systemd ParticleOS: https://news.ycombinator.com/item?id=43649088
It's basically just this command once you have compose.yaml: `docker compose up -d --pull always`
And then the CI setup is this:
The benefit here is that it is simple and also works on your development machine. Of course, if the side goal is to also do something fun and cool and learn, then Quadlet/k8s/systemd are great options too!
Also, another pro tip: set up your ~/.ssh/config so that you don't need the user@ part in any ssh invocations. It's quite practical when working in a team; you can just copy-paste commands between docs and each other.
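Something like this (the alias and address are placeholders):

  # ~/.ssh/config
  Host myserver
      HostName 203.0.113.10
      User deploy
      IdentityFile ~/.ssh/id_ed25519

After that, plain `ssh myserver` works, and so does pointing Docker at ssh://myserver.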
Do what the sibling comment says, or set the DOCKER_HOST environment variable. Watch out: your local environment will be used in compose file interpolation!
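A sketch of the remote-compose invocation, using the host alias from the ssh config tip above:

  export DOCKER_HOST=ssh://myserver
  docker compose up -d --pull always   # runs against the remote daemon
  # note: variables in compose.yaml are still interpolated from this local shell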