drivenextfunc · 8 months ago
I share the author's sentiment completely. At my day job, I manage multiple Kubernetes clusters running dozens of microservices with relative ease. However, for my hobby projects—which generate no revenue and thus have minimal budgets—I find myself in a frustrating position: desperately wanting to use Kubernetes but unable to due to its resource requirements. Kubernetes is simply too resource-intensive to run on a $10/month VPS with just 1 shared vCPU and 2GB of RAM.

This limitation creates numerous headaches. Instead of Deployments, I'm stuck with manual docker compose up/down commands over SSH. Rather than using Ingress, I have to rely on Traefik's container discovery functionality. Recently, I even wrote a small script to manage crontab idempotently because I can't use CronJobs. I'm constantly reinventing solutions to problems that Kubernetes already solves—just less efficiently.
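
(The usual trick for idempotent crontab management, sketched here with hypothetical marker names, is to regenerate a marked section on every deploy:)

    # delete the old marked block, append the current one, reinstall atomically
    (crontab -l 2>/dev/null | sed '/# BEGIN hobby-project/,/# END hobby-project/d'; \
      printf '%s\n' '# BEGIN hobby-project' \
                    '*/5 * * * * /usr/local/bin/sync-job' \
                    '# END hobby-project') | crontab -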

What I really wish for is a lightweight alternative offering a Kubernetes-compatible API that runs well on inexpensive VPS instances. The gap between enterprise-grade container orchestration and affordable hobby hosting remains frustratingly wide.

figmert · 8 months ago
> What I really wish for is a lightweight alternative offering a Kubernetes-compatible API that runs well on inexpensive VPS instances. The gap between enterprise-grade container orchestration and affordable hobby hosting remains frustratingly wide.

Depending on how much of the Kube API you need, Podman is that. It can generate containers and pods from Kubernetes manifests [0]. Kind of works like docker compose but with Kubernetes manifests.

This even works with systemd units, similar to how it's outlined in the article.

Podman also supports most (all?) of the Docker API, so docker compose works too, and you can also connect to remote sockets over SSH etc. to do things.

[0] https://docs.podman.io/en/latest/markdown/podman-kube-play.1...

[1] https://docs.podman.io/en/latest/markdown/podman-systemd.uni...
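
As a rough sketch of what that looks like (the manifest is a made-up minimal example, not taken from the docs):

    # app.yaml: a plain Kubernetes Pod manifest
    apiVersion: v1
    kind: Pod
    metadata:
      name: web
    spec:
      containers:
        - name: nginx
          image: docker.io/library/nginx:latest
          ports:
            - containerPort: 80
              hostPort: 8080

Then:

    podman kube play app.yaml   # create the pod from the manifest
    podman kube down app.yaml   # tear it down again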

jillyboel · 8 months ago
The docs don't make it clear: can it do "zero downtime" deployments? Meaning it first creates the new pod, waits for it to be healthy using the defined health checks, and then removes the old one? Somehow integrating this with a service/ingress/whatever so network traffic only goes to the healthy one?
sciencesama · 8 months ago
You can use Firecracker!
sweettea · 8 months ago
Have you seen k0s or k3s? Lots of stories about folks using these to great success on a tiny scale, e.g. https://news.ycombinator.com/item?id=43593269
rendaw · 8 months ago
I tried k3s, but even on an immutable system, dealing with charts and all the other Kubernetes stuff adds a new layer of mutability, and hence maintenance, update, and manual management steps that only really make sense on a cluster, not a single server.

If you're planning to eventually move to a cluster or you're trying to learn k8s, maybe, but if you're just hosting a single node project it's a massive effort, just because that's not what k8s is for.

acheong08 · 8 months ago
I use k3s. With more than one master node, it's still a resource hog, and when one master node goes down, all of them tend to follow. 2GB of RAM is not enough, especially if you also use Longhorn for distributed storage. A single master node is fine and I haven't had it crash on me yet. In terms of scale, I'm able to use Raspberry Pis and such as agents, so I only have to rent a single €4/month VPS.
horsawlarway · 8 months ago
I'm laughing because I clicked your link thinking I agreed and had posted similar things and it's my comment.

Still on k3s, still love it.

My cluster is currently hosting 94 pods across 55 deployments. Using 500m cpu (half a core) average, spiking to 3cores under moderate load, and 25gb ram. Biggest ram hog is Jellyfin (which appears to have a slow leak, and gets restarted when it hits 16gb, although it's currently streaming to 5 family members).

The cluster is exclusively recycled old hardware (4 machines), mostly old gaming machines. The most recent is 5 years old, the oldest is nearing 15 years old.

The nodes are bare Arch Linux installs - which are wonderfully slim, easy to configure, and light on resources.

It burns 450 watts on average, which is higher than I'd like, but mostly because I have Jellyfin and whisper/willow (self-hosted home automation via voice control) as GPU-accelerated loads - so I'm running an old Nvidia 1060 and 2080.

Everything is plain old yaml, I explicitly avoid absolutely anything more complicated (including things like helm and kustomize - with very few exceptions) and it's... wonderful.

It's by far the least amount of "dev-ops" I've had to do for self hosting. Things work, it's simple; spinning up a new service is a new folder and 3 new yaml files (0-namespace.yaml, 1-deployment.yaml, 2-ingress.yaml) which are just copied and edited each time.
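
To make that concrete, here's a minimal sketch of such a per-service folder (names and image are hypothetical, and the Service object the Ingress points at is omitted for brevity):

    # 0-namespace.yaml
    apiVersion: v1
    kind: Namespace
    metadata: { name: myapp }

    # 1-deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata: { name: myapp, namespace: myapp }
    spec:
      replicas: 1
      selector: { matchLabels: { app: myapp } }
      template:
        metadata: { labels: { app: myapp } }
        spec:
          containers:
            - { name: myapp, image: registry.local/myapp:latest, ports: [{ containerPort: 8080 }] }

    # 2-ingress.yaml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata: { name: myapp, namespace: myapp }
    spec:
      rules:
        - host: myapp.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend: { service: { name: myapp, port: { number: 8080 } } }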

Any three machines can go down and the cluster stays up (metalLB is really, really cool - ARP/NDP announcements mean any machine can announce as the primary load balancer and take the configured IP). Sometimes services take a minute to reallocate (and jellyfin gets priority over willow if I lose a gpu, and can also deploy with cpu-only transcoding as a fallback), and I haven't tried to be clever getting 100% uptime because I mostly don't care. If I'm down for 3 minutes, it's not the end of the world. I have a couple of commercial services in there, but it's free hosting for family businesses, they can also afford to be down an hour or two a year.
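
For reference, the MetalLB layer 2 piece described above boils down to two tiny resources (the address range is a made-up example):

    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata: { name: home-pool, namespace: metallb-system }
    spec:
      addresses: ["192.168.1.240-192.168.1.250"]
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata: { name: home-l2, namespace: metallb-system }
    spec:
      ipAddressPools: [home-pool]

Any node can then answer ARP/NDP for a LoadBalancer Service IP from the pool, which is what lets another machine take over the address when one goes down.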

Overall - I'm not going back. It's great. Strongly, STRONGLY recommend k3s over microk8s. Definitely don't want to go back to single machine wrangling. The learning curve is steeper for this... but man do I spend very little time thinking about it at this point.

I've streamed video from it as far away as literally the other side of the world (GA, USA -> Taiwan). Amazon/Google/Microsoft have everyone convinced you can't host things yourself. Even for tiny projects people default to VPS's on a cloud. It's a ripoff. Put an old laptop in your basement - faster machine for free. At GCP prices... I have 30k/year worth of cloud compute in my basement, because GCP is a god damned rip off. My costs are $32/month in power, and a network connection I already have to have, and it's replaced hundreds of dollars/month in subscription costs.

For personal use-cases... basement cloud is where it's at.

mikepurvis · 8 months ago
Or microk8s. I'm curious what it is about k8s that is sucking up all these resources. Surely the control plane is mostly idle when you aren't doing things with it?
Seattle3503 · 8 months ago
How hard is it to host a Postgres server on one node and access it from another?
Alupis · 8 months ago
> Kubernetes is simply too resource-intensive to run on a $10/month VPS with just 1 shared vCPU and 2GB of RAM

I hate sounding like an Oracle shill, but Oracle Cloud's Free Tier is hands-down the most generous. It can support running quite a bit, including a small k8s cluster[1]. Their k8s control plane service is also free.

They'll give you 4 x ARM64 cores and 24GB of ram for free. You can split this into 1-4 nodes, depending on what you want.

[1] https://www.oracle.com/cloud/free/

lemoncucumber · 8 months ago
One thing to watch out for is that you pick your "home region" when you create your account. This cannot be changed later, and your "Always Free" instances can only be created in your home region (the non-free tier doesn't have that restriction).

So choose your home region carefully. Also, note that some regions have multiple availability domains (OCI-speak for availability zones) but some only have one AD. Though if you're only running one free instance then ADs don't really matter.

waveringana · 8 months ago
The catch is: no commercial usage, and half the time you try to spin up an instance it'll tell you there's no room left.
rfl890 · 8 months ago
There are tons of horror stories about OCI's free tier (check r/oraclecloud on reddit, tl;dr: your account may get terminated at any moment and you will lose access to all data with no recovery options). I wouldn't suggest putting anything serious on it.
mulakosag · 8 months ago
I recently wrote a guide on how to create a free 3-node cluster in Oracle Cloud: https://macgain.net/posts/free-k8-cluster . This guide currently uses kubeadm to create a 3-node (1 control plane, 2 worker nodes) cluster.

nvarsj · 8 months ago
Just do it like the olden days, use ansible or similar.

I have a couple dedicated servers I fully manage with ansible. It's docker compose on steroids. Use traefik and labeling to handle reverse proxy and tls certs in a generic way, with authelia as simple auth provider. There's a lot of example projects on github.

A weekend of setup and you have a pretty easy to manage system.
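
As a sketch of the labeling approach (hostname and resolver name are placeholders; the exact label keys are in the Traefik docs):

    # docker-compose.yml snippet for one service behind Traefik
    services:
      whoami:
        image: traefik/whoami
        labels:
          - traefik.enable=true
          - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
          - traefik.http.routers.whoami.entrypoints=websecure
          - traefik.http.routers.whoami.tls.certresolver=letsencrypt

Traefik watches the Docker socket and picks the container up automatically; the certresolver has to match one defined in Traefik's static configuration.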

nicce · 8 months ago
What is the advantage of Traefik over old-school Nginx?
thenewwazoo · 8 months ago
> I'm constantly reinventing solutions to problems that Kubernetes already solves—just less efficiently.

But you've already said yourself that the cost of using K8s is too high. In one sense, you're solving those problems more efficiently; it just depends on the axis you use to measure things.

randallsquared · 8 months ago
The original statement is ambiguous. I read it as "problems that k8s already solves -- but k8s is less efficient, so can't be used".
AdrianB1 · 8 months ago
That picture with the almost-empty truck seems to be the situation he describes. He wants the 18-wheeler, but it is too expensive for just a suitcase.
MyOutfitIsVague · 8 months ago
> Kubernetes is simply too resource-intensive to run on a $10/month VPS with just 1 shared vCPU and 2GB of RAM.

That's paying more for far fewer resources than Hetzner offers. I'm paying about $8 a month for 4 vCPUs and 8GB of RAM: https://www.hetzner.com/cloud

Note that the really affordable ARM servers are Germany-only, so if you're in the US you'll have to deal with higher latency to save that money, but I think it's worth it.

fhcbix · 8 months ago
I recently set up an arm64 VPS at netcup: https://www.netcup.com/en/server/arm-server Got it with no location fee (and 2x storage) during the easter sale but normally US is the cheapest.
andrewmcwatters · 8 months ago
Thank you for sharing this. Do you have a referral link we can use to give you a little credit for informing us?
rollcat · 8 months ago
I've been using Docker swarm for internal & lightweight production workloads for 5+ years with zero issues. FD: it's a single node cluster on a reasonably powerful machine, but if anything, it's over-specced for what it does.

Which I guess makes it more than good enough for hobby stuff - I'm playing with a multi-node cluster in my homelab and it's also working fine.

Taikonerd · 8 months ago
I think Docker Swarm makes a lot of sense for situations where K8s is too heavyweight. "Heavyweight" either in resource consumption, or just being too complex for a simple use case.
osigurdson · 8 months ago
Podman is a fairly nice bridge. If you are familiar with Kubernetes yaml, it is relatively easy to do docker-compose like things except using more familiar (for me) K8s yaml.

In terms of the cloud, I think Digital Ocean costs about $12 / month for their control plane + a small instance.

404mm · 8 months ago
I found k3s to be a happy medium. It feels very lean and works well even on a Pi, and scales OK to a few-node cluster if needed. You can even host the database on a remote MySQL server, if local SQLite is too much IO.
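
That's the --datastore-endpoint flag; roughly (credentials and host are made up):

    k3s server \
      --datastore-endpoint='mysql://user:pass@tcp(db.example.com:3306)/k3s'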
singron · 8 months ago
NixOS works really well for me. I used to write these kinds of idempotent scripts too but they are usually irrelevant in NixOS where that's the default behavior.
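
For anyone who hasn't seen it: the idempotency falls out of the config being declarative. A minimal sketch of a service definition (options abbreviated):

    # /etc/nixos/configuration.nix
    { config, pkgs, ... }: {
      services.nginx = {
        enable = true;
        virtualHosts."example.com".locations."/".proxyPass =
          "http://127.0.0.1:8080";
      };
      # applied atomically with `nixos-rebuild switch`, rolled back just as easily
    }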
lewo · 8 months ago
And regarding this part of the article

> Particularly with GitOps and Flux, making changes was a breeze.

I'm writing comin [1], which is GitOps for NixOS machines: you Git push your changes and your machines fetch and deploy them automatically.

[1] https://github.com/nlewo/comin

czhu12 · 8 months ago
This is exactly why I built https://canine.sh -- basically for indie hackers to have the full experience of Heroku with the power and portability of Kubernetes.

For single server setups, it uses k3s, which takes up ~200MB of memory on your host machine. It's not ideal, but the pain of trying to wrangle Docker deployments, and the cheapness of Hetzner, made it worth it.

satvikpendem · 8 months ago
How does it compare to Coolify and Dokploy?
artdigital · 8 months ago
I run my private stuff on a hosted vultr k8s cluster with 1 node for $10-$20 a month. All my hobby stuff is running on that "personal cluster" and it is that perfect sweetspot for me that you're talking about

I don't use ingresses or loadbalancers because those cost extra, and either have the services exposed through tailscale (with tailscale operator) for stuff I only use myself, or through cloudflare argo tunnels for stuff I want internet accessible

(Once a project graduates and becomes more serious, I migrate the container off this cluster and into a proper container runner)

eigengrau · 8 months ago
It’s been a couple of years since I last used it, but if you want container orchestration with a relatively small footprint, maybe HashiCorp Nomad (perhaps in conjunction with Consul and Traefik) is still an option. These are all single-binary tools. I did not personally run them on 2G mem VPSes, but it might still be worthwhile for you to take a look.

It looks like Nomad has a driver to run software via isolated fork/exec, as well, in addition to Docker containers.

BiteCode_dev · 8 months ago
The solution to this is to not solve, on a personal project, all the problems a billion-dollar tech company has.

Let it not be idempotent. Let it crash sometimes.

We lived without k8s for years and the web was OK. Your users will survive.

bigstrat2003 · 8 months ago
Yeah, unless you're doing k8s for the purpose of learning job skills, it's way overkill. Just run a container with docker, or a web server outside a container if it's a website. Way easier and it will work just fine.
pachevjoseph · 8 months ago
I’ve been using https://www.coolify.io/ self hosted. It’s a good middle ground between full blown k8s and systemd services. I have a home lab where I host most of my hobby projects though. So take that into account. You can also use their cloud offering to connect to VPSs
alex5207 · 8 months ago
> I'm stuck with manual docker compose up/down commands over SSH

Out of curiosity, what is so bad about this for smaller projects?

JamesSwift · 8 months ago
Just go with a cloud provider that offers free control plane and shove a bunch of side projects into 1 node. I end up around $50 a month on GCP (was a bit cheaper at DO) once you include things like private docker registry etc.

The marginal cost of an additional project on the cluster is essentially $0

aequitas · 8 months ago
I've run K3s on a couple of Raspberry Pis as a homelab in the past. It's lightweight and ran nicely for a few years, but even so, one Pi was always dedicated as the controller, which seemed like a waste.

Recently I switched my entire setup (few Pi's, NAS and VM's) to NixOS. With Colmena[0] I can manage/update all hosts from one directory with a single command.

Kubernetes was a lot of fun, especially the declarative nature of it. But for small setups, where you are still managing the plumbing (OS, networking, firewall, hardening, etc) yourself, you still need some configuration management. Might as well put the rest of your stuff in there also.

[0] https://colmena.cli.rs/unstable/
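
The hive is just Nix as well; roughly (host names and options are illustrative):

    # hive.nix
    {
      meta.nixpkgs = import <nixpkgs> { };

      pi-1 = { pkgs, ... }: {
        deployment.targetHost = "pi-1.lan";
        services.openssh.enable = true;
      };

      nas = { pkgs, ... }: {
        deployment.targetHost = "nas.lan";
      };
    }

Then `colmena apply` builds and pushes all hosts in one go.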

CoolCold · 8 months ago
$6/month will likely bring you peace of mind: Netcup VPS 1000 ARM G11

    6 vCore (ARM64)
    8 GB RAM
    256 GB NVMe

Jipazgqmnm · 8 months ago
They also have regular promotions that offer e.g. double the disk space.

There you get

    6 vCore (ARM64)
    8 GB RAM
    512 GB NVMe
for $6/month, traffic inclusive. You can choose between "6 vCore ARM64, 8 GB RAM" and "4 vCore x86, 8 GB ECC RAM" for the same price. And much more, of course.

https://www.netcup.com/en/server/vps

turtlebits · 8 months ago
I'm a cheapskate too, but at some point, the time you spend researching cheap hosting, signing up and getting deployed is not worth the hassle of paying a few more $ on bigger boxes.
nullpoint420 · 8 months ago
Have you tried NixOS? I feel like it solves the functional aspect you're looking for.
AdrianB1 · 8 months ago
I am curious why your no-revenue projects need the complexity, features, and benefits of something like Kubernetes. Why can't you just do it the archaic way: compile your app, copy the files to a folder, run it there, and never touch it for the next 5 years? If it's a dev environment with many changes, it's on a local computer, not on a VPS, I guess. Just curious by nature, I am.
lenerdenator · 8 months ago
The thing is, most of those enterprise-grade container orchestration setups probably don't need k8s either.

The more I look into it, the more I think of k8s as a way to "move to micro services" without actually moving to micro services. Loosely coupled micro services shouldn't need that level of coordination if they're truly loosely coupled.

ClumsyPilot · 8 months ago
> Kubernetes is simply too resource-intensive to run on a $10/month VPS with just 1 shared vCPU and 2GB of RAM

To put this in perspective, that’s less compute than a phone released in 2013, 12 years ago, the Samsung Galaxy S4. To find this level of performance in a computer, we have to go back more than a decade.

The main issue is that Kubernetes has created a good API and primitives for managing cloud stuff, while managing a single server is still kinda crap despite decades of effort.

I had K3s on my server, but replaced it with Docker + Traefik + Portainer - it's not great, but less idle CPU use and fewer moving parts.

huksley · 8 months ago
I believe that Kubernetes is something you want to use only if you have 1+ full-time SRE on your team. I actually got tired of the complexity of Kubernetes, AWS ECS, and Docker as well, and just built a tool to deploy apps natively on the host. What's wrong with using Linux native primitives: systemd, crontab, PostgreSQL or Redis native packages? Those work as intended; you don't need them in a container.
investa · 8 months ago
SSH up/down can be scripted.

Or maybe look into Kamal?

Or use Digital Ocean's app service. Git integration, cheap, just run a container. But get your Postgres from a cheaper VC-funded shop :)

vrosas · 8 months ago
Why not just use something like Cloud Run? If you're only running a microVM, deploying it there will probably be at or near free.
jdsleppy · 8 months ago
I really like `DOCKER_HOST=ssh://... docker compose up -d`, what do you miss about Deployments?
daitangio · 8 months ago
I developed a tiny wrapper around docker compose that works for my use case: https://github.com/daitangio/misterio

It can manage multiple machines with just SSH access and a Docker install.

byrnedo · 8 months ago
Please try https://github.com/skateco/skate, this is pretty much the exact same reason why I built it!
kartikarti · 8 months ago
Virtual Kubelet is one step forward towards Kubernetes as an API

https://github.com/virtual-kubelet/virtual-kubelet

parliament32 · 8 months ago
Why not minikube or one of the other resource-constrained k8s variants?

https://minikube.sigs.k8s.io/

jjwiseman · 8 months ago
I use Caprover to run about 26 services for personal projects on a Hetzner box. I like its simplicity. Worth it just for the one-click https cert management.
SEJeff · 8 months ago
Have you tried k3s? I think it would run on a tiny VPS like that, and it's a full stack. Instead of etcd it has SQLite embedded.
mbreese · 8 months ago
> I'm constantly reinventing solutions to problems that Kubernetes already solves

Another way to look at this is that Kubernetes created solutions to problems that were already solved at a lower scale. Crontabs, HTTP proxies, etc. were already solved at the individual server level. If you're used to running large coordinated clusters, then yes — it can seem like you're reinventing the wheel.

melodyogonna · 8 months ago
For $10 you can buy a VPS with a lot more resources than that on both Contabo and OVH.
hkon · 8 months ago
I've used caprover a bunch
rcarmo · 8 months ago
What about Portainer? I deploy my compose files via git using it.
godelski · 8 months ago
Systemd gets a lot of hate, but it really solves a lot of problems. People really shouldn't dismiss it. I think the hate really happened because, when systemd started appearing on distros by default, people were upset they had to change.

Here's some cool stuff:

  - containers

    - machinectl: used for controlling:

      - nspawn: a more powerful chroot. This is often a better solution than docker. Super lightweight. Shares kernel

      - vmspawn: when nspawn isn't enough and you need full virtualization

    - importctl: download, import, export your machines. Get the download features in {vm,n}spawn like we have with docker. There's a hub, but it's not very active

  - homed/homectl: extends user management to make it easier to do things like encrypting home directories (different mounts), better control of permissions, and more

  - mounts: forget fstab. Make it easy to auto mount and dismount drives or partitions. Can be access based, time, triggered by another unit (eg a spawn), sockets, or whatever

  - boot: you can not only control boot, but this is really what gives you access to starting and stopping services in the boot sequence.

  - timers: forget cron. Cron can't wake your machine. Cron can't tell a service didn't run because your machine was off. Cron won't give you fuzzy timing, do more complicated things like wait for X minutes after boot if it's the third Sunday of the month and only if Y.service is running. Idk why you'd do that, but you can!

  - service units: these are your jobs. You can really control them in their capabilities. Lock them down so they can only do what they are meant to do.

    - overrides: use `systemctl edit` to edit your configs. Creates an override config and you don't need to destroy the original. No longer that annoying task of finding the original config and for some reason you can't get it back even if reinstalling! Same with when the original config changes in an install, your override doesn't get touched!!
It's got a lot of stuff and it's (almost) all already there on your system! It's a bit annoying to learn, but it really isn't too bad if you don't want to do anything too complicated. And in that case, it's not like there's any tool that lets you do super complicated things without reading docs.
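
To make the timers and service-lockdown points concrete, here's a rough sketch of a service/timer pair (unit names and paths are hypothetical):

    # /etc/systemd/system/backup.service
    [Unit]
    Description=Nightly backup

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/backup.sh
    # lock the job down to only what it needs
    DynamicUser=yes
    ProtectSystem=strict
    PrivateTmp=yes
    ReadWritePaths=/var/backups

    # /etc/systemd/system/backup.timer
    [Timer]
    OnCalendar=*-*-* 02:00:00
    Persistent=true          # catch up if the machine was off, which cron can't do
    RandomizedDelaySec=15m   # fuzzy timing

    [Install]
    WantedBy=timers.target

Enable it with `systemctl enable --now backup.timer`; `systemctl list-timers` shows when each timer last fired and when it fires next.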

gwd · 8 months ago
> Systemd gets a lot of hate but it really solves a lot of problems.

From my perspective, it got a lot of hate in its first few years (decade?), not because the project itself was bad -- on the contrary, it succeeded in spite of having loads of other issues, because it was so superior. The problem was the maintainer's attitude of wantonly breaking things that used to work just fine, without offering any suitable fixes.

I have an old comment somewhere with a big list. If you never felt the pain of systemd, it's either because you came late to the party, or because your needs always happened to overlap with the core maintainer's needs.

Aldipower · 8 months ago
Full ack. Systemd broke a lot of things that just worked. Combined with the maintainer's attitude, this produced a lot of negative reaction.
p_l · 8 months ago
It didn't win on being superior [1], but because it was either systemd or you don't get to use GNOME 3.8. On more than one distro, that was the reason for switching to systemd.

I will fully admit though that upstart was worse (which is an achievement), but the solution space was not at all settled.

[1] The systemd project tackles a lot of important problems, but the quality of the implementation and the experience of using and working with it are not really good, especially the further you get from the simplest cookie-cutter services: systemd's handling of defaults is borked, the documentation for those cases maybe makes sense to its author, and whoever is the bright soul behind systemctl should kindly never make CLIs again (the worst example probably being `systemctl show this-service-does-not-exist`).

rollcat · 8 months ago
The only issue I'm having with systemd is that it's taking over the role of PID 1, with a binary produced from uncountable SLOC, then doing even more song and dance to exec itself in-place on upgrades. Here's a PID 1 program that does 100% of all of its duties correctly, and nothing else:

    #define _XOPEN_SOURCE 700
    #include <signal.h>
    #include <sys/wait.h>
    #include <unistd.h>
    int main() {
        sigset_t set;
        int status;
        if (getpid() != 1) return 1;
        /* block every signal; PID 1 must not die to a stray one */
        sigfillset(&set);
        sigprocmask(SIG_BLOCK, &set, 0);
        /* parent stays as init and reaps orphans forever */
        if (fork()) for (;;) wait(&status);
        /* child runs the real boot script in a fresh session */
        sigprocmask(SIG_UNBLOCK, &set, 0);
        setsid();
        setpgid(0, 0);
        return execve("/etc/rc", (char *[]){ "rc", 0 }, (char *[]){ 0 });
    }
(Credit: https://ewontfix.com/14/)

You can spawn systemd from there, and in case anything goes wrong with it, you won't get an instant kernel panic.

ptsneves · 8 months ago
If your init crashes, wouldn't this just start a loop where you can't do anything except watch it loop? How would this be better than just panicking?
zoobab · 8 months ago
"You can spawn systemd from there"

Systemd wants PID 1. Don't know if there are forks to disable that.

egorfine · 8 months ago
> forget cron

Sure. It worked for _50 years_ just fine but obviously it is very wrong and should be replaced with - of course - systemd.

MyOutfitIsVague · 8 months ago
Timers are so much better than cron it's not even funny. Having managed Unix machines for decades, with tens of thousands of vital cron entries across thousands of machines, the things that can and do go wrong are painful, especially when you include more esoteric systems. The fact that timers can be synced up, backed up, and updated as individual files is alone a massive advantage.

Some of these things that "worked for 50 years" have also actually sucked for 50 years. Look at C strings and C error handling. They've "worked", until you hold them slightly wrong and cause the entire world to start leaking sensitive data in a lesser-used code path.

marcosdumay · 8 months ago
I'd say the systemd interface is worse¹, but cron was never really good, and people tended to replace it very often.

1 - Really, what are the people upthread gloating about? That's the bare minimum all of the cron alternatives did. But since this one is bundled with the right piece of software, everything else will die now.

godelski · 8 months ago
Cron works fine. But that doesn't mean something better hasn't come along in _50 years_.

As the memes would say: the future is now old man

egorfine · 8 months ago
> mounts: forget fstab. Make it easy to

never have your filesystem mounted at the right time, because their automount rules are convoluted and sometimes just plain don't work despite being 1:1 according to the documentation.

bombela · 8 months ago
Man this one annoys me.

I have this server running a docker container with a specific application. And it writes to a specific filesystem (properly bind-mounted inside the container, of course).

Sometimes docker starts before the filesystem is mounted.

I know systemd can be taught about this, but I haven't bothered, because every time I have to do something in systemd, I have to read some nasty obscure doc. I need to know how and where the config should go.

I did manage to disable journalctl at least. Because grepping through simple rotated log files is a billion times faster than journalctl. See my comment and the whole thread https://github.com/systemd/systemd/issues/2460#issuecomment-...

I like the concept of systemd. Not the implementation and its leader.
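
(For reference, the usual fix for that ordering problem is a one-line drop-in; the path below stands in for whatever the actual mount point is:)

    # /etc/systemd/system/docker.service.d/wait-for-data.conf
    [Unit]
    RequiresMountsFor=/srv/data

systemd then orders docker.service after the corresponding .mount unit and pulls it in.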

yc-kraln · 8 months ago
Systemd is great if your use case is Linux on a modern Desktop or Server, or something which resembles that. If you want to do anything else that doesn't fit into the project view of what you should be doing, you will be met with scorn and resistance (ask the musl team...).

What isn't great, and where the hate comes from, is that it makes the life of a distribution or upstream super easy at the expense of adding a (slowly growing) complexity at the lowest levels of your system that - depending on your perspective - does not follow the "unix way": journalctl, timedatectl, dependencies on/replacing dbus, etc. It's also somehow been conflated with Poettering (he can be grating in his correctness), as well as the other projects Poettering works on (Avahi, PulseAudio).

If all you want to do is coordinate some processes and ensure they run in the right order with automatic activation, etc. it's certainly capable and, I'd argue, the right level of tool as compared to something like k8s or docker.

holuponemoment · 8 months ago
Nice list, I'd add run0 as the sudo replacement.

My only bugbear with it is that there's no equivalent to the old timeout default you could set (note that doas explicitly said they won't implement this either). The workaround is to run it in `sudo -i` fashion and not put a command afterwards, which is reasonable enough, even though it works hard against my muscle memory and copy-paste habits when switching over.

> Systemd gets a lot of hate

I'd argue it doesn't and is simply another victim of loud internet minority syndrome.

It's just a generic name at this point, basically all associated with init and service units and none of the other stuff.

https://man.archlinux.org/man/run0.1.en

DonHopkins · 8 months ago
I was dismayed at having to go from simple clean linear BSD 4.3 / SunOS 4.1.3 era /etc/rc /etc/rc.local init scripts to that tangled rat king abomination of symbolic links and rc.d sub-directories and run levels that is the SysV / Solaris Rube Goldberg device. So people who want to go back to the "good old days" of that AT&T claptrap sound insane to me. Even Slowlaris moved on to SMF.
godelski · 8 months ago
Oh yes, please add more! I'd love to see what others do because frankly, sometimes it feels like we're talking about forbidden magic or something lol

And honestly, I think the one thing systemd is really missing is... people talking about it. That's realistically the best way to get more documentation and spread all the cool tricks that everyone finds.

  > I'd argue it doesn't 
I definitely agree on loud minority, but they're visible enough that anytime systemd is brought up you can't avoid them. But then again, lots of people have much more passion about their opinions than passion about understanding the thing they opine about.

egorfine · 8 months ago
> run0 as the sudo replacement

Of course. We suffered with sudo for a couple of decades already! Obviously it's wrong and outdated and has to be replaced with whatever LP says is the new norm.

egorfine · 8 months ago
> homed/homectl: extends user management to make it

impossible to have a clear picture of what's up with the home dir: where it's now located, how to get access to it, or whether it will suddenly disappear. Obviously, plain /home worked for like five decades and therefore absolutely has to be replaced.

jraph · 8 months ago
> Obviously, plain /home worked for like five decades and therefore absolutely has to be replaced.

Five decades ago, people didn't have laptops that they want to put to sleep and that can get stolen. Actually, five decades ago, the rare people using a computer logged into remote, shared computers. Five decades ago, you didn't get hacked from the internet.

Today, people mostly each have their computer, and one session for themselves in it (when they have a computer at all)

I have not looked into homed yet, but needs are very different from before. "It worked five decades ago" just isn't very convincing.

It'd be better to understand what homed tries to address, and argue why it does it wrong or why the concerns are not right.

You might not like it but there usually are legitimate reasons why systemd changes things, they don't do it because they like breaking stuff.

egorfine · 8 months ago
> It's a bit annoying to learn

Learning curve is not the annoying part. It is kind of expected and fine.

systemd is annoying in parts that are so well described all over the internet that it makes zero sense to repeat them here. I am just venting, and that comes from experience.

egorfine · 8 months ago
> boot: you can not only control boot but

never boot into the network reliably, because under systemd you have no control over the sequence.

BTW, I think that's one of the main pros and one of the strongest features of systemd, but it is also what makes booting unreliable and unreproducible if you live outside of the very default Ubuntu instance and such.

godelski · 8 months ago
Are you talking about `NetworkManager.service`?

It has a 600s timeout. You can reduce that if you want it to fail faster. But that doesn't seem like a problem with systemd, that seems like a problem with your network connection.

  > If you live outside of the very default Ubuntu instance and such.
I use Arch btw

jraph · 8 months ago
> never boot into the network reliably

What does this mean? Your machine boots and sometimes doesn't have network?

If your boot is unreliable, isn't it because some service you try to boot has a dependency that's not declared in its unit file?

blueflow · 8 months ago
Then you have that machine that only runs an sshd and apache2 and you still get all that stuff shoehorned into your system.
godelski · 8 months ago
If you want that bare-bones a system, I'd suggest using a minimal distribution. But honestly, I'm happy that I can wrap up servers and services into chroot jails with nspawn. Even when I'm not doing much, it makes it much easier to import, export, and limit capabilities.

A simple example: I can have a duplicate of the "machine" running my server and spin it up (or have it already spun up) and take over if something goes wrong. Makes for a much more seamless experience.

dsp_person · 8 months ago
Ooh, I didn't know about vmspawn. Maybe this can replace some of where I use Incus.
godelski · 8 months ago
It's a bit tricky at first and there aren't a lot of good docs, but honestly I've been really liking it. I dropped Docker in its favor. Gives me a lot better control and flexibility.
kaylynb · 8 months ago
I've run my homelab with podman-systemd (quadlet) for a while, and every time I investigate a new k8s variant it just isn't worth the extra hassle. As part of my ancient Ansible playbook I just pre-pull images and drop unit files in the right place.

I even run my entire Voron 3D printer stack with podman-systemd so I can update and roll back all the components at once, although I'm looking at switching to mkosi and systemd-sysupdate and just updating/rolling back the entire disk image at once.

The main issues are:

1. A lot of people just distribute docker-compose files, so you have to convert them to systemd units.

2. A lot of docker images have a variety of complexities around user/privilege setup that you don't need with podman. Sometimes you need to do annoying userns idmapping, especially if a container refuses to run as root and/or switches to another user.

Overall, though, it's way less complicated than any k8s (or k8s variant) setup. It's also nice to have everything integrated into systemd and journald instead of being split in two places.
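
For anyone curious what the unit files look like, a minimal quadlet container file is roughly this (image, port, and volume are placeholder values):

    # /etc/containers/systemd/web.container
    [Unit]
    Description=Web frontend

    [Container]
    Image=docker.io/library/nginx:latest
    PublishPort=8080:80
    Volume=/srv/web:/usr/share/nginx/html:Z

    [Install]
    WantedBy=multi-user.target

After `systemctl daemon-reload`, quadlet generates a regular web.service from it, so start/stop/status/journald all work like any other unit.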

mati365 · 8 months ago
Nice! I’ve been using a similar approach for years with my own setup: https://github.com/Mati365/hetzner-podman-bunjs-deploy. It’s built around Podman and systemd, and honestly, nothing has broken in all that time. Super stable, super simple. Just drop your units and go. Rock solid.
kaylynb · 8 months ago
Neat. I like to see other takes on this. Any reason to use rootless vs `userns=auto`? I haven't really seen any discussion of it other than this issue: https://github.com/containers/podman/discussions/13728
Touche · 8 months ago
You can use podlet to convert compose files to quadlet files. https://github.com/containers/podlet
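
Usage is a one-liner that prints the generated quadlet unit(s) to stdout; something like (check `podlet --help` for the exact subcommands):

    podlet compose docker-compose.yml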
kaylynb · 8 months ago
It works pretty well. I've also found that some AI models are pretty decent at it too. Obviously need to fix up some of the output but the tooling for conversion is much better than when I started.
OJFord · 8 months ago
Just a single (or a bunch of independent) 'node'(s) though, right?

To me, podman/systemd/quadlet could just as well be an implementation detail of how a k8s node runs a container (the.. CRI I suppose, in the lingo?) - it's not replacing the orchestration/scheduling abstraction over nodes that k8s provides. The 'here are my machines capable of running podman-systemd files, here is the spec I want to run, go'.

kaylynb · 8 months ago
My servers are pets not cattle. They are heterogeneous and collected over the years. If I used k8s I'd end up having to mostly pin services to a specific machine anyway. I don't even have a rack: it's just a variety of box shapes stacked on a wire shelf.

At some point I do want to create a purpose built rack for my network equipment and maybe setup some homogenous servers for running k8s or whatever, but it's not a high priority.

I like the idea of podman-systemd being an impl detail of some higher level orchestration. Recent versions of podman support template units now, so in theory you wouldn't even need to create duplicate units to run more than one service.

mufasachan · 8 months ago
Same experience. My workflow is to run the container from a podman run command, check it runs correctly, use podlet to create a base container file, edit the container file (notably with volumes and networks in other quadlet files), and done (theoretically).

I believe the podman-compose project is still actively maintained and could be a nice alternative to docker-compose. But podman's interface with systemd is so enjoyable.

goku12 · 8 months ago
I don't know if podman-compose is actively developed, but it is unfortunately not a good alternative for docker-compose. It doesn't handle the full feature set of the compose spec and it tends to catch you by surprise sometimes. But the good news is, the new docker-compose (V2) can talk to podman just fine.
masneyb · 8 months ago
The next step to simplify this even further is to use Quadlet within systemd to manage the containers. More details are at https://www.redhat.com/en/blog/quadlet-podman
rsolva · 8 months ago
This is the way! Quadlets are such a nice way to run containers, really a set-and-forget experience. No need to install extra packages, at least on Fedora or Rocky Linux. I should do a write-up of this some time...
aorth · 8 months ago
Yep! My experience on Ubuntu 24.04 LTS was that I needed to create a system user to reserve the subuids / subgids for Podman (defaults to looking for a `containers` user):

  useradd --comment "Helper user to reserve subuids and subgids for Podman" \
    --no-create-home \
    --shell /usr/sbin/nologin \
    containers
I also found this blog post about the different `UserNS` options https://www.redhat.com/en/blog/rootless-podman-user-namespac... very helpful. In the end it seems that using `UserNS=auto` for rootful containers (with appropriate system security settings like private devices, etc) is easier and more secure than trying to get rootless containers running in a systemd user slice (Dan Walsh said it on a GitHub issue but I can't find it now).

al_borland · 8 months ago
This was touched on at the end of the article, but the author hadn't yet explored it. Thanks for the link.

> Of course, as my luck would have it, Podman integration with systemd appears to be deprecated already and they're now talking about defining containers in "Quadlet" files, whatever those are. I guess that will be something to learn some other time.

overtone1000 · 8 months ago
I came to the comments to make sure someone mentioned quadlets. Just last week, I migrated my home server from docker compose to rootless podman quadlets. The transition was challenging, but I am very happy with the result.
sureglymop · 8 months ago
Seems very cool but can it do all one can do with compose? In other words, declare networks, multiple services, volumes, config(maps) and labels for e.g. traefik all in one single file?

To me that's why compose is neat. It's simple. Works well with rootless podman also.

lstolcman · 8 months ago
I encourage you to look into this blog post as well; it helped me greatly with seamlessly switching into quadlets in my homelab: https://news.ycombinator.com/item?id=43456934
byrnedo · 8 months ago
I created skate (https://github.com/skateco/skate) to be basically this but multihost and support k8s manifests. Under the hood it’s podman and systemd
psviderski · 8 months ago
This is a great approach and it resonates with me a lot. It's really frustrating that there is no simple way to run multi-host Docker/Podman (Docker Swarm has been abandonware since 2019, unfortunately). However, in my opinion K8s has the worst API and UX possible. I find the Docker Compose spec much more user friendly. So I'm experimenting with a multi-host docker-compose at the moment: https://github.com/psviderski/uncloud
byrnedo · 8 months ago
Wouldn’t argue with you about the k8s UX. Since it has all the ground concepts (service, cronjob, etc.) it required less effort than rolling yet another syntax.
byrnedo · 8 months ago
Uncloud looks awesome and seems to have a great feature set!! Nice work!
nemofoo · 8 months ago
Thank you for building this. I appreciate you.
woile · 8 months ago
This looks awesome!
VWWHFSfQ · 8 months ago
We went back to just packaging debs and running them directly on ec2 instances with systemd. no more containers. Put the instances in an autoscaling group with an ALB. A simple ansible-pull installs the debs on-boot.

Really raw-dogging it here, but I got tired of endless JSON-inside-YAML-inside-HCL. Ansible YAML is about all I want to deal with at this point.

secabeen · 8 months ago
I also really like that with this approach, if there is a bug in a common library that I use, all I have to do is `apt full-upgrade` and restart my running processes, and I am protected. No rebuilding anything, or figuring out how to update some library buried deep in a container that I may (or may not) have created.
SvenL · 8 months ago
Yes, I have also gone this route for a very simple application. Systemd was actually delightful: using a system-assigned user account to run the service with the least amount of privileges is pretty cool. Also, cgroup support really makes it nice to run many different services on one VPS.
r3trohack3r · 8 months ago
The number of human lifetimes wasted on the problem domain of "managing YAML at scale"...
teleforce · 8 months ago
The article is more than a year old; systemd now even has a specialized, officially supported OS distro for an immutable workflow, namely ParticleOS [1],[2].

[1] ParticleOS:

https://github.com/systemd/particleos

[2] Systemd ParticleOS:

https://news.ycombinator.com/item?id=43649088

egorfine · 8 months ago
Nice. The next logical step is to replace the Linux kernel with "systemd-kernel" and that will make it complete.
noisy_boy · 8 months ago
Not deep enough; we need systemd to replace the BIOS and preferably the CPU microcodes too.
mdeeks · 8 months ago
From what I read, I think you can replace this all with a docker compose command and something like Caddy to automatically get certs.

It's basically just this command once you have compose.yaml: `docker compose up -d --pull always`

And then the CI setup is this:

  scp compose.yaml user@remote-host:~/
  ssh user@remote-host 'docker compose up -d --pull always'
The benefit here is that it is simple and also works on your development machine.

Of course if the side goal is to also do something fun and cool and learn, then Quadlet/k8s/systemd are great options too!

rollcat · 8 months ago
Do this (once):

    docker context create --docker 'host=ssh://user@remote-host' remote-host
Then try this instead:

    docker -c remote-host compose -f compose.yaml up -d --pull always
           ^^^^^^^^^^^^^^         ^^^^^^^^^^^^^^^
No need to copy files around.

Also, another pro tip: set up your ~/.ssh/config so that you don't need the user@ part in any ssh invocations. It's quite practical when working in a team, you can just copy-paste commands between docs and each other.

    Host *.example.com
        User myusername

jdsleppy · 8 months ago
Do what the sibling comment says, or set the DOCKER_HOST environment variable. Watch out: your local environment will be used in compose file interpolation!