If you're not wedded to docker-compose, with podman you can instead use the podman kube support, which provides roughly docker-compose-equivalent features using a subset of the Kubernetes pod deployment syntax.
Additionally, podman has nice systemd integration for such kube services: you just write a short systemd config snippet and then you can manage the kube service like any other systemd service.
Altogether a very nice combination for deploying containerized services if you don't want to go the whole hog to something like Kubernetes.
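For the idea above, a compose-style multi-container service expressed in the Kubernetes pod syntax that podman accepts might look like this (the pod name and images are purely illustrative):

```yaml
# app.yaml, run with: podman kube play app.yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: web
      image: docker.io/library/nginx:alpine
      ports:
        - containerPort: 80
          hostPort: 8080
    - name: cache
      image: docker.io/library/redis:alpine
```

Tear it down again with `podman kube down app.yaml`.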
Last I tried using the .kube files I ran into issues with specifying container networks (https://github.com/containers/podman/issues/12965).
This is sort of "fixed" by using a Quadlet ".kube", but IMO that's a pretty weak solution and removes the "here's your compose file, run it" aspect.
Recently (now that Debian 13 is out with Podman 5) I have started transitioning to Podman's Quadlet files, which has been quite smooth so far. As you say, it's great to run things without all the overhead of Kubernetes.
I agree about quadlets, amazing.
Docker has one of the most severe cases of not-invented-here. Every solution requires a new DSL, a new protocol, a new encryption scheme, a new daemon, or some combination thereof. People are sleeping on using buildah directly, which OP alluded to with Bakah (but fell short of just using it directly).
Ever wish you could run multiple commands in a single layer? Buildah lets you do that. Ever wish you could loop or branch in a Dockerfile? Buildah lets you do that. Why? Because they didn't invent something new: the equivalent of a Dockerfile in buildah is just a script in whatever scripting language you want (probably sh, though).
This will probably give you the general idea: https://www.mankier.com/1/buildah-from
I came across this when struggling and repeatedly failing to get multi-arch containers built in Circle CI a few years ago. You don't have access to an arm64 docker context on their x86 machines, so you are forced to orchestrate that manually (unless your arm64 build is fast enough under qemu). Things begin to rapidly fall apart once you are off of the blessed Docker happy path because of their NIH obsession. That's when I discovered buildah and it made the whole thing a cinch.
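The "Dockerfile as a plain script" idea can be sketched like this (assumes buildah is installed; the base image and packages are illustrative). Ordinary shell loops and conditionals replace Dockerfile directives, and `buildah commit` turns the whole working container into a single image layer on top of the base:

```shell
#!/bin/sh
set -eu

# Start a working container from a base image (like FROM in a Dockerfile).
ctr=$(buildah from docker.io/library/alpine:3.20)

# Run several commands; plain shell branching and looping just works.
for pkg in curl ca-certificates; do
    buildah run "$ctr" -- apk add --no-cache "$pkg"
done

# Configure image metadata (like ENTRYPOINT).
buildah config --entrypoint '["curl"]' "$ctr"

# Commit all of the above as one image, then clean up the working container.
buildah commit "$ctr" localhost/curl-tool:latest
buildah rm "$ctr"
```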
They both utilize all the Linux cgroup magic to containerize, so performance is roughly the same.
Incus is an LXD fork, and focuses on "system" containers. You basically get a full distro, complete with systemd, sshd, etc. etc. so it is easy to replace a VM with one of these.
podman and docker are focused on OCI containers which typically run a single application (think webserver, database, etc).
I actually use them together. My host machine runs both docker and incus. Docker runs my home server utilities (syncthing, vaultwarden, etc) and Incus runs a system container with my development environment in it. I have nested cgroups enabled so that the Incus container actually runs another copy of docker _within itself_ for all my development needs (redis, postgres, etc).
What's nice about this is that the development environment can easily be backed up, or completely nuked without affecting my host. I use VS Code remote SSH to develop in it.
The host typically uses < 10GB RAM with all this stuff running, about half what it did when I was using KVM instead of Incus.
If you are using podman "rootless" mode prior to 5.3 then typically you are going to be using the rootless networking, which is based around slirp4netns.
That is going to be slower and more limited compared to rootful solutions like incus. The easy workaround is to use 'host' networking.
If you are using rootful podman then normal Linux network stack gets used.
Otherwise they are all going to execute at native speed since they all use the same Linux facilities for creating containers.
Note that from Podman 5.3 (Nov 2024) and newer they switched to "pasta" networking for rootless containers, which is a lot better performance-wise.
edit:
There are various other tricks you can use for improving podman "rootless" networking, like systemd socket activation: if you want to host services this way, you can set up a reverse proxy and similar things that run at native speed.
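A sketch of the socket-activation trick (the unit name and port are illustrative; it assumes a matching Quadlet-managed container whose service understands an inherited listening socket via LISTEN_FDS):

```
# ~/.config/systemd/user/proxy.socket
[Socket]
ListenStream=0.0.0.0:8080

[Install]
WantedBy=sockets.target
```

With a matching `proxy.service` (e.g. from a `proxy.container` Quadlet), systemd opens the port on the host and hands the file descriptor to the container, so accepted connections never traverse the slirp4netns/pasta user-mode network path.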
How would you configure a cluster? I'm trying to explore lightweight alternatives to Kubernetes, such as Docker Swarm, but I think the options are limited if you need clusters with at least an equivalent of pods and services.
I've found you can get pretty far with a couple of fixed nodes and scaling vertically before bringing in k8s these days.
Right now I'm running:
- podman, with quadlet to orchestrate both single containers and `pods` using their k8s-compatible yaml definition
- systemd for other services - you can control and harden services via systemd pretty well (see https://news.ycombinator.com/item?id=44937550 from the other day). I prefer using systemd directly for Java services over containers, seems to work better imo
- Pyinfra (https://pyinfra.com/) to manage and provision the VMs and services
- Fedora CoreOS as an immutable base OS with regular automatic updates

All seems to be working really well.
Yes. Though unless you have a very dynamic environment, maybe statically assigning containers to hosts isn't an insurmountable burden?
So, unless you have a service that requires a fixed number of running instances that is not the same as the number of servers, I would argue that maybe you don't need Kubernetes.
For example, I built a Django web application and a set of Celery workers, run the same pod on 8 servers, and use an Ansible playbook that creates the podman pod and runs the containers in the pod.
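A minimal sketch of that kind of playbook, using the `containers.podman` Ansible collection (the pod name, images, and inventory group are illustrative, not the commenter's actual setup):

```yaml
# deploy.yml, run with: ansible-playbook -i inventory deploy.yml
- hosts: app_servers
  tasks:
    - name: Create the application pod
      containers.podman.podman_pod:
        name: app
        state: started
        ports:
          - "8000:8000"

    - name: Run the web container inside the pod
      containers.podman.podman_container:
        name: web
        pod: app
        image: registry.example.com/myapp:latest
        state: started

    - name: Run a Celery worker inside the pod
      containers.podman.podman_container:
        name: worker
        pod: app
        image: registry.example.com/myapp:latest
        command: celery -A myapp worker
        state: started
```

Because the same spec is applied to every host, scaling out is just adding a server to the inventory.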
On the off chance your search didn't expand to k3s, I can semi-recommend it.
My setup is a bit clunky (a Hetzner cloud instance as controller and a local server as a node, connected through Tailscale): I get an occasional strange error where k3s pods fail to resolve another pod's domain until I re-create the DNS resolver system pod, and I have so far failed to get Velero backups working with k3s's local storage providers, but otherwise it is pretty decent.
microk8s seems exceedingly simple to set up and use. k3s is easy as well.
I've been reading and watching videos about how you can use Ansible with Podman as a simpler alternative to Kubernetes. Basically Ansible just SSHs into each server and uses podman to start up the various pods / containers etc. that you specify. I have not tried this yet though so take this idea with a grain of salt.
That is what I do as well. I'd rather not have to remember more than one way of doing things so 'podman play kube' allows me to use Kubernetes knowledge for local / smaller scale things as well.
I tried Podman on my messing around VPS but quickly reverted to rootless Docker.
The straw that broke the camel's back was a bug in `podman compose` that, funnily enough, was fixed two hours ago[1]; if `service1` has a `depends_on` on `service2`, bringing down `service1` will unconditionally bring down `service2`, even if other services also depend on it. So if two separate services depend on a database, killing one of them will kill the database too.
Another incompatibility with Docker I experienced was raised in 2020 and fixed a few months ago[2]; you couldn't pass URLs to `build:` to automatically pull and build images. The patch for this turned out to be a few lines long.
I'm sure Podman will be great once all of these bugs are ironed out, but for me, it's not quite there yet.
[1]: https://github.com/containers/podman-compose/pull/1283
[2]: https://github.com/containers/podman-compose/issues/127
Podman compose is an attempt to court Docker users by porting over a bad idea. Instead of that, learn how to create "quadlets" and you'll never want to touch docker again. See: https://www.redhat.com/en/blog/quadlet-podman
I recommend starting with .container files instead of .kube, unless you're already familiar with kubernetes.
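For reference, a minimal `.container` file is just a small INI-style unit (the image and port here are illustrative). Dropped into `~/.config/containers/systemd/`, it appears as a regular systemd service after `systemctl --user daemon-reload`:

```
# ~/.config/containers/systemd/whoami.container
[Unit]
Description=Example web container

[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

Then `systemctl --user start whoami.service` runs it. Sibling `.network` and `.pod` Quadlet files exist for grouping containers and defining shared networks.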
So for my set of DVR services, quadlets would have me replace a single compose.yml with 6 .container files, and manually create the network, and have to stop and start all of the services individually.
Can you use those quadlets inside a development project? I use docker-compose (with podman) just so I can work on a project that is completely self-contained. No copying files to ~/.config/systemd; just run docker-compose to start and stop.
I use rootless podman in socket mode but use the docker CLI (just the CLI, no daemon or service or messing with iptables) as the frontend. Can recommend!
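That setup is roughly the following (assuming a systemd user session; the socket path shown is the standard rootless one):

```shell
# Enable the rootless Podman API socket, which speaks the Docker API.
systemctl --user enable --now podman.socket

# Point the docker CLI (and anything else speaking the Docker API) at it.
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock

# The plain docker CLI now talks to podman; no dockerd involved.
docker ps
```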
What does the docker CLI give you that the podman CLI doesn't? (Surely you aren't suggesting that `docker compose` works with a podman rootless daemon?)
Unfortunately, it's quite a big mess (as the article indicates), which leads to a steep learning curve for someone who "just wants to build some images".
And that's just half of it. Want to build an image on two native architectures (ARM64 and AMD64) and then make a multi-arch image out of them? It might blow someone's mind how complicated that is with 2025 Docker technologies: https://docs.docker.com/build/ci/github-actions/multi-platfo...
I was a huge fan of Podman, but I eventually gave up and use Docker Compose for local development. It's not worth fighting the system.
However, for single server deployments, where I don't need Kubernetes, I now exclusively use Quadlets to run apps and I couldn't be happier. It's a much nicer experience than using a typical Docker/Podman setup. It feels integrated into the system.
> I was a huge fan of Podman, but I eventually gave up and use Docker Compose
You can mix them. I was using docker-compose with podman instead of docker before switching to quadlets. I still prefer the experience of compose files, but quadlets do integrate much better into systemd.
I replaced my Docker usage entirely with OrbStack[1] a few months ago, and have had zero issues with it so far. Great product that I happily pay a license for.
My usage is fairly basic though and I'm sure mileage varies, but for my basic web dev setup it's been perfect.
[1]: https://orbstack.dev/
OrbStack is just a VM provider for docker on mac; colima offers the same features without a UI and is a great open replacement. But as neither supports podman, neither is really relevant to the podman discussion.
The UI of OrbStack is probably one of the biggest features, so a replacement without the UI doesn't make a ton of sense for most people that like OrbStack.
I can't think of any stellar reason why colima couldn't also support it, since they even go out of their way to support Incus as a runtime, but I don't currently have the emotional energy to prosecute such a PR
It's more general than that, closer to WSL. I usually use Podman Desktop for container stuff, but I like OrbStack for managing Linux VMs. It has some really slick integrations and it performs very, very well.
"just" is a big statement here. Performance between colima and OrbStack is worlds apart.
Apple just released their own runtime, so that is also worth inspecting.
FWIW lima (upon which Colima was built) ships with "boot me up a podman" templates: <https://github.com/lima-vm/lima/blob/v1.2.1/templates/podman...> and <https://github.com/lima-vm/lima/blob/v1.2.1/templates/podman...>
I've replaced my OrbStack usage entirely with Podman Desktop and have zero issues with it, unlike with OrbStack.
In particular the 1TB VM disk image OrbStack uses wreaks havoc with deduplicating backups. Their disk cache also caused me hours of debugging why my assets weren't up-to-date.
Admittedly the OrbStack GUI is super snappy tho.
This is an interesting find OP and could help people transition from Docker to Podman (especially if they're used to deploying with Docker-Compose).
I think the better long-term approach though is to use systemd user units for deployment, or the more modern approach of using Podman Quadlets. There's a bit of a learning curve, but these approaches are more native to the Podman platform, and learning how systemd services work is a great skill to have.
It's not clear from the article, but is this for local development or production deployments? It's worth noting that Swarm solves a lot of the limitations that Compose and Podman have for running containers in a production environment. Swarm runs well on a single VM, and people with Docker experience can learn the ropes in a day.
Claude recently hallucinated this for me:
For a brief moment in time I was happy, but then I asked: "Can you really use ComposeService in the systemd unit file? I can't find any reference to it."
> You're absolutely right to question that - I made an error. There is no ComposeService directive in systemd or Quadlet.
It would be a nice best of both worlds...
Just FYI, `podman generate systemd --files --name mypod` will create all the systemd service files for you.
https://docs.podman.io/en/latest/markdown/podman-generate-sy...
Quadlets now make it much easier to create the units by hand, and `podman generate systemd` is deprecated.