Here’s a hint: for 99.999% of potential users, including 99.9% of motivated, technically savvy users, if I need to know the directory structure of your software, then you already failed.
I appreciate that you went through all the pain and learning and effort to figure out how to set all this up AND went to the trouble of writing up a how-to guide.
I hope someone comes later and bundles it up into a script I can launch that will prompt me for the various config options and then set it all up for me.
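Something in this shape would already go a long way. Just a sketch, and the option names are placeholders I made up, not the ones from the guide:

```python
#!/usr/bin/env python3
# Sketch of the kind of setup script I mean: ask for the handful of
# config options the how-to guide walks through, then write them out.
# "myapp" and the option names are invented placeholders.
import json
from pathlib import Path

def ask(prompt: str, default: str) -> str:
    """Prompt for a value, falling back to the default on empty input."""
    answer = input(f"{prompt} [{default}]: ").strip()
    return answer or default

def main() -> None:
    settings = {
        "data_dir": ask("Data directory", str(Path.home() / ".local/share/myapp")),
        "listen_port": ask("Listen port", "8080"),
        "enable_tls": ask("Enable TLS (yes/no)", "yes"),
    }
    config_path = Path.home() / ".config" / "myapp" / "config.json"
    config_path.parent.mkdir(parents=True, exist_ok=True)
    config_path.write_text(json.dumps(settings, indent=2) + "\n")
    print(f"Wrote {config_path}; now point the install steps at it.")

if __name__ == "__main__":
    main()
```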
There’s another thing that no one who advocates for these systems wants to mention: the cost of maintenance. I’m OK with systemd because 98% of it is outsourced to the maintainers. But I’d be more comfortable if k8s were a more monolithic system a la BSD. At least Linux has distros.
It makes absolutely no sense to base this decision on the number of users. We have some applications that don't even have 10 users but still use k8s.
Try to understand the point that was made in the original comment: Kubernetes is a way to actually make infrastructure simpler to understand for a team that maintains lots of different applications, because it can scale from a deployment with just one pod up to hundreds of nodes and a silly microservices architecture.
The point is not that every application might need this scalability. The point is that for a team needing to maintain lots of different applications (some internal, some for customers, some in private datacenters, some in the cloud), Kubernetes can be the single common denominator.
Hell, I'm a hardcore NixOS fan, but for most services I still prefer to run them in k8s simply because it is more portable. Today I might be okay having some service sitting on some box, running via systemd. But tomorrow I might want to run this service highly available on some cluster. With k8s that is simple, so why not do it from the start by just treating k8s as a slightly more powerful docker-compose?
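To make the "slightly more powerful docker-compose" point concrete, here is roughly what running one container on k8s looks like with the official Python client. A sketch only: the service name, image, and port are placeholders I made up, and it assumes a kubeconfig pointing at a cluster:

```python
# Deploy a single container, roughly what a one-service docker-compose
# file would do. Assumes the official `kubernetes` Python client.
# "my-service" and its image are placeholder names.
from kubernetes import client, config

config.load_kube_config()  # same credentials kubectl uses

labels = {"app": "my-service"}
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="my-service"),
    spec=client.V1DeploymentSpec(
        replicas=1,  # bump this when "tomorrow" arrives and you want HA
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="my-service",
                    image="registry.example.com/my-service:latest",
                    ports=[client.V1ContainerPort(container_port=8080)],
                )
            ]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

The point being: the single-box, single-replica case costs barely more ceremony than a compose file, and scaling it out later is a one-field change instead of a migration.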