raggi · 2 years ago
This appears to suffer from the same mistake as many projects in this space: it focuses on making it really easy to run lots of software, but has a very poor story for keeping the data and time you put in safe across upgrades and issues. The only documented page on backups requires taking the entire system down, and there appears to be no guidance or provision for safely handling software upgrades. This sets people up for the worst kind of self-hosting failure: they get all excited setting up a bunch of potentially really useful applications, invest time and data into them, then get burned really badly when it all comes crashing down in an upgrade or hardware failure they weren't prepared for. This is how people move back to SaaS and never look back. It's utterly critical to get right, and completely missing here.
everforward · 2 years ago
I'm working on something similar, and the crux of that issue is configurability vs automation. I.e. it's very easy to make backups for a system that users can't configure at all. You just ship the image with some rsync commands or something and done.

Once you start letting people edit the config files, you get into a spot where now you basically need to be able to parse the config files to read file paths. That often means making version-specific parsers for configuration options that are introduced or removed in some versions, or have differing defaults (i.e. in 1.2 if they don't set "storage_path" it's at /x/, but in 1.3 if they don't set it, it defaults to /y/).

That gets to be a lot of work.
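That version-dependent default lookup can be sketched as a tiny helper (the `storage_path` key and the `/x` and `/y` defaults are the hypothetical values from the example above):

```shell
get_storage_path() {
  # Resolve where the app keeps its data: an explicit setting wins,
  # otherwise fall back to the version-specific default.
  local version="$1" conf="$2" path
  path=$(sed -n 's/^storage_path=//p' "$conf")
  if [ -n "$path" ]; then
    echo "$path"
    return
  fi
  case "$version" in
    1.2*) echo "/x" ;;   # default before 1.3
    *)    echo "/y" ;;   # default from 1.3 on
  esac
}
```

Multiply that by every app and every released version of each app, and the maintenance cost becomes obvious.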

Then it gets even worse when the users can edit the Docker config for the images, because all bets are off at that point. The user could do all kinds of weird, fucky shit that infinitely loops naive backup scripts because of host volume mounts or have one of the mounts actually be part of NFS mount on the host with like 200ms of latency so backups just hang for forever and etc.

It's just begging for an infinite series of bugs from people who did something weird with their config and ended up backing up their whole drive instead of the Docker folder, or removing their whole root drive because they mount their host FS root in the backups folder and it got deleted by the "old backup cleanup" script, or who knows what.

At some point, it's easier to just make your own setup where you define the limitations of that setup than it is to use someone else's setup but have to find the limitations on your own.

megous · 2 years ago
> Once you start letting people edit the config files,

That way lies madness...

What can be done instead is to provide your own unified (some schema validated) configuration options, and generate the app specific config files from your source of truth. Then you can know what the user can configure and how to back everything up (and how to do a lot of other things in automated fashion). And you also have a safe upgrade path if format of any underlaying config changes.
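A minimal sketch of that "generate, don't parse" approach (file names and keys are illustrative): the app-specific config is produced from your own source of truth, so the backup tooling never has to parse anything it didn't write.

```shell
generate_app_config() {
  # Read the one value we control, then emit the app-specific config.
  local src="$1" outdir="$2" data_dir
  data_dir=$(sed -n 's/^data_dir=//p' "$src")
  mkdir -p "$outdir"
  printf 'storage_path=%s/appdata\n' "$data_dir" > "$outdir/app.conf"
  # The generator is also the single place that knows what to back up:
  echo "$data_dir"
}
```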

apitman · 2 years ago
The one thing you have working in your favor is that the more likely someone is to want to customize, the more likely they are to understand they have to be responsible for backups and updates.

Is your project public? I'm working on one as well and would love to see what you have cooking.

yonixw · 2 years ago
That's exactly how I feel. Not to mention that I'm always looking for how to monitor the system, and there is no uniform standard, if monitoring is possible at all. It's as if there's a hidden message that, apart from monitoring how much disk space or CPU is left, everything else is irrelevant. But for VMs like these that share many processes, it's exactly the opposite: when there is a problem, I must know exactly who is responsible!
ocdtrekkie · 2 years ago
This is one of the reasons I groan every time there's a new selfhosting thing that uses Docker. It's easy to start because all the wiring is there and everyone already makes their apps available with a Dockerfile, but it's not a remotely good solution for selfhosting and people should stop trying to use it that way.
doubled112 · 2 years ago
I think it's the easy start that's the problem.

Docker and Docker Compose is pretty good for self-hosting, except you have to understand enough to realize that it's not as simple as it sounds. You can easily lose data you didn't know you had if you're not paying attention.

"I'll just delete the container and restart" sounds like a great solution, but you just blew away your photos or whatever because you didn't know it created some volume for you.

I have a btrfs volume as the root of all the apps. Each application is in an individual subvolume so they can be snapshotted at upgrade time. The docker-compose.yml file points each app's volume to a relative directory inside that subvolume.

This way I can move them around individually, and all their data is right there. I can back them up pretty easily too.
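The layout described above looks roughly like this (paths are illustrative; the btrfs commands need root and a btrfs filesystem, so they are shown only as comments for context):

```shell
# Per-app subvolume, snapshotted read-only before an upgrade:
#   btrfs subvolume create /srv/apps/nextcloud
#   btrfs subvolume snapshot -r /srv/apps/nextcloud /srv/snaps/nextcloud-pre-upgrade
# The compose file bind-mounts a *relative* path, so all of the app's
# state stays inside its own subvolume:
write_compose() {
  cat > "$1/docker-compose.yml" <<'EOF'
services:
  nextcloud:
    image: nextcloud:stable
    volumes:
      - ./data:/var/www/html    # relative path, lives in the subvolume
EOF
}
```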

Works for me and my use case, but you could never expect it to be turn key, and you couldn't hand it off to somebody non-technical.

wmf · 2 years ago
A more detailed explanation from ten years ago: https://sandstorm.io/news/2014-08-19-why-not-run-docker-apps
BrandoElFollito · 2 years ago
I only look at selfhosting things that provide a docker image. This is how it should be: the internals are hidden from you and you just care about the data and the configuration.

If you want to use a monolith you will have conflicts sooner or later. If you want to use multiple VMs you need to orchestrate them.

If you know how to do the above, you are good to go to learn docker, which is ultimately much simpler.

apitman · 2 years ago
Unfortunately Sandstorm's approach, i.e. rewriting all the apps to fit a specific ecosystem, hasn't taken off either. I think docker with a little extra tooling for managing updates and backups is likely to be the solution eventually.
stavros · 2 years ago
I wrote and use Harbormaster (https://harbormaster.readthedocs.io/) for this use case. It doesn't have a UI, but it basically only needs a Compose file to run an app, and all data (for all apps) lives in a data/ directory that you can back up at your leisure.

Everything is auto updated from a single YAML specification (which I usually pin), so the process to update something is "change the version in the config, commit, and the server will pick the change up and deploy it".

It's just a thin layer over "git pull; docker compose restart", but I love it.
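From memory of the Harbormaster docs (treat the exact keys as an assumption and check the link above), the single YAML spec is roughly one entry per app pointing at a repo containing a Compose file:

```yaml
apps:
  navidrome:
    url: https://github.com/example/navidrome-compose.git
    branch: main   # pin this; bumping it here is the whole upgrade process
```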

BirAdam · 2 years ago
Personal opinion, a home lab is the perfect place to learn how to actually configure things and properly set them up: no docker, no ansible, no salt… take off the training wheels and learn it. Then, learn to write your own playbooks, your own compose files, etc.

Additionally, if people think that learning how to configure and deploy stuff is too tedious and/or too difficult, write software that has better UI, not more layers of configuration and administration.

Final thought, git is better than ansible/salt/chef/puppet, and containers are silly.

jrm4 · 2 years ago
"Containers are silly" might be the literal worst take I've seen in this space, especially if one is at all interested in "onboarding more moderately techy folk" to the idea of homelabs, and self-hosting, which I think is incredibly important.

Once you know Docker, you can abstract away SO MUCH and get things up and running incredibly quickly.

Let's say you have someone who now hates Spotify. Perfect. Before Docker, it was a huge pain to set up an alternative and hope you got it right.

After Docker? Just TRY ALL OF THEM. Don't like Navidrome/mstream/whatever? Okay, docker-compose down and try the next one.

watermelon0 · 2 years ago
TBH, containers are the wrong way of doing things, but they are the least bad solution of shipping software that we currently have.
goodpoint · 2 years ago
> get things up and running incredibly quickly

Optimizing for the wrong metric.

apitman · 2 years ago
I think this is good advice for technical people, but I also think we need to drastically lower the barrier of entry for people who would benefit from owning their compute and data but don't have the skills or interest necessary.
filmgirlcw · 2 years ago
I agree we need to do that, but I would argue at this point, the best option for the people without the skills/interest is probably buying a Synology. It would be awesome if there was a free OSS solution that people could just follow instructions and deploy on an inexpensive miniPC and not have to worry about anything and have auto-updates and zero problems, but I don't know if that is realistic. If we do ever see it, I feel confident it won't be free.

I see tools like this as a really good middle ground between piecing all the different parts together and managing runbooks and whatnot and buying an off-the-shelf appliance like a Synology or other consumer/prosumer/SMB NAS.

skydhash · 2 years ago
And that's basically defining protocols and letting people build interfaces. But that goes against many companies' objectives. Everyone is trying to move you to their database and cloud computing. And most people prefer convenience over ownership or privacy. Installing Gitea on a VPS is basically a walk in the park. Not as easy as clicking a button in cPanel, but I think some things should be learned, and not everything has to be "magic". You don't need to be a mechanic to drive a car, but some knowledge is required when someone is overselling engine oil.
planb · 2 years ago
Don’t do this to learn if you really want to use the servers for important data and expose them to the internet. You can shoot your foot with docker containers, but you can shoot your face by blindly installing dozens of packages and their dependencies on a single box.
eternauta3k · 2 years ago
You can keep them separate with Docker. I'm not sure what the workflow looks like, however: you make a simple image with just (say) alpine + apache, you run a shell there, set everything up, and when it works you basically try to replicate all the things you did manually in the Dockerfile? So in the end, you have to do everything twice?
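The second pass described here amounts to transcribing the shell history into build steps, roughly like this (the Alpine package name is real; the copied config file is whatever the manual pass produced):

```dockerfile
FROM alpine:3.19
RUN apk add --no-cache apache2
# ...every command you ran by hand becomes a RUN line...
COPY httpd.conf /etc/apache2/httpd.conf
EXPOSE 80
CMD ["httpd", "-D", "FOREGROUND"]
```

Doing it twice is real, but the second copy is the reproducible one; many people skip the interactive pass entirely and iterate on the Dockerfile directly.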
vundercind · 2 years ago
Did that for years. Spend so much more time fiddling with computers than getting any actual benefit from them.

Now all my shit’s in docker, launched by one shell script per service. I let docker take care of restarts. There’s no mystery about where exactly all of the config and important files for any of the services lives.

I barely even have to care which distro I’m running it on. It’s great.

What’s funny is that the benefits of docker, for me, have little to do with its notable technical features.

1. It provides a common cross-distro interface for managing services.

2. It provides a cross-disro rolling-release package manager—that actually works pretty well and has well-maintained, and often official, packages.

3. It strongly encourages people building packages for that package manager to make their config more straightforward and to make locations of config files and data explicit and documented thoroughly in a single place. That is, it forces a lot of shitty software to de-shittify itself at the point I’ll interact with it (via Docker).
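A hypothetical "one shell script per service" launcher of the kind described (Navidrome is just an example image; `DRYRUN=echo` previews the command so the sketch runs even without Docker installed):

```shell
run_service() {
  local name="$1"; shift
  ${DRYRUN:-} docker rm -f "$name" >/dev/null 2>&1 || true
  ${DRYRUN:-} docker run -d --name "$name" --restart unless-stopped "$@"
}

# Preview what would run; drop DRYRUN= on the real box.
DRYRUN=echo run_service navidrome \
  -p 4533:4533 \
  -v /srv/navidrome/data:/data \
  -v /srv/music:/music:ro \
  deluan/navidrome:latest
```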

xyst · 2 years ago
I personally like learning this way. Have a single server with at least 20 physical cores available. Use qemu to create VMs.

Personally, I had NixOS (minimal) installed on the VMs. I scripted the setup (a live CD created with my SSH key so I could remotely set up each VM, e.g. disk partitioning), then used a Nix configuration to set up the environment.

A bit of a learning curve, but the benefit here is repeatable environments. I was even able to learn more about k8s using a small mini cluster: the host machine was the control-plane node while the VMs were workers.

By injecting latency between nodes to simulate the distance between different data center regions (i.e., us-east vs us-west), I was actually able to reproduce some distributed app issues. All of this without having to give up $$$ to the major server resellers (or "cloud" providers).

No worries about forgetting to tear down the cluster and receiving a surprise bill at the end of the month.
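The latency injection can be done with `tc`/`netem` on the VM bridge, roughly as follows (the device name is illustrative; the real command needs root, and `DRYRUN=echo` just previews it):

```shell
add_latency() {
  # Add artificial delay on an interface to mimic WAN distance.
  ${DRYRUN:-} tc qdisc add dev "$1" root netem delay "$2"
}

DRYRUN=echo add_latency br0 35ms   # ballpark us-east <-> us-west delay
```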

eddd-ddde · 2 years ago
I always just use a systemd or OpenRC service to run my stuff, pretty much a shell script and voilà. As long as you know to set proper firewall rules and run things as non-root, you're pretty much ready.
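The pattern looks roughly like this minimal unit (names and paths are placeholders):

```ini
# /etc/systemd/system/myapp.service
[Unit]
Description=myapp (self-hosted)
After=network-online.target

[Service]
User=myapp
ExecStart=/home/myapp/run.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now myapp` and you're done.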
watermelon0 · 2 years ago
Not only does using Docker avoid dependency hell (which is IMO the biggest problem when running different software on a single machine), it has good sandboxing capabilities (out-of-the-box, no messing with systemd-nspawn) and UX that's miles ahead of systemd.
silverwind · 2 years ago
Definitely would recommend using docker, though. Can be just a bash script.

Running complex software with a lot of dependencies bare-metal is a recipe for disaster nowadays, unfortunately.

ktosobcy · 2 years ago
I love containers. I have a tiny RPi under the desk, run Debian on it, and everything is in containers. I don't have to deal with one piece of software requiring one version of a dependency and another piece requiring a different one. And if something crashes, it doesn't bring everything else down with it.

I have some space to toy with it, but for the purpose of running something utterly low maintenance, docker and containers are awesome.

ghnws · 2 years ago
"Containers are silly"

What a silly take

whalesalad · 2 years ago
umm, homelab is also the perfect place to play with docker, ansible, kube, etc. that's the whole point.

git and ansible are not mutually exclusive. containers are not silly.

you'll come around some day

alex_lav · 2 years ago
In my experience, people that disparage containerization are either forgetting or weren’t around to know about how awful bespoke and project specific build and deploy practices actually were. Not that docker is perfect, but the concept of containerization is so much better than the alternative it’s actually kind of insane.
alex_lav · 2 years ago
I’m curious how UI impacts software configuration and deployment? Are you using a UI to set your compiler flags or something?
BirAdam · 2 years ago
Configuration, deployment, and program initialization/start are all still "user interface", just a different part of it. Most software feels like that part was largely an afterthought and not part of the design process.
sp332 · 2 years ago
Why would you write "your own compose files" if you're not running docker?
BirAdam · 2 years ago
My point was, learn it first, then learn docker and Ansible and such.
apitman · 2 years ago
> At its core, Tipi is designed to be easy to use and accessible for everyone

> This guide will help you install Tipi on your server. Make sure your server is secured and you have followed basic security practices before installing Tipi. (e.g. firewall, ssh keys, root access, etc.)

I love to see efforts like this, please keep it up.

But expecting users to learn everything necessary to run a secure server is simply not going to achieve the stated goal of being accessible to everyone.

We need something like an app that you can install on your laptop from the Windows store, with a quick OAuth flow to handle all networking through a service like Cloudflare Tunnel, and automatic updates and backups of all apps.

AdrienPoupa · 2 years ago
I'd argue the easiest way to achieve this is to refrain from opening any ports, and using Tailscale to get remote access.
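Assuming Tailscale is installed on the server and your devices, the whole remote-access setup is essentially one command (`DRYRUN=echo` previews it so the sketch runs without Tailscale present):

```shell
tailscale_up() {
  # --ssh also enables Tailscale's built-in SSH server
  ${DRYRUN:-} tailscale up --ssh
}

DRYRUN=echo tailscale_up   # no inbound ports are ever opened
```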
elashri · 2 years ago
I doubt that, with the level of accessibility the GP suggests, that would be easy. It would be easy to have integrated firewall management that just exposes ports 443/80 for a reverse proxy and handles communication with docker networks. It could also help set up a VPN server and disallow access to the server except via approved clients.

Someone suggested Cosmos in the comments. I think that is the closest to what I am saying. However, I have been into self-hosting for a couple of years now, with development experience, so I would be biased. It would probably be different for the average person without deep knowledge.

apitman · 2 years ago
Tailscale is awesome, but requiring anyone you want to share data or apps with to install Tailscale leaves a lot of simple interactions off the table.

WhyNotHugo · 2 years ago
If someone doesn’t want to learn how to secure a server, then they shouldn’t be self hosting anyway.

Or, as a car analogy: if someone doesn’t want to learn how to drive safely, they shouldn’t be driving anyway.

mike_hearn · 2 years ago
I toyed with the idea of creating something like that a year or so ago. I have a company that makes a tool which simplifies desktop development a ton, and desktop development was previously the blocker for people trying to do this, which is why there are so many products that claim to target everyone but start with a Linux CLI.

So, can you make it brainless? Sure. Writing a nice desktop GUI that spins up a VM, logs in and administers it for you is easy. But ... who will buy it?

The problem is that self-hosting isn't something that seems to solve a problem faced by non-technical people. Why do they want it? Privacy is not workable, because beyond most people just not caring, it's turtles all the way down: the moment you outsource administration of the service your data is accessible to those people. Whether that's a natty GUI or a cloud provider or a SaaS, unless it's a machine physically in your home someone can get at your data. And with supply chain attacks not even that is truly private really.

Cost is clearly not viable. Companies give SaaS away for free. Even when they charge, other corps would rather pay Microsoft to store all their supposedly super-confidential internal docs, chats and emails than administer their own email servers.

Usability: no. Big SaaS operations can invest in the best UI designers, so the self-hostable stuff is often derivative or behind.

What's left?

Cyph0n · 2 years ago
I think this is useful for less tech-oriented people to get a basic homelab setup.

But I personally find it much more straightforward and maintainable to just use Compose. Virtually every service you would want to run has first-class support for Docker/Podman and Compose.

TechDebtDevin · 2 years ago
These services are cool, but I almost always end up doing it myself anyway. Doing it yourself is more fun.

Typically I just make my own one click deploys that fit my preferences. Not knowing how your container starts and runs is a recipe for disaster.

razerbeans · 2 years ago
Anyone have any experience using this? I've been managing most of my homelab infrastructure with a combination of saltstack and docker compose files and I'm curious how this would stack up.
eightysixfour · 2 years ago
I used to run it and generally liked it, but eventually felt limited in the things I could do. At the time it was a hassle to run non-tipi apps behind the traefik instance and eventually I wanted SSO.

I ended up in a similar place with proxmox, docker-compose, and portainer but I have it on my backlog to try a competitor, Cosmos, which says many of the things I want to hear. User auth, bring your own apps and docker configs, etc.

https://github.com/azukaar/Cosmos-Server/

pjerem · 2 years ago
I tried it for a few months and it was nice. But I think it lacks a way to configure mount points for the apps' storage.

By default, each app has its own storage folder, which isn't really a useful default for the home-lab use case: you probably want, idk, Syncthing, Nextcloud and Transmission to all be able to access the same folders.

It's doable, but you have to edit the YAML files yourself, which I thought removed most of the point of the project.

TheCleric · 2 years ago
Agreed. I struggled for days trying to get an external drive mounted into Nextcloud.
sanex · 2 years ago
Ran into the same problem with Umbrel. Wanted to use PhotoPrism for a gallery but Nextcloud and Syncthing to back up the photos. It was easier to just manage the containers myself.
filmgirlcw · 2 years ago
I've been evaluating it alongside Cosmos and Umbrel, in addition to tools I've used before like CapRover. I like it but I don't have any strong feelings yet. I will probably do some sort of writeup after I do more evaluations and tests and play with more things but I haven't had the time to dedicate to it.

If you're already familiar with setting things up Salt/Ansible/whatever and Docker compose, you might not need something like this -- especially if you're already using a dashboard like Dashy or whatever.

The biggest thing is that these types of tools make it a lot easier to set things up -- there are inherent security risks too if you don't know what you are doing, though I argue this is a great way to learn (and it isn't a guarantee that simply knowing how to use Salt or Ansible or another infrastructure as code tool will mean any of the stuff you deploy is any more secure) and a good entryway for people who don't want to do the Synology thing.

I like these sorts of projects because even though I can do most of the stuff manually (I'd obv. automate using Ansible or something), I often don't want to if I just want to play with stuff on a box and the app store nature of these things is often preferable to finding the right docker image (or modifying it) for something. I'm lazy and I like turnkey solutions.

sunshine-o · 2 years ago
> there are inherent security risks too if you don't know what you are doing

It's actually worse. Even if you know what you are doing, there is some amount of work and monitoring you need to do just to follow basic guidelines [0].

What we would actually need is a "self-hosting management" platform or tool that at least helps you manage basic security around what you run.

[0] https://cheatsheetseries.owasp.org/cheatsheets/Docker_Securi...

ziofill · 2 years ago
I’ve been using it for a few months on a Raspberry Pi 4. I installed (via runtipi) Pi-hole for DNS, Netdata for monitoring, and Tailscale for remote access. Works great for me and my family: I can stream videos from Jellyfin for my kids when we are on the go, and all the family devices use Pi-hole.

I tried to do things myself in the past but this is so much easier if you don’t have particular needs.

Cieric · 2 years ago
Does anyone have any recommendations on top of this? I personally run Portainer and would like more features, like grouping containers after creation and controlling container start order. I also have an issue where, if my VPN container is updated, it breaks all the containers that depend on it. Portainer handles a lot, but I need that little bit more so I have to look at the panel less. I'm not sure if this would work for me, since I build a lot of custom containers and this looks like it's better suited to purpose-built containers.
exabyte · 2 years ago
Umbrel, citadel, start9, MASH playbook.

Sorry, on mobile right now, but these are great alternative projects

angra_mainyu · 2 years ago
I find terraform + acme provider + docker provider (w/ ssh uri) to be the best combo.

All my images live on a private GitLab registry, and terraform provisions them.

Keeping my infra up-to-date is as simple as "terraform plan -out infra.plan && terraform apply infra.plan" (yes, I know I shouldn't blindly accept, but it's my home lab and I'll accept if I want to).

Note: SSH access is only allowed from my IP address, and I have a one-liner that updates the allowed IP address in my infra's L3 firewall.
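For the curious, that combo looks roughly like the following (this assumes the community Docker provider; the hostnames, registry address, and image tag are placeholders, and the ACME provider part is omitted):

```hcl
provider "docker" {
  host = "ssh://deploy@homelab.internal:22"   # Docker over SSH

  registry_auth {
    address  = "registry.gitlab.example.com"
    username = var.registry_user
    password = var.registry_token
  }
}

resource "docker_image" "app" {
  name = "registry.gitlab.example.com/me/app:1.4.2"
}

resource "docker_container" "app" {
  name  = "app"
  image = docker_image.app.image_id

  ports {
    internal = 8080
    external = 8080
  }
}
```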