Readit News
shrubble · 2 years ago
My contrarian take is that eventually k8s will be recognized as the overcomplication it is; and better methods of managing less than 10,000 VMs will be researched and used.
Aurornis · 2 years ago
People don’t use Kubernetes at home because it’s the easiest tool for the job.

They use it in home labs because it’s a safe and easy environment to learn, practice, and explore.

Most importantly: It’s a low-consequence environment. If you accidentally bring everything down at your homelab, the only person who suffers is you. You don’t get that degree of safety to explore and experiment when you’re using company resources.

snapplebobapple · 2 years ago
Incorrect, sir. I am way more concerned about my wife and kids as end users than I am about any employees.
JimBlackwood · 2 years ago
Isn’t the result of that research Kubernetes?

What do you expect the new solution to do better?

Genuine question, btw. While I see how Kubernetes can feel overcomplicated, that has always felt like a consequence of how complicated it is to run such a large number of workloads in a scalable and robust manner.

xav0989 · 2 years ago
I switched my home lab to nomad, which I find much easier to wrangle, but we’ll see what happens with the IBM acquisition.
xena · 2 years ago
I really hope that is the case too, but for now Kubernetes sucked all the oxygen out of the room for everything else :(
mmcnl · 2 years ago
Funny, I am a firm believer of the opposite: Kubernetes is the perfect level of abstraction for deploying applications.
jauntywundrkind · 2 years ago
This is an incredibly popular take, and this kind of anti-k8s sentiment is rapidly upvoted almost every time.

The systemd hate has cooled a bit, but it too functions as a sizable attractor for disdain & accusation hurling. Let's look at one of my favorite excerpts from the article, on systemd:

> Fleet was glorious. It was what made me decide to actually learn how to use systemd in earnest. Before I had just been a "bloat bad so systemd bad" pleb, but once I really dug into the inner workings I ended up really liking it. Everything being composable units that let you build up to what you want instead of having to be an expert in all the ways shell script messes with you is just such a better place to operate from. Not to mention being able to restart multiple units with the same command, define ulimits, and easily create "oneshot" jobs. If you're a "systemd hater", please actually give it a chance before you decry it as "complicated bad lol". Shit's complicated because life is complicated.

Shit's complicated because life is complicated. In both cases, having encompassing ways to compose connectivity has created a stable base (a starting point that scales up to expert/advanced use) that has allowed huge communities to bloom. Rather than every person being out there by ourselves, the same tools work well for all users, and the same tools are practiced with the same conventions.

Overarching scope is key to making commonality possible. You could walk up to my computer and run 'systemctl cat' on any service on it and quickly see how it was set up (especially on my computers, which make heavy use of environment variables where possible). Before, every distro (and to a sizable degree every single program) was launched & configured differently, requiring you to pick through init scripts to see how, or whether, the init script had been modified. But everything has a well-defined shape and form in systemd: a huge variety of capabilities for controlling launch characteristics, process isolation, ulimits, user/group privileges, and private tmp directories is all provided out of the box, in a way that means there's one man page to go to, and it's all instantly visible with every option detailed, so we don't have to go spelunking.
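As a concrete sketch of that well-defined shape (the unit name, paths, and user here are all hypothetical), a oneshot job with ulimits and process isolation declared in one place:

```ini
# /etc/systemd/system/backup.service (hypothetical example)
[Unit]
Description=Nightly backup job
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
User=backup
# Environment variables kept in one visible place
EnvironmentFile=/etc/backup/env
ExecStart=/usr/local/bin/backup.sh
# Launch controls that once required per-distro init-script hacks:
LimitNOFILE=4096
PrivateTmp=yes
ProtectSystem=strict
ReadWritePaths=/var/backups
```

Anyone can then inspect it with 'systemctl cat backup.service' and see every option at a glance.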

The Cloud Native paradigm that Kubernetes practices is a similar work of revelation, offering similarly batteries-included capabilities. Is it confusing having pods, replicasets, and services? Yes, perhaps at first. But it's unparalleled that one just POSTs the resources one wants to an API server and lets the system start & keep them running; this autonomic behavior is incredibly freeing, leaving control loops doing what humans have had to shepherd & maintain themselves for decades: a paradigm break that turns human intent directly into consistently running managed systems.

The many abstractions/resource types are warranted; they are separate composable pieces that allow so much. Need to serve on a second port? Easy: a new Service, since the Service is separate from the Deployment. Why are there so many different types? Because computers are complex, because this is a model of what really is. Maybe we can reshuffle to get different views, but most of that complexity will need to stay around, perhaps in refactored shapes.
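A minimal sketch of that composability, with made-up names and images: the Service is its own resource, selecting pods by label, so exposing another port is just another Service object POSTed to the API server.

```yaml
# Hypothetical example: Deployment and Service are separate,
# composable resources tied together only by labels.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
        - name: web
          image: nginx:1.27
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector: {app: web}
  ports:
    - port: 80
      targetPort: 8080
# A second Service targeting the same pods on another port
# would be just one more small object like the one above.
```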

And like systemd, Kubernetes with its desired-state management and operators creates a highly visible, highly explorable system; any practitioner can walk up to any cluster, start gleaning tons of information from it, and easily see it run.

It's a wrong-headed view to think that simpler is better. We should start with essential complexity & figure out simultaneously a) how to leverage it and b) how to cut direct paths through our complex, capable systems. We gain more by permitting and enabling than by pruning. We gain more by being capable of working at both big and small scales than we gain by winnowing down or down-scoping our use cases.

The proof is in the pudding. Today there are hundreds of guides one can go through in an hour to set up & get started running some services on k3s. Today there's a colossal community of homelab operators sharing helm charts & resources (ex: https://github.com/onedr0p/home-ops), the likes of which has vastly outclassed where we have stood before. Being afraid of & shying away from complexity is a natural response, but I want people to show that they see the many underlying simplicities & conceptions we have gotten from kube that do make things vastly simpler than the wild, untamed world we came from, where there weren't unified patterns of API servers & operators handling different resources, all alike & consistent. To conquer complexity you must understand it, and I think very few of those with a true view of Kubernetes complexity have the sense that there are massive opportunities for better, for simpler. To me the mission, the goal, the plan should be to better manage & better package Kubernetes, to better onboard & help humans through it, to walk people into what these abstractions are for & shine lights on how they all mirror real things computers need to be doing.

(Technical note: Kubernetes typically runs zero VMs; it runs containers, with notable exceptions being snap-in OCI runtimes like Firecracker and Kata, which do host pods as VMs. Containers are also far more optimizable: works like PuzzleFS and ComposeFS CSIs can snap in to allow vastly more memory- and storage-efficient filesystems to boot. So many wonderful pluggable/snappable layers; CNI for networking too.)

Yasuraka · 2 years ago
I once joined a project which had decided against Kubernetes years prior

For my entire stay there, half of the time was spent on reinventing the wheel, but worse.

There surely are lots of bloated and overly complex projects out there, but I'd say for what Kubernetes does, it's a very elegant solution to a very, very complex problem and not one of those.

PaulHoule · 2 years ago
No. Real suffering in the homelab is getting N 20-year-old servers and swapping parts between them to get N-M servers that work. I feel like the project will be successful if I get all the drives wiped, and I am within sight of that, although I discovered the wiping was going to be a process of triage: some drives did not spin up, and one drive took 14 hours to wipe when a normal drive would take about 30 minutes. My collaborator will use the bad drives for target practice with a black powder rifle that shoots round balls.

Noisy fans take the "home" out of the homelab; the machines are 64-bit Intel but top out at 4GB. The latest version of Ubuntu installs fine but the desktop struggles. I think I'm going to install desktop Ubuntu again just to see if I can watch YouTube with it, but the plan now is to install the server edition and give it to my collaborator to run an occasional Minecraft server, which might free up a (much more powerful) i3 machine to watch videos from my Jellyfin server on the TV downstairs, something the Xbox One oddly can't handle. (No patent licenses for codecs if it's a game console?)

At least I dug the old VGA-supporting monitors out of mothballs, so I'll be ready to play around with the RISC-V and eZ80 SBCs I have, which are, at the very least, a lot quieter.

shrubble · 2 years ago
That is retrocomputing, not homelabbing. Look into the sub-$200 Intel N100 based systems with 16GB RAM.
jjbinx007 · 2 years ago
I recently bought 2 of these and they are EXCELLENT! I can leave them running 24/7 without worrying about how much electricity they're using. The performance, flexibility, and reliability far exceed the Raspberry Pis that are confined to the cupboard, and they're probably a fair bit faster than my old desktop PCs that I rarely switch on any more.

I've gone uber-minimalist and only have NVMe drives attached via USB 3. One's connected via Ethernet, the other has a WiFi connection. Personally I don't need any more, and I've retired my old servers for now.

Aurornis · 2 years ago
I think we need a different word for collecting a lot of old, underpowered computers and tinkering endlessly.

I understand the attraction of playing with old, cheap hardware. However, hardware has come so far that it’s easy to build a 16-core server with a lightly used AMD consumer chip and 64-128GB of RAM for under $1000. It will have more power and use far less energy than these clusters of old machines that I see people assembling.

> Noisy fans take the "home" out of the homelab;

Again, a completely unnecessary thing to suffer if the goal is a homelab. It's really easy to make a near-silent PC with modern parts and cooling that will outperform an entire rack of 20-year-old PCs. Even quiet or fanless 10G switches are common.

I get it. It can be fun. But I don’t think this is homelabbing.

sunshine-o · 2 years ago
So in summary: a former NixOS user now uses a preconfigured OS dedicated to running one single platform (k8s) and still suffers from having to tinker with everything.

It would have been fun or wise if she had gone back to Nix in the end.

xena · 2 years ago
I still use Nix to build docker images, part of this is to see how bad the rest of the industry really is. It's slightly worse than I imagined it would be.
srid · 2 years ago
> a former NixOS user

Why did the author stop using NixOS? This is the first time I'm hearing about a veteran NixOS user giving up on it.

shepherdjerred · 2 years ago
It mostly seems due to some drama in the Nix community rather than any technical reason.

https://xeiaso.net/blog/2024/much-ado-about-nothing/

fallingsquirrel · 2 years ago
The CoreOS diversion was interesting to read. I've been daily driving CoreOS+i3 for the past year (I might be the only one in the world). I thought having a tiny immutable base OS would make the system easier to manage over time, but unfortunately that hasn't been the case. It's been an adventure but I'm ready to give up and switch to something more vanilla.
spicyusername · 2 years ago
I had the same experience.

Immutable OSes are great when you have a team of people to manage the tooling and process complexity of deploying and managing them, end users who only want to use the software running on those servers (not the servers themselves), and servers being brought up and down all the time; then you actually reap the benefits of standardization and infrastructure as code.

But when it's just you, you want to interact directly with the OS, and it's just one device, it's just foot guns all day.

rubenbe · 2 years ago
I've kept running Fedora CoreOS on my home server. My biggest issue with it is that it is very cloud-oriented and doesn't seem to allow rerunning the provisioning config on an already existing machine. This turns the thing back into a stateful pet instead of "one-cow cattle". Although I do very much like the rollback feature, which has allowed me to temporarily roll back an update a couple of times.
walterbell · 2 years ago
> doesn't seem to allow to rerun the provisioning config on an already existing machine

In theory, a generic existing machine could have been compromised by malware, in which case the configuration may not match the previously provisioned version.

With OS launch integrity to guarantee absence of tampering, and prove that current=expected config+binaries, it could be feasible to rerun provisioning config.

xena · 2 years ago
I've wondered if that would be possible for a while, but I didn't imagine anyone would actually do that. What are the upsides and downsides of doing this? I'd love to read a writeup of how you did that and what you'll miss when you move away.
fallingsquirrel · 2 years ago
I'm no blogger but here's a quick writeup.

# Setup

Setup was a process; no clicking through a nice UI for this one. I had to set up a web server on a second machine to serve the Ignition config to the primary machine.

It was a very manual process despite CoreOS's promise of automation. There were many issues like https://github.com/coreos/fedora-coreos-tracker/issues/155 where the things I wanted to configure were just not configurable. I had some well-rehearsed post-setup steps to rename the default user from "core" to my name, set keyboard layout, move system dirs to a set of manually-created btrfs subvolumes, etc.

# Usage

The desktop and GUI worked flawlessly. All I had to do was install i3 and lightdm via rpm-ostree. Zero issues, including light 2D gaming like Terraria.

Audio was a pain. My speakers were fine, and my mic worked out of the box in ALSA, but PipeWire didn't detect it for some reason, so I had to write some PipeWire config to add it manually. Also, I had to learn what ALSA and PipeWire are...

I ran just about everything, including GUI apps, in distrobox/Arch containers. This was very nice: Arch breaks itself during updates somewhat often, and when that happens I can just blow the container away, reinstall from pkglist.txt, and be back in 5 minutes. I get the benefits of Arch (super fast updates) without the downsides (upgrade brittleness). I plan on keeping distrobox even once I leave.

# Updates

I disabled Zincati (the unattended update service) and instead I ran `rpm-ostree upgrade` before my weekly reboots.

This is the reason I'm leaving. This was supposed to be the smoothest part of CoreOS, but those upgrades failed several times in the past year. To CoreOS's credit my system was never unbootable, but when the upgrades failed I had to do surgery using the unfamiliar rpm-ostree and its lower level ostree to get the system updating again. As of now it's broken yet again and I'm falling behind on updates. I could solve this, I've done it before! But I've had enough. I'm shuffling files to my NAS right now and preparing to hop distros. If anyone wants to try to sell me on NixOS, now's the time ;)
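For anyone following along, the upgrade-and-recover loop described above roughly maps to these rpm-ostree commands (a sketch; the actual surgery on a broken deployment will vary):

```shell
rpm-ostree status     # show the booted and pending deployments
rpm-ostree upgrade    # stage the new tree; it takes effect on reboot
rpm-ostree rollback   # point the bootloader back at the previous deployment
```

The rollback command is why a failed upgrade never left the system unbootable: the previous deployment stays on disk until it's garbage-collected.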

chadsix · 2 years ago
This was a tremendous write-up. I appreciate the detail including your ingressd setup. I agree, though, that this is a pain. It's for this reason that we made Cloud Seeder [1] so you can have hands-free setup of your homelab and IPv6rs for painless ingress [2] </shameless>

[1] https://github.com/ipv6rslimited/cloudseeder

[2] https://ipv6.rs

quectophoton · 2 years ago
I didn't know about IPv6rs, but thanks for supporting raw WireGuard configs[1]! Being able to use just WireGuard without having to install any additional daemons or wrappers is always appreciated.

If only I had known about this service before AWS launched its eu-south-2 region (Spain), I would have seriously considered it. But unfortunately I already have my stuff all setup and working, tunneling through an EC2 instance.

Still, bookmarked. I won't say I'll be using it in the future, but I'll definitely keep it in mind for the next time I need to change/update my homeserver stuff.

[1]: https://ipv6.rs/raw

xena · 2 years ago
I'd set up IPv6 on there, but the problem is that I use Flannel (which is IPv4 only) and my 8 gigabit fiber ISP only gives me IPv4 connectivity. I'll look into more details, but I've slightly given up on IPv6 for now. Maybe I'll set up Calico or something, but IPv6 seems to have been made artificially difficult by everything in the stack. I hate it.
chadsix · 2 years ago
> 8 gigabit fiber ISP only gives me IPv4 connectivity.

Just like my ISP (but not 8 gbit!). Luckily, IPv6rs actually tunnels through IPv4 (or 6) and provides an IPv6 address. You don't need one to start!

I don't know the ins and outs of Flannel, but maybe you could set up IPv4 internally and use IPv6 (with an IPv4 reverse proxy) for the public internet?

I agree, though: IPv6 on its own can be hard, but thanks to WireGuard, Tayga (NAT64), and nginx/Caddy/etc. (reverse proxy), it's definitely quite usable!

miyuru · 2 years ago
Can you add AAAA records to cdn.xeiaso.net as well?

It seems to have the same IPv4 from fly.io as the main domain, but you forgot to add it to the CDN subdomain.

sgarland · 2 years ago
> What's not fine is how you prevent Ansible from running the same command over and over. You need to make a folder full of empty semaphore files that get touched when the command runs...

> One of my patrons pointed out that I need to use Ansible conditionals in order to prevent these same commands from running over and over.

Yes-ish. As I'm pretty sure OP figured out from the pre-made roles comment, there exists a `community.general.dnf_config_manager` module that would handle this specific issue. As a general rule in Ansible, as ansible-lint [0] will tell you, if you're using `ansible.builtin.{command, shell}`, there's a decent chance you're Doing It Wrong (TM).

The biggest problem I have with Ansible (and I say this as someone who uses it extensively with Packer to build VM templates for Proxmox in my homelab) is that, like its underlying Python, there are a dozen ways to do anything. For example, if you wanted to perform a set of tasks depending on arbitrary conditions, you could:

0. Use `when` and rely on things like named host groups

1. Use `when` and rely on manually handling state throughout plays

2. Use handlers

3. Break down the tasks into logically-scoped roles, and manually call roles

4. Do #3, but rely on Ansible's role dependencies instead

5. Use semaphore files / `creates` as OP did

6. Probably something else

[0]: https://github.com/ansible/ansible-lint
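As a sketch of the difference between option 5 and the module route (the repo name and touch-file path are made up; `dnf_config_manager` ships in recent community.general releases):

```yaml
# Semaphore-file pattern: run a raw shell command, touch a guard file,
# and skip the whole task on later runs via `creates`
- name: Enable the CRB repo (shell + touch-file guard)
  ansible.builtin.shell:
    cmd: dnf config-manager --set-enabled crb && touch /var/lib/ansible/crb-enabled
    creates: /var/lib/ansible/crb-enabled

# The idempotent alternative: the module checks actual state on every run
- name: Enable the CRB repo
  community.general.dnf_config_manager:
    name: crb
    state: enabled
```

The module version also reports changed/ok correctly, which the shell version only approximates.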

notabee · 2 years ago
It's true, the flexibility can be both a boon and a curse. There should be a little more "best practices" info out there that's not too prescriptive. It doesn't help that the pre-made roles on Ansible Galaxy vary widely in style and quality. Certainly no one wants to inherit Ansible code that's nothing but shell and command modules, but sometimes those are crucial gap fillers when an idempotent module isn't available for the task or is missing needed functionality. And even then, specialized (as opposed to general-use) modules are only idempotent within themselves; you still sometimes need to check and pass the state of things between tasks, sticking it in a registered variable combined with conditionals when multiple tasks depend on each other or require a specific ordering.

I think a good generalized "best practice" is to keep those inter-task dependencies and conditionals to a minimum, though. Small chunks, or no chunks at all. It's always better to find a way to run tasks independently with no knowledge of each other. That said, the block module with "rescue" is useful for failing out a host gracefully when there's a bundle of finicky interdependent tasks that just have to run together.
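A sketch of that block/rescue pattern (the task names and scripts are hypothetical):

```yaml
- name: Finicky interdependent tasks that must run together
  block:
    - name: Drain the service
      ansible.builtin.command: /usr/local/bin/drain.sh
    - name: Apply the migration
      ansible.builtin.command: /usr/local/bin/migrate.sh
  rescue:
    - name: Fail this host gracefully; the rest of the batch continues
      ansible.builtin.meta: end_host
```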

walterbell · 2 years ago
> I ran a poll on Mastodon to see what people wanted me to do. The results were overwhelmingly in favor of Rocky Linux. As an online "content creator", who am I to not give the people what they want?

Truman would be proud!

> the NAS. It has all of our media and backups on it. It runs Plex and a few other services, mostly managed by docker compose.

Does the NAS run NixOS?

xena · 2 years ago
Yes, it runs NixOS and I am too cowardly to bother changing that any time soon. It's got everything on a giant ZFS array and most distros have poor ZFS support.
deadbunny · 2 years ago
> and most distros have poor ZFS support.

This is the complete opposite of my experience. For most "server"-style distros (i.e. not Arch/Arch derivatives) you just install the ZFS modules and forget about it. Ubuntu even has them pre-baked into its kernel.

Arch gets complicated because it's a rolling release, so the kernel versions supported by the zfs module get out of sync with the latest kernel, which can prevent system updates due to irresolvable requirements. That can go on for days/weeks as they play catch-up with each other, but you can just install the LTS kernel and mostly avoid the issue.

For (mostly) every other distro, releases are a lot more coordinated, so the zfs module will work with the newer kernel without any issues.

Other than that, I can't think of what poor support even means; once it's installed, it works. You can even have ZFS on root on most distros.
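For what it's worth, "just install the zfs modules" typically amounts to one package manager command; package names below are the usual ones, but double-check for your release:

```shell
# Ubuntu: the kernel module ships in-tree; this adds the userland tools
sudo apt install zfsutils-linux

# Debian: the module is built via DKMS from the contrib repo
sudo apt install zfs-dkms zfsutils-linux
```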

wint3rmute · 2 years ago
After running NixOS for 6+ months on my homelab and also re-using part of the configuration on my work machine, I feel the same way as Xe each time I'm interacting with a non-declarative OS. There's just no simple way to share configuration between machines or to automagically clean things up after making changes.

Ansible feels like a thin layer of ice over a deep ocean of OS state hiding in a multitude of untracked configuration files. It is simply not enough to build a layer of YAML around an OS that is imperative by nature.

Unfortunately, I can see the downsides of NixOS as well. Being radically different from what we usually expect of a Linux distribution, adopting it in an already established environment will no doubt be hard. The steep learning curve, the idiosyncrasies of the Nix language (although after reading parts of the Nix thesis [1], I find it much more understandable and deeply thought out), explaining Nix to people who don't have much experience with the functional way of doing things, let alone taking the functional approach all the way to defining an entire operating system: all of this sounds like a tough barrier to cross.

And yet, the desire to keep things reproducible and declarative (not to mention going back in time) persists once you've had the taste of NixOS.

[1] https://edolstra.github.io/pubs/phd-thesis.pdf

jt2190 · 2 years ago
I’m picking this nit:

> When is a build reproducible?

> “A build is reproducible if given the same source code, build environment and build instructions, any party can recreate bit-by-bit identical copies of all specified artifacts.”

> Neither Nix or NixOS gives you these guarantees.

This really makes me question whether all the quirkiness of Nix is worth it if it can't actually "pay off" with true reproducibility.

[1] “NixOS is not reproducible” (2022): https://linderud.dev/blog/nixos-is-not-reproducible/

[2] “non reproducible issues in NixOS” https://github.com/orgs/NixOS/projects/30

Cyph0n · 2 years ago
Nonetheless, Nix/NixOS is more reproducible than the majority of other build systems and distros out of the box. But yes, if this is a hard requirement, you’ll be better off with a different choice.

Keep in mind that this is but one of the features NixOS provides. I would say the config-driven approach to OS management is extremely powerful.

As an example, I could bring up my homelab’s external reverse proxy on a generic VPS in a few minutes over SSH using a single command. This includes SSH keys, Telegraf, Nginx with LetsEncrypt certs, and automatic OS upgrades. No Ansible needed :)

See: https://github.com/nix-community/nixos-anywhere
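The single-command install looks roughly like this (the flake attribute and address are placeholders; see the nixos-anywhere README for exact flags):

```shell
# Install the flake's NixOS config onto a generic VPS over SSH,
# partitioning and formatting the target disk in the process.
nix run github:nix-community/nixos-anywhere -- \
  --flake .#reverse-proxy root@203.0.113.10
```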

Yotsugi · 2 years ago
It isn't worth it. If you care about freedom and configurability, Gentoo exists.

>reproducibility

I'd like to see people reproduce software that embeds a build timestamp into the binary.

walterbell · 2 years ago
Does Guix offer guarantees of build reproducibility?