Readit News
erulabs · a year ago
Well I think it's neat. The bit I find most provoking is the "if you already have Kubernetes..." premise. I find myself having a hard time not wanting to shove everything into the Kubernetes framework simply to avoid having to document what solutions I've chosen. `kubectl get all` gives me an overview of a project in a way that is impossible if every single project uses a different or bespoke management system.

"simple/complex" is not the right paradigm. The real SRE controversy is "unique/standard". Yes, the standard approach is more complex. But it is better _in practice_ to have a single approach, rather than many individually-simpler, but in-aggregate-more-complex approaches.

Kubernetes is never the perfect solution to an engineering problem, but it is almost always the most pragmatic solution to a business problem for a business with many such problems.

akdor1154 · a year ago
Yeah k8s is great. It gives you an infinite rope generator to let you hang yourself with ever increasing complexity, but with a bit of restraint you can orchestrate pretty much anything in a simple or at least standard way.

I'd take a stack of yaml over a stack of bespoke aws managed service scripts any day.
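For a sense of scale, a typical service in that stack is a couple of short manifests. A minimal sketch (names, image, and ports are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels: {app: myapp}
  template:
    metadata:
      labels: {app: myapp}
    spec:
      containers:
        - name: myapp
          image: ghcr.io/example/myapp:latest   # illustrative
          ports: [{containerPort: 8080}]
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector: {app: myapp}
  ports: [{port: 80, targetPort: 8080}]
```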

nopurpose · a year ago
Speaking of rope. Right this moment GKE clusters can't provision large volumes (~4TiB) because their CSI driver gets OOMKilled when formatting the volume. The problem was reported back in April and still isn't fixed.
ownagefool · a year ago
This is a great response and pretty much sums it up.

With k8s you can easily install a bunch of helm charts and get a bespoke platform that becomes a full time role for multiple people.

There are pros and cons to this approach, but if you're worried k8s is complex, just use the cloud-native integrations.

cqqxo4zV46cp · a year ago
Not a day goes by where there isn’t some greybeard having a huge sulk about how nobody wants to use his collection of esoteric bash scripts that nobody else will ever understand, but HE does.
Spivak · a year ago
Is the mountain of k8s code on top of your cloud of choice not strictly more complex than the cloud services it's orchestrating? Like I think k8s is good software but to hear that you're not choosing it because it's good engineering is surprising. To me that's the only reason you choose it, because you want or need the control-loop based infrastructure management that you can't get on say AWS alone.
zipmapfoldright · a year ago
That's the same with... the Linux kernel? I'd wager that most services people write have less complexity than the kernel they run on. We choose to do that not because we need its full feature set, but because it gives us useful abstractions.
marcinzm · a year ago
There’s always a mountain of code on top of the cloud provider.
politelemon · a year ago
> But it is better _in practice_ to have a single approach, rather than many individually-simpler, but in-aggregate-more-complex approaches.

Very much depends on the point of view. It's great from an SRE point of view but not necessarily for applications/developers, who are being constrained to a platform's idiosyncrasies and expressions of platform egos.

The individually simpler solutions are only complex from a high level, ivory tower, or middle management perspective, not from the perspective of people who have to use and manage the application itself.

ownagefool · a year ago
I think just running a process and exposing a port is fine, but the second you get into running a bunch of services together, or caring about environments, the k8s abstraction is simpler.

In the last 6 months my job has been to get production ready vault instances on azure. There's a bunch of complex, unreliable and not very fun APIs here.

Much like AWS, there isn't really a StatefulSet (née PetSet) abstraction. So you need to write a bunch of bespoke logic to figure out which IP addresses, names, IDs, and disks to attach to a new VM.
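For context, this is roughly what that abstraction hands you for free on k8s: stable per-replica names and a disk that follows each replica around. A minimal sketch (names, image tag, and sizes are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: vault              # illustrative name
spec:
  serviceName: vault       # pods get stable DNS identities: vault-0, vault-1, ...
  replicas: 3
  selector:
    matchLabels:
      app: vault
  template:
    metadata:
      labels:
        app: vault
    spec:
      containers:
        - name: vault
          image: hashicorp/vault:1.17   # illustrative tag
  volumeClaimTemplates:    # one persistent disk per replica, reattached on reschedule
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```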

Whilst iterating, the Azure APIs are eventually consistent, which causes all sorts of niggly problems. Resources that get created don't get added to TF when there's a failure.

I create a new vault on a PR and it takes 20 minutes to deploy.

The problem took a couple of months, and I wrote a bunch of code.

On k8s I can just deploy the helm chart in 2 minutes. The abstractions are cleaner, it's more reliable, and way more fun.

At the end of the Azure project, the team agreed to do an AKS PoC, where we gave the task to the junior on the team, and the entire thing was done in a week.

If you're not doing this type of work, maybe you don't need k8s. But if you're not doing ephemeral test environments, do you actually not see them as a positive, or is it an effort thing? Because it takes me no real effort.

thesnide · a year ago
If you only have a hammer, everything looks like a nail.

Then, it mostly works, so management makes "economies of scale" and removes all the screwdrivers, since they need more skills anyway.

Now, you are only using nails.

Until someone else rediscovers screws and starts another hype cycle.

TeMPOraL · a year ago
Except in manufacturing, this actually works. Economies of scale are real and trump pretty much every other consideration, and they're what enable pretty much everything we have around us that isn't biological.
paulddraper · a year ago
The bit I find the most provoking is calling k8s "standard."

And not a collection of strung-together controllers.

windlep · a year ago
I've been self-hosting a lot of things on a home kubernetes cluster lately, though via gitops using flux (apparently this genre is now home-ops?). I was kind of expecting this article to be along those lines, using the fairly popular gitops starting template cluster-template: https://github.com/onedr0p/cluster-template

I set one of these up on a few cheap odroid-h4's, and have quite enjoyed having a fairly automated (though quite complex of course) setup, that has centralized logging, metrics, dashboards, backup, etc. by copying/adapting other people's setups.

Instead of weechat, I went with a nice web based irc client (the lounge) to replace my irccloud subscription. kubesearch makes it easy to find other people's configs to learn from (https://kubesearch.dev/hr/ghcr.io-bjw-s-helm-app-template-th...).

gclawes · a year ago
I really wish The Lounge supported something like a PostgreSQL/MySQL backend. Having to keep state in files on a persistent volume is a pain for any app; it's so much nicer when I can just connect to a DB _elsewhere_. The *arr media apps recently added support for PostgreSQL.
windlep · a year ago
Definitely. While I have volsync backing it up, and the PV is replicated for local availability... it's still annoying.
zipmapfoldright · a year ago
TIL about Talos (https://github.com/siderolabs/talos, via your github/onedr0p/cluster-template link). I'd previously been running a k3s cluster on a mixture of x86 and ARM (RPi) nodes, and frankly it was a bit of a PITA to maintain.
johntash · a year ago
Talos is great. I'd recommend using Omni (from the same people) to manage Talos. I was surprised how easy it was to add new machines with full disk encryption managed by remote keys.
nyolfen · a year ago
cannot praise talos highly enough, it makes so much annoying crap easy
AdamJacobMuller · a year ago
Kubevirt is great, but, I'm not sure why you wouldn't just run weechat inside a container.

There's nothing so special about weechat that it wouldn't work and you can just exec into the container and attach to tmux.

Running tmux/screen inside a container is definitely cursed, but, it works surprisingly well.

xena · a year ago
Author of the article here. One of the main reasons is that I want to update weechat without rebuilding the container, taking advantage of weechat's soft upgrade method: https://weechat.org/files/doc/weechat/stable/weechat_user.en...

And at that point, it may as well just be a normal Ubuntu server.

AdamJacobMuller · a year ago
Ah, I didn't know weechat could do that, but I remember that from my irssi days.

I would personally consider that a bit of an anti-pattern (I would always want to have software upgrades tied to the container) but that makes sense!

tw04 · a year ago
So why not just make it a VM and skip k8s altogether?
kristianpaul · a year ago
Why not use Persistent Volumes and Nix container?
okasaki · a year ago
You can do that in a container, no need for a VM.

    docker run -itd ubuntu:24.04
    ...
    docker exec -it df36 /bin/bash
    ...
    root@df365a3d2257:/# apt update
    ...
    root@df365a3d2257:/# apt upgrade
    ...

dijit · a year ago
I can't imagine a worse combination than Kubernetes and stateful connections.
Joker_vD · a year ago
It only hurts when you actually have meaningful load and then suddenly need to switch. Especially if the "servlets" that those stateful connections are connected to require some heavy-ish work on startup, so you're vulnerable to the "thundering herd" scenario.

But the author only uses it to keep alive a couple of IRC connections (which don't send you history or anything on re-connects) and to automatically backup their "huge" chat logs (seriously, 5 GiB is not huge, and if it's text then it can be compressed down to about 2 GiB — unless it's already compressed?).
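On the compression point: plain-text chat logs are highly redundant, so even stock gzip gets well below half the original size. A rough sketch, using synthetic IRC-style lines as a stand-in for real logs:

```python
import gzip

# Synthetic stand-in for IRC logs: timestamped lines with nicks and text.
lines = [f"12:{i % 60:02d} <user{i % 37}> message text number {i}\n" for i in range(10_000)]
raw = "".join(lines).encode("utf-8")

compressed = gzip.compress(raw)
ratio = len(compressed) / len(raw)
print(f"{len(raw)} -> {len(compressed)} bytes (ratio {ratio:.2f})")
```

Real logs won't compress quite as well as this repetitive sample, but 5 GiB down to ~2 GiB is if anything a conservative estimate for text.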

dilyevsky · a year ago
You don't have to roll all the pods at the same time; there are built-in controls to avoid doing that, and it's the default. You'd have to DIY this if you're using something else, so, in fact, the parent is wrong that k8s is somehow a bad fit for this use case.
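Those controls are a few lines on the Deployment spec. A sketch (values are illustrative; `maxUnavailable` and `maxSurge` are the actual API fields):

```yaml
# Fragment of a Deployment spec: replace pods one at a time,
# never dropping below the desired replica count.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # don't kill a pod until its replacement is Ready
      maxSurge: 1         # add at most one extra pod during the roll
```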
johntash · a year ago
It's only a problem if your nodes go up/down often, or you have other things causing pods to be pre-empted/etc.

If you have a static number of nodes and don't have to worry too much about things autoscaling, I don't see why it couldn't be really stable?

dijit · a year ago
You don’t?

Check out how services, load balancers and the majority of CNI actually work then.

Kubernetes was designed for stateless connections and it shows in many places.

If you want it to do stateful connections you could use something like Agones which intentionally bypasses a huge amount of kubernetes to use it only as a scheduler essentially.

Tiberium · a year ago
Not a single mention of Quassel in the article or in the comments, which is honestly surprising. It's a client-server architecture IRC client specifically made to make it easy to save logs and persist IRC sessions, since you can host the server part on an actual server/VPS and connect to it from all of your different devices.
tredre3 · a year ago
Weechat can also be used in a client/server architecture. It can run headless and expose a relay protocol (full weechat control and state) and/or an irc server (traditional bouncer).

Though, ironically, there are no CLI clients for its relay protocol, only for desktop/web/android.

xena · a year ago
I'd use Quassel, but I have custom weechat scripts. I can't easily implement those in Quassel.
Tiberium · a year ago
Fair enough, it's just that Quassel immediately came to mind when talking about persistent IRC logs/sessions :)
paulddraper · a year ago

  curl 'https://news.ycombinator.com/item?id=41332427' | grep -i just
(16)

bravetraveler · a year ago
This is funny, but six of them are in one comment and another four are in two replies :P

Need tables or something

jeanlucas · a year ago
Is IRC still a thing? I mean seriously, do communities hang around there? I stopped using it in 2016.
torvald · a year ago
Yes and yes.
muzani · a year ago
Older social media tends to rot as they age, and there's always a few people who don't leave. Nearly all the IRC communities I used to hang around in have gone completely rotten. But they're still there.
gaws · a year ago
> Is IRC still a thing?

Yes.

> I mean seriously, do communities hang around there?

Yes. They've been around for years.

__turbobrew__ · a year ago
Is your homelab geographically distributed? Because if it is not then you aren’t going to get much better durability than a single host. I bet this was an interesting project but just backing up your files to S3 or some other offsite storage is a lot simpler and much more durable to real failures.
wilted-iris · a year ago
Yep, mine is and I'm sure some others' are as well. Truly overkill but it's a fun hobby project.
ninkendo · a year ago
I’m trying to resist the urge to move all my homelab setup to kubernetes too, mainly also because I don’t want to have to remember every dumb thing I did to customize my server and want everything to be deployable from a git repo, in case I need to rebuild it, etc.

But I’ve had the same Linux box for over 15 years now (through various hardware changes, I’ve kept the same Ubuntu install since circa 2008, using do-release-upgrade for major updates and making sure to keep up with security updates) and I’m not sure it’s really worth it to optimize for easy recovery from a fresh install. I back up my homedir and etc and some other important data, and if I had to rebuild my OS I’m sure I’d do it far cleaner this time and wouldn’t want to fully do things the way I had them anyway.

Even as a hobby it just doesn’t seem worth it to move away from the “pets, not cattle” model of servers when there’s just one or two of them (technically my router is separate but it’s a very simple router so there’s not much to do there)

johntash · a year ago
What do you use for distributed storage (if anything)? Storage has always been my biggest headache when trying to make geographically-redundant clusters of any kind.
__turbobrew__ · a year ago
Awesome