After some work with Kubernetes, I really must say: Helm is a complexity hell. I'm sure it has many features, but many of them aren't needed, yet they increase the complexity nonetheless.
Also, please fix the "default" helm chart template, it's a nightmare of options and values no beginner understands. Make it basic and simple.
Nowadays I would very much prefer to just use Terraform for Kubernetes deployments, especially if you use Terraform anyway!
Helm is my example of where DevOps lost its way. The insanity of multiple tiers of templating layered on a language scoped by invisible characters... it blows my mind that so many of us just deal with it.
Nowadays I'm using CUE in front of TF & k8s, in part because I have workloads that need a bit of both and share config. I emit tf.json and YAML as needed from a single source of truth.
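For anyone who hasn't seen CUE: the single-source-of-truth idea looks roughly like this (hypothetical names, just a sketch):

```cue
package config

// One typed definition; constraints are checked at export time.
app: {
	name:     "web"
	image:    "nginx:1.27"
	replicas: *2 | int
}
```

From there, `cue export --out json` can produce a `*.tf.json` file for Terraform, and `cue export --out yaml` the Kubernetes manifests, so both sides are derived from the same validated values.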
The problem with Kubernetes, Docker and anything CNCF related is what happens when everyone and their dog tries to make a business out of an OS capability with venture capital.
I've been trying to apply CUE to my work, but the tooling just isn't there for much of what I need yet. It also seems really short-sighted that it is implemented in Go which is notoriously bad for embedding.
I don't think I've ever seen a Helm template that didn't invoke nightmares. Probably the biggest reason I moved away from Kubernetes in the first place.
We have several Helm charts we've written at my job and they are very pleasant to use. They are just normal k8s templates with a couple of values parameterized, and they work great. The ones people put out for public consumption are very complex, but it isn't like Helm charts have to be that complex.
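A chart in that "couple of values parameterized" style can stay very readable — a hypothetical sketch, not any particular published chart:

```yaml
# templates/deployment.yaml -- only replica count and image tag are templated
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-web
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-web
    spec:
      containers:
        - name: web
          image: "myapp:{{ .Values.image.tag }}"
```

with a values.yaml of just:

```yaml
replicaCount: 2
image:
  tag: "1.4.0"
```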
Infrastructure as code should, from the beginning, have been built on a strictly typed language with solid dependency and packaging contracts.
I know that there are solutions like CDK and SST that attempt this, but because the underlying mechanisms are not native to those solutions, it's simply not enough, and the resulting interfaces are still way too brittle and complex.
I mean, Terraform provides this, but using it doesn't give a whole lot of value, at least IME. I enforce types, but often an upstream provider implementation will break that convention. It's rarely the fault of the IaC tool itself and usually the fault of the upstream service when things get annoying.
I don't think I want to use kubernetes (or anything that uses it) again. Nightmare of broken glass. Back in the day Docker Compose gave me 95% of what I wanted and the complexity was basically one file with few surprises.
If you can confidently get it done with docker-compose, you shouldn't even think about using k8s IMO. Completely different scales.
K8s isn't for running containers, it's for implementing complex distributed systems: tenancy/isolation and dynamic scaling and no-downtime service models.
I only wish Terraform were more recognized by upstream projects, like Postgres, Tailscale, or ingress operators.
A one-time migration from kubectl YAML or Helm to Terraform is doable - but syncing upstream updates is a chore.
If Terraform (or another rich format) were popular as a source of truth, then perhaps Helm and kubectl YAML could be built from a Terraform definition, with benefits like variable documentation, validation, etc.
I've embraced kustomize and I like it. It's simple enough and powerful enough for my needs. A bit verbose to type out all the manifests, but I can live with it.
This is what I've done too. Just enough features easily available to handle everything i've ever needed in the simple deployments I use. Secrets, A/B configuration, even "dynamic reload" of a Deployment for Configmap changes.
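For reference, the "dynamic reload" trick mentioned above is usually done with kustomize's configMapGenerator — a minimal sketch with hypothetical file names:

```yaml
# kustomization.yaml -- the generated ConfigMap gets a content-hash suffix,
# so changing app.properties changes the ConfigMap's name, which in turn
# updates the Deployment's reference and triggers a rollout
resources:
  - deployment.yaml
  - service.yaml
configMapGenerator:
  - name: app-config
    files:
      - app.properties
```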
Incidentally, Terraform is the only way I want to use Helm at all. Although the Terraform provider for Helm is quite cumbersome to use when you need to set values.
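The cumbersome part is that each overridden value tends to become its own block — a hedged sketch with a hypothetical chart version:

```hcl
resource "helm_release" "ingress" {
  name       = "ingress-nginx"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"
  version    = "4.11.2"

  set {
    name  = "controller.replicaCount"
    value = "2"
  }

  set {
    name  = "controller.metrics.enabled"
    value = "true"
  }
}
```

Passing a whole values file instead (e.g. `values = [file("values.yaml")]`) is usually less painful than stacking `set` blocks.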
Helm is sort of like Docker (or maybe Docker Compose) for k8s, in the sense that a Helm chart is a prepackaged k8s "application" that you can ship to your cluster. It got very popular very quickly because of the ease of use, and I think that was premature, which affects its day-to-day usability.
It's a client-side preprocessor essentially. The K8s cluster knows nothing about Helm as it just receives perfectly normal YAMLs generated by Helm on the client.
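You can make the client-side nature visible yourself — render locally and apply the plain YAML, e.g. (hypothetical chart and values file names):

```shell
# The cluster only ever sees the resulting plain YAML
helm template my-release ./mychart --values prod-values.yaml > rendered.yaml
kubectl apply -f rendered.yaml
```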
Helm is truly a fractal of design pain. Even the description as a "package manager" is a verifiable lie - it's a config management tool at best.
Any tool that encourages templating on top of YAML, in a way that prevents the use of tools like yamllint on them, is a bad tool. Ansible learned this lesson much earlier and changed syntax of playbooks so that their YAML passes lint.
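The Ansible change referred to here was moving module arguments from inline `key=value` strings to structured YAML — roughly:

```yaml
# Old inline style: the arguments are one opaque string that
# yamllint (and YAML itself) cannot see into
- name: install httpd
  yum: name=httpd state=present

# Newer pure-YAML style: lintable, structured data
- name: install httpd
  yum:
    name: httpd
    state: present
```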
Additionally, K8s core developers don't like it and keep inventing things like Kustomize and similar that have better designs.
Seriously. I’ve lost at least 100 hours of my life debugging whitespace in templated yaml. I shudder to think about the total engineering time wasted since yaml’s invention.
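For anyone who hasn't had the pleasure: the usual whitespace knobs look like this (a sketch assuming a hypothetical `mychart.labels` helper):

```yaml
# '{{-' trims whitespace before the tag; nindent re-indents the rendered
# block. Getting either wrong shifts the YAML indentation, which silently
# changes the meaning of the document.
metadata:
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
  annotations:
    {{- toYaml .Values.annotations | nindent 4 }}
```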
Helm, and a lot of devops tooling, is fundamentally broken.
The core problem is that it is a templating language and not a full programming language, or at least a proper DSL.
This leads us to the mess we are in today. Here is a fun experiment: Go open 10 helm charts, and compare the differences between them. You will find they have the same copy-paste bullshit everywhere.
Helm simply does not provide powerful enough tools to develop proper abstractions. This leads to massive sprawl when defining our infrastructure. This leads to the DevOps nightmare we have all found ourselves in.
I have developed complex systems in Pulumi and other CDKs: 99% of the text just GOES AWAY and everything is way more legible.
You are not going to create a robust solution with a weak templating language. You are just going to create more and more sprawl.
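The abstraction point can be sketched in plain Python (hypothetical names, not Pulumi's actual API): once manifests are values in a real language, a Deployment becomes a reusable function instead of copy-pasted YAML.

```python
import json

def deployment(name, image, replicas=1, port=80):
    """Return a minimal Deployment manifest as a plain dict."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": name}},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [
                        {"name": name, "image": image,
                         "ports": [{"containerPort": port}]}
                    ]
                },
            },
        },
    }

# Two services, one abstraction -- no copy-paste between them.
manifests = [
    deployment("web", "nginx:1.27"),
    deployment("api", "ghcr.io/acme/api:2.1", replicas=3, port=8080),
]
print(json.dumps(manifests, indent=2))
```

Everything that would be boilerplate in a chart lives once, in the function; callers state only what actually differs.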
Maybe the answer is a CDK that outputs helm charts.
So many people complaining about Helm but I'll share my 2 experiences. At my last 2 companies we shipped Helm charts for administrators to easily deploy our stuff.
It worked fine and was simple enough which is what the goal was. But then people came along wanting all sorts of customisations to make the chart configurable to work in their environments. The charts ended up getting pretty unwieldy.
Helm is a product that serves users who like customization to the nth-degree. But everyone else hates it.
Personally, I would prefer it if the 'power users' just got used to forking and maintaining their own charts with all the tweaks they want. The reason they don't do that of course is that it's harder to keep up with updates - maybe that's the problem that needs solving.
I recently learned about Helmfile's support for deep declarative patching of rendered charts, without requiring full forks with value-template-wiring. It's been a gamechanger!
In your context, it might help certain clients. It does require that the upstream commit to not changing its architecture, but if the upstream is primarily bumping versions and adding backwards-compatible features, and if you document all the patches you're recommending in the wild, it might be an effective tool.
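A hedged sketch of what that patching looks like (hypothetical release and version; field names per Helmfile's advanced features):

```yaml
# helmfile.yaml -- patch the rendered upstream chart without forking it
releases:
  - name: ingress-nginx
    chart: ingress-nginx/ingress-nginx
    version: 4.11.2
    strategicMergePatches:
      - apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: ingress-nginx-controller
        spec:
          template:
            spec:
              priorityClassName: system-cluster-critical
```

The patch is applied to the chart's rendered output, so no value has to be wired through the chart's templates first.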
Helm shines when you’re consuming vendor charts (nginx-ingress, cert-manager, Prometheus stack). It’s basically a package manager for k8s. Add a repo, pin a version, set values, and upgrade/rollback as one unit. For third-party infra, the chart’s values.yaml provides a fairly clean and often well-documented interface.
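That "package manager" workflow, concretely (versions and flags shown are illustrative — check the chart's docs for current ones):

```shell
# Add the repo, install at a pinned version, set a value
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --version v1.15.3 \
  --set crds.enabled=true

# Later: upgrade or roll back the whole release as one unit
helm upgrade cert-manager jetstack/cert-manager --version v1.16.0 --reuse-values
helm rollback cert-manager 1
```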
Yeah, I agree. Creating and maintaining Helm charts sucks, but using them (if they are properly made and expose everything you want to edit in the values.yaml) is a great experience with GitOps tools such as FluxCD or helmfile.
I used to be on a team that hosted internal enterprise services, and this was the main reason we used Helm. Someone wrote charts for these self-hosted applications.
(Not all of them were written in a sane manner, but that's just how it goes)
I have several Docker hosts in my home lab as well as a k3s cluster and I'd really like to use k3s as much as possible. But when I want to figure out how to deploy basically any new package they say here are the Docker instructions, but if you want to use Kubernetes we have a Helm chart. So I invariably end up starting with the Docker instructions and writing my own Deployment/StatefulSet, Service, and Ingress yaml files by hand.
Helm is the number 1 reason I'm looking to leave behind my DevOps/SRE job. Basically every job or project I accept involves working with helm in some capacity and I'm just tired of working with mostly garbage helm charts, especially big meta-charts or having to fork a chart to add a config parameter value override somewhere. Debugging broken chart installs or incomplete upgrades is also nothing but pain. Most helm charts remind me of working with ansible-galaxy roles around ~2015.
Been using bjw-s' common library chart (& its app-template companion) [1] for my homelab and it improved my experience with Helm by a lot, since you only have to edit the values.yaml without doing weird text templating. Hope he gets more funding for maintenance so it can be used for more "production" systems.
> Helm is the number 1 reason I'm looking to leave behind my DevOps/SRE job.
A few years ago, the startup I worked at folded - just as the new CTO's mandate to move everything to K8s with Helm was coming into effect. Having to scramble for a new job sucked of course, but in retrospect, I honestly have good feelings associated with the whole debacle: A) I learned a lot about Helm, B) I no longer needed to work with Helm, and C) I'm now quite sure that I don't want to be part of any engineering org that makes the decision to use it.
This is not exactly a criticism of these technologies, but simply me discovering that I'm utterly incompatible with them. Whether it's a failing of the Cloud Native Stack or a personal failing of mine, it doesn't matter - everyone's better off when I stay far away from it.
Came here to feel the temperature of the comments, and unsurprisingly, most folks seem to have plenty of gripes with Helm.
A Helm chart is often a poorly documented abstraction layer that makes it impossible to relate the managed application's original documentation back to the Helm chart's "interface". The number of times I had to grep through the templates to figure out how to access a specific setting...
I don't understand why having to "grep through the templates" is so bad. Oh, I get it, you just want to know what knobs are available for tweaking, and in a well-designed chart those will all be segregated in values files, with overrides specified on the command-line as needed. And so that's what documentation is for, and if a chart does not surface certain knobs from the product, well, yeah, you'll have to modify the chart if you want it to.
What is the essence of the complaint here? That chart authors do poor jobs? That YAML sucks (it does! it so so does!)? Just that charting provides an abstraction you'd rather not have? (If so, why not just... not use Helm?) Something else?
As said, that I often cannot relate the managed application's documentation to the Helm chart's interface?
The reason can vary... poor Helm chart documentation, poor Helm chart design, a Helm chart out of sync with application releases... The consequence is that I often need to grep through its templates and logic to figure out how to poke the chart's interface to achieve what I want. I don't think it's reasonable to say that's part of the end-user experience.
This. Almost every chart tries to be helpful and hide the upstream configuration of the application. Inevitably, you will sooner or later need to change a config. Now it’s not enough to read the documentation of the application; you also need to map that parameter onto whatever values the Helm chart translated it to. I wouldn’t even call it an abstraction, since it’s only read in a single location - it’s just a dumb and pointless translation. Total nonsense.
Gets the job done.
I’d love to dig a bit.
If you used helm + terraform before, you'll have no problem understanding the terraform kubernetes provider (as opposed to the helm provider).
You can install, update, and remove an app in your k8s cluster using helm.
And you release a new version of your app to a helm repository.
but we don't have tons of infra so no idea how it would run for big thousands-of-employees corps.
You say you want a functional DSL? Well, jq is a functional DSL!
https://helmfile.readthedocs.io/en/latest/advanced-features/...
[1]: https://github.com/bjw-s-labs/helm-charts/tree/main
See here for more examples on how people are using this chart:
https://kubesearch.dev/#app-template
PS: I have no gripes with YAML