Our team regularly faced the need to create almost identical charts, so when we had 14 identical microservices in one project, we came up with a chart format that essentially became the prototype of nxs-universal-chart. It turned out to be even more useful than we expected: when we needed to prepare CI/CD for 60 almost identical projects for a customer, it reduced release preparation time from 6 hours to 1. That's how the idea of nxs-universal-chart became a real thing that everyone can use now!
The main advantages of the chart that we would like to highlight:

- Reduced time to prepare a deployment
- You're able to generate any manifests you may need
- It's compatible with multiple versions of k8s
- Ability to use go-templates in your values
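To illustrate the last point, here is a minimal sketch of values that themselves contain go-template expressions, assuming the chart renders values through Helm's tpl function. The top-level keys are illustrative, not the chart's confirmed schema; see the README for the real one.

```yaml
# Hypothetical values.yaml sketch: key names are illustrative, not the
# chart's confirmed schema. The point is that the values themselves can
# contain go-template expressions that get rendered at install time.
deployments:
  backend:
    containers:
      - name: backend
        image: "registry.example.com/{{ .Release.Name }}-backend"
        env:
          - name: ENVIRONMENT
            value: "{{ .Values.global.env }}"
```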
In the latest release we've added a few features, like support for cert-manager custom resources! You can find all other information and details on GitHub: https://github.com/nixys/nxs-universal-chart We're really looking forward to improving our universal chart, so we'd love to see any feedback and contributions, and please report any issues you encounter! Join our Telegram chat if you want to discuss this repo or ask any questions: https://t.me/nxs_universal_chart_chat
You say that it "reduces time to prepare deployment"? How? What would be a before-and-after scenario where this actually saves time?
If you wanted to re-use a service, you would just put it inside its own first-class chart, where you'd write the templates directly rather than going through this layer of indirection, and then copy-paste the small usage portion into your parent chart.
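For illustration, that "small usage portion" is typically just a dependency entry in the parent chart's Chart.yaml. The chart name, version, and repository below are hypothetical:

```yaml
# Parent chart's Chart.yaml: pull in the service's own first-class chart
# as a subchart. Chart and repository names are hypothetical.
apiVersion: v2
name: parent-app
version: 1.0.0
dependencies:
  - name: payments-service          # the re-used service's chart
    version: "2.3.1"
    repository: "https://charts.example.com"
```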
But deploying systems to specialized VMs and load balancers and networking devices had too many disparate pieces and was impossible to unify, and developers didn't want to have to talk to all of those experts to get the configurations they needed, and anyway, resources were wasted on all these specialized pieces of hardware, so the Kubernetes API was created to encapsulate all the pieces into straightforward and consistent APIs any developer could understand that allowed services to be binpacked to achieve maximum efficiency.
But the Kubernetes API was too complicated, with so many interrelated concepts in one big monolith of an API, and developers did not want to have to think about how to configure all the individual pieces and dependencies, so people created Helm charts to allow that one dude on the team who understood the infrastructure to hide the complexity.
But Helm charts were too obfuscated, and no one could tell what was going on inside them, and reliability suffered and it became too risky to use black boxes to configure your deployment, so now there's a universal chart that exposes every Kubernetes API option for easy visibility.
But obviously, this new universal chart has far too many options, when all I want to do is deploy my application without having to think about every detail. So I'm looking forward to the upcoming packaging system that wraps this universal chart into something with less visible complexity.
Needs an actual program. Transformations to declarative data only get you so far, and ultimately confuse everyone involved.
A universal helm chart like this allows a lot of defaults to be set, with the values containing only the changes necessary to get the code running in the cluster (plus small adjustments, as needed). When you need to add/change/remove something across all services, it can be done through the universal chart, greatly reducing the friction of rolling a change out to every service from one location. In a large enough organization there can be hundreds of services sharing almost exactly the same chart with the same set of things.
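As a sketch of that before/after: with shared defaults in the universal chart, a per-service values file can shrink to just the deltas. The key names below are illustrative, not the chart's confirmed schema.

```yaml
# Hypothetical per-service values.yaml: the universal chart supplies the
# defaults (probes, labels, rollout strategy, etc.); each service
# overrides only what differs. Key names are illustrative.
image:
  repository: registry.example.com/orders
  tag: "1.14.2"
replicas: 3
resources:
  requests:
    cpu: 100m
    memory: 256Mi
```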
I think creating a helm chart makes sense if you're planning to publish it and have it used by multiple, different entities that don't own the project (think about any open-source projects really). However for internal, closed-source services, the same entity owns the code, the chart, the values, and the deployment. It adds a lot of boilerplate and there's really little reason to maintain and publish a chart as an artifact for each service.
I want the values.yaml to express business logic, like feature flags or external endpoint configuration, so that the Delivery team can easily read and update it.
The complex infrastructural components should derive from the business config and deploy the needed infrastructure to support it.
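For example, a business-facing values.yaml might look like this. It's a hypothetical sketch; the flags and endpoints are invented for illustration, and the chart's templates would derive the infrastructure from them:

```yaml
# Hypothetical business-level values.yaml the Delivery team could own.
# Deployments, ingress rules, etc. would be derived from these values
# by the chart's templates.
featureFlags:
  newCheckoutFlow: true
  betaRecommendations: false
externalEndpoints:
  payments: "https://payments.example.com/api/v2"
  notifications: "https://notify.example.com"
```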
Still, I wish I had this tidy list of working Kubernetes YAML templates years ago, when we started rolling our own charts where I work. I'm definitely going to check out some of the resources to see if there's a cool parameter I can steal. Thanks for sharing.
As another commenter said:
> So you write the whole configuration of the helm chart in your values.yaml?
Well, that is a mistake, but that aside.
Helm is an ultra-low-budget operator for "any" Kubernetes asset where "all" the changes to that asset are caused by people. While this chart has little hope, Helm charts can be pure business logic.
For example, will infrastructure values ever be changed once deployed? I've seen changes to resource requests, replicas, domain names (for marketing-driven concerns), and complex upgrades from non-clustered to clustered applications.
Can some of this be expressed as a "business" parameter in a values.yaml? Yes. Charts should ship with KEDA ScaledObjects, and then it becomes translatable into a business definition like "what is the limit on latency you want?" instead of "what is the limit on memory usage you want?"
This chart should provide an example ScaledObject, and maybe all sorts of stuff from the ecosystem that is valuable. A useful chart would homogenize (or be very opinionated) about OIDC, for example, too. That is business -> infrastructure.
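A hedged sketch of what such a ScaledObject might look like, scaling on a latency target instead of raw memory. The deployment name, Prometheus address, and query are invented for illustration:

```yaml
# Hypothetical KEDA ScaledObject: scales a Deployment based on observed
# p95 latency from Prometheus instead of CPU/memory. All names,
# addresses, and the query are illustrative.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: orders-latency-scaler
spec:
  scaleTargetRef:
    name: orders            # Deployment to scale
  minReplicaCount: 2
  maxReplicaCount: 20
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring:9090
        threshold: "0.3"    # "what is the limit on latency you want?" in seconds
        query: histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket{app="orders"}[5m])) by (le))
```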
The strength of the Kubernetes ecosystem is in translating business concerns like "transactions per second" into code. Infrastructure is a major business concern.
On a separate topic, one great thing about Helm is its declarative, stateful nature, much like Terraform. I love using Helm because it makes sure you don't leave leftover YAML components in your cluster whenever you make a mistake, delete projects, or rename YAML files in your local directory.
One thing I don't like about kubectl apply, and that doesn't get fixed "just by" using Helm, is that it will leave behind changes that someone ran manually through imperative kubectl commands. I didn't check the repo, but if it sets all defaults explicitly in the template, that could partly solve this issue.
Previously I used Terraform for this, but I'm starting a new project and would like to avoid TF for management of the actual k8s resources, despite it having some advantages.
For secrets I'm a big fan of https://external-secrets.io/latest/ paired with a cloud vault, which allowed me to offload secret production/maintenance to the resource teams.
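For context, a minimal ExternalSecret along those lines might look like this. The store name, target name, and remote key are invented; the referenced store would be configured to point at the cloud vault:

```yaml
# Sketch of an External Secrets Operator resource: syncs a value from a
# cloud vault into a native k8s Secret. Names and the remote key are
# hypothetical.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: orders-db-credentials
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: cloud-vault            # assumed store configured for the vault
  target:
    name: orders-db-credentials  # the k8s Secret to create
  data:
    - secretKey: DB_PASSWORD
      remoteRef:
        key: prod/orders/db-password
```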
I tried using TF to manage kube manifests, I hated it all around and moved away immediately, so I agree with you.
I haven't found a good alternative to Helm. Pulumi is probably the best if you just want to create manifests; their k8s provider is great. But we ultimately want to shift the Kubernetes manifests left, and Helm is pretty OK for that.
It takes some "waaaaa?!" to change the mental model away from text generation and substitution, and I'll be straight: their docs could really use a LOT more CONCRETE examples. But in the end it does as advertised using only kubectl.
Then, to address the stateful bit of Helm (e.g. helm ls): if one is already kustomize-friendly, Flux is good about using CRDs to track what has been deployed: https://github.com/fluxcd/flux2#readme (Apache 2)
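For illustration, the relevant Flux CRD is a Kustomization with pruning enabled, so resources removed from git get removed from the cluster. The GitRepository name and path below are hypothetical:

```yaml
# Sketch of a Flux Kustomization: Flux tracks what it applied and, with
# prune enabled, garbage-collects resources deleted from the source.
# The GitRepository name and path are hypothetical.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 5m
  prune: true                 # this is what replaces helm's state tracking
  sourceRef:
    kind: GitRepository
    name: platform-repo
  path: ./deploy/overlays/production
```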
Kustomize is extremely limited and opinionated, which is great if your deployments have only kustomize-approved differences between them, but it has no way to write business-aware config files, and at my typical 100 deployments per chart it was becoming megaduplicated.