How do you people even keep up with this? I'm going back to cybersecurity after trying DevOps for a year, it's not for me. I miss my sysadmin days, things were simple back then and worked. Maybe I'm just getting old and my cognitive abilities are declining. It seems to me that the current tech scene doesn't reward simple.
It's exactly why taking a trip through the ops/infra side is so important for people - you learn why LTS-style engineering matters. You learn to pick technologies that are stable, reliable, and well supported by a large enough community of conservative-minded people, for anything foundational, because the alternative is migration pain again and again.
I also feel like we as an industry should steer towards a state of "doneness" for OSS solutions. As long as it works, it's fine to keep using technologies that are only sparsely maintained.
I often find myself trying to tell people that KISS is a good thing. If something is somewhat complex it will be really complex after a few years and a few rotations of personnel.
At least in the golden days of job hopping, not doing migrations was a way to hobble that job hopping and decrease your income growth prospects. Now that engineers are staying put more, it's likely we'll start seeing what you're describing.
Though now AI slop is upon us so we'll probably be even worse off for a while.
It's much less of a big deal than it seems. Yeah, it is a popular project that has been around for a while, but this is just another day at work. Things evolve, and there are migration paths whether you want to stay with Ingresses or move on...
Kubernetes has been promoting the Gateway API for a while now. It has been GA for 2 years already (while Ingress reached GA quite late, in 2020 with K8s 1.19).
Sunsetting ingress-nginx was not exactly a secret.
The whole Ingress API has been marked as "frozen" in the docs for a while as well. There are no radical steps yet, but it's clear that the Gateway API is something to get interested in.
Meanwhile, NGINX Gateway Fabric [1] (which implements the Gateway API) is available, still uses nginx under the hood, and remains open source. They even have a migration tool to convert objects [3].
There are still a few months of support and time to move on to a different controller. Kubernetes also continues to support Ingress, so if you want to switch controllers but keep using Ingress, there are others [2].
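To make the migration concrete, here is a rough sketch of the same routing rule expressed both ways (the hostname, service name, and gateway name are illustrative, not from the thread):

```yaml
# The familiar Ingress form...
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
---
# ...and roughly the same routing as a Gateway API HTTPRoute.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web
spec:
  parentRefs:
  - name: my-gateway     # a Gateway object provided by your controller of choice
  hostnames:
  - example.com
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: web
      port: 80
```

The main conceptual shift is that the listener (the Gateway) and the routing rules (the HTTPRoute) are now separate objects, which is what makes the API pluggable across controllers.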
> It seems to me that the current tech scene doesn't reward simple.
A deal with the devil was made. The C suite gets to tell a story that k8s practices let you suck every penny out of the compute you already paid for. Modern devs get to do constant busy work adding complexity everywhere, creating job security and opportunities to use fun new toys. "Here's how we're using AI to right size our pods! Never mind the actual costs and reliability compared to traditional infrastructure, we only ever need to talk about the happy path/best case scenarios."
This just seems like sensationalist nonsense spoken by someone who hasn’t done a second of Ops work.
Kubernetes is incredibly reliable compared to traditional infrastructure. It eliminates a ton of the configuration management dependency hellscape and inconsistent application deployments that traditional infrastructure entails.
Immutable containers provide a major benefit to development velocity and deployment reliability. They are far faster to pull and start than deployments to VMs, which end up needing either an annoying pipeline for building machine images or some complex, failure-prone deployment system.
Does Kubernetes have its downsides? Yeah, it’s complex overkill for small deployments or monolithic applications. But to be honest, there’s a lot of complexity to configuration management on traditional VMs with a lot of bad, not-so-gracefully aging tooling (cough…Chef Software)
And who is really working for a company that has a small deployment? I’d say that most medium-sized tech companies can easily justify the complexity of running a kubernetes cluster.
Networking can be complex with Kubernetes, but it’s only as complex as your service architecture.
These days there are more solutions than ever that remove a lot of the management burden but leave you with all the benefits of having a cluster, e.g., Talos Linux.
If you were working in the orgs targeted by k8s, I think it was generally more of a mess. Think about managing a park of 100~200 servers with home made bash scripts and crappy monitoring tools and a modicum of dashboards.
Now, k8s has engulfed a lot more than its primary target, but smaller shops go for it because they're also hoping to hit it big someday, I guess. Otherwise, there are far easier solutions at lower scale.
You can manage and reason about ~2000+ servers without Kubernetes, even with a relatively small team, say about 100 - 150, depending on what kind of business you're in. I'd recommend Puppet, Ansible (with AWX), and/or Ubuntu Landscape (assuming that you're in the Ubuntu ecosystem).
Kubernetes is for rather special case environments. I am coming around to the idea of using Kubernetes more, but I still think that if you're not provisioning bare-metal worker nodes, then don't bother with Kubernetes.
The problem is that Kubernetes provides orchestration which is missing, or at least limited, in the VM and bare-metal world, so I can understand reaching for Kubernetes, because it is providing a relatively uniform interface for your infrastructure. It just comes at the cost of additional complexity.
Generally speaking, I think people need to be more comfortable building packages for their operating system of choice and installing applications that way. Then it's mostly configuration that needs to be pushed, and that simplifies things somewhat.
> Otherwise, there will be far easier solutions at lower scale.
Which solutions do you have in mind?
- VPS with software installed on the host
- VPS(s) with Docker (or similar) running containers built on-host
- Server(s) with Docker Swarm running containers in a registry
- Something Kubernetes like k3s?
In a way there are two problems to solve for small organisations (often 1 server per app, but up to say 3): the server (monitoring it and keeping it up to date), and the app(s) running on each server (deploying and updating them). The app side has more solutions, so I'd rather focus on the server side here.
Like the sibling commenter I strongly dislike the configuration management landscape (with particular dislike of Ansible and maintaining it - my takeaway is never use 3rd party playbooks, always write your own). Since for me these servers are often set up, run for a while, and then replaced by a new one with the app redeployed to it (easier than an OS upgrade in production), I've gone back to a bash provisioning script, slightly templated config files, and copying them into place. It sucks, but not as much as debugging Ansible does.
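The "provisioning script plus templated configs" approach described above can be sketched in a few lines. Everything here is illustrative (paths, token names, the config key), not from the thread; `./rendered` stands in for a real destination like `/etc/myapp`:

```shell
#!/bin/sh
# Minimal sketch: render a lightly templated config file and move it into place.
set -eu

APP_PORT="${APP_PORT:-8080}"                 # value to substitute into the template
TEMPLATE_DIR="${TEMPLATE_DIR:-./templates}"
DEST_DIR="${DEST_DIR:-./rendered}"           # stand-in for e.g. /etc/myapp

mkdir -p "$TEMPLATE_DIR" "$DEST_DIR"

# A "template" is just a plain file with @TOKENS@ in it; no template engine needed.
printf 'listen_port=@APP_PORT@\n' > "$TEMPLATE_DIR/app.conf.tmpl"

# Render by substituting tokens, then move atomically into place so a reader
# of the destination never sees a half-written file.
sed "s/@APP_PORT@/$APP_PORT/g" "$TEMPLATE_DIR/app.conf.tmpl" > "$DEST_DIR/app.conf.tmp"
mv "$DEST_DIR/app.conf.tmp" "$DEST_DIR/app.conf"
```

It is crude compared to a real configuration management tool, but every line is debuggable with plain shell knowledge, which is exactly the trade-off being argued for.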
Even after the bash script era, I don't think the configuration management landscape gets enough flak for how bad it is. It never stopped feeling hacked together and unreliable to me.
E.g., Chef Software, especially after its acquisition, is just a dumpster fire of weird anti-patterns and seemingly incomplete, buggy implementations.
Ansible is more of the gold standard but I actually moved to Chef to gain a little more capability. But now I hate both of them.
When I just threw this all in the trash in my HomeLab and went to containerization it was a major breath of fresh air and resulted in getting a lot of time back.
For organizations, one of the best parts about Kubernetes is that it's so agnostic that you can drop in replacements with a level of ease that is just about unheard of in the Ops world.
If you are a small shop you can just start with something simpler and more manageable like k3s or Talos Linux and basically get all the benefits without the full blown k8s management burden.
Would it be simpler to use plain Docker, Docker Swarm, Portainer, something like that? Yeah, but the amount of effort saved versus your ability to adapt in the future seems to favor just choosing Kubernetes as a default option.
I managed 1000+ VMs without k8s, with an orchestrator that had less code than most k8s manifests I've had to work with since.
I fully accept that there are sizes and complexities where k8s is a reasonable choice, and sometimes it's a reasonable choice because it's easier to hire for, but the bar should be a lot higher than what it currently is.
It's a reason why I'm putting together alternatives for those of my clients who want to avoid the complexity.
> If you were working in the orgs targeted by k8s, I think it was generally more of a mess. Think about managing a park of 100~200 servers with home made bash scripts and crappy monitoring tools and a modicum of dashboards.
We have had configuration management systems like Puppet in a mature enough state for over a decade now.
I haven't installed a server manually or "with handmade scripts" in a good 12 years now.
We have a park of around 100-200 servers, and actually managing the hardware is a tiny part of it.
> Now, k8s has engulfed a lot more than the primary target, but smaller shops go for it because they'r also hoping to hit it big someday I guess. Otherwise, there will be far easier solutions at lower scale.
K8s is popular because it gives developers a lot of power to deploy stuff without caring much about the underlying systems and without bothering ops people too much. Cloud-wise there are a bunch of native ways to just run a few containers that don't involve it, but on-prem it is a nice way to get a faster iteration cycle on infrastructure, even if the complexity cost is high.
It is overkill for, I'd imagine, most stuff deployed in K8s, and half of the deployments are probably motivated by resume padding rather than actual need.
I think you underestimate what can be done with actual code. The devops industry seems entirely code-averse and prefers an "infrastructure as data" paradigm instead, not even using good, well-tested and well-understood formats like SQL databases or object storage, but leaning towards more fragile formats like YAML.
Yes, the POSIX shell is not a good language, which is why things like Perl, Python, and even PHP or C got widely used. But there is an intermediate layer, with tools like Fabric (https://www.fabfile.org/), that solves a lot of the problems of the fully homegrown approach without locking you into the "infrastructure as (manually edited) data" paradigm, which only really works for problems of big scale and low complexity, exactly the opposite of what you see in many enterprise environments.
> Think about managing a park of 100~200 servers with home made bash scripts and crappy monitoring tools and a modicum of dashboards.
Not even that. One repository I checked this week had commits whose messages were like "synchronize code with what is on production server". Awesome. And that's not counting the number of hidden ad-hoc cronjobs on multiple servers.
Also as a dev I like having a pool of "compute" where I can decide to start a new project whenever instead of having to ask some OPS team for servers, routing, DNS config.
/r/kubernetes had this announcement up about five mins after it dropped at Kubecon. It's a huge deal. So many tutorials and products used ingress-nginx for basic ingress, so them throwing in the towel (but not really) is big news.
That said, (a) the Gateway API supersedes Ingress and provides much more functionality without much more complexity, and (b) NGINX and HAProxy have Gateway controllers.
To generally answer your question, I use HN, /r/devops and /r/kubernetes to stay current. I'm also working on a weekly blog series wherein I'll be doing an overview and quick start guide for every CNCF project in their portfolio. There's hundreds (thousands?) of projects in the collection, so it will keep me busy until I retire, probably :)
> /r/kubernetes had this announcement up about five mins after it dropped at Kubecon. It's a huge deal. So many tutorials and products used ingress-nginx for basic ingress, so them throwing in the towel (but not really) is big news.
I was one of those whose first reaction was surprise, because ingress was the most critical and hardest aspect of a kubernetes rollout to implement and get up and running on a vanilla deployment. It's what cloud providers offer out of the box as a major selling point to draw in customers.
But then I browsed through the Gateway API docs, and it is a world of difference. It turns a hard problem, one that required so many tutorials and products just to get something running, into a trivially solvable one. The improved security model alone clearly justifies getting rid of Ingress.
Change might be inconvenient, but you need change to get rid of pain points.
The Ingress API has been on ice for like 5 years. The core Kubernetes API doesn't change that much, at least these days. There's an infinite number of (questionable) add-ons you can deploy in your cluster, and I think that's mostly where folks get stuck in the mud.
But the Gateway API has only been generally available for two years now. And the last time I checked, most managed K8S solutions recommend the Ingress API while Gateway support is still experimental.
Things weren't simpler. The complexity was simply not visible, because different teams/departments were each doing a small part of what a single team now does with Kubernetes. Yes, for that single team it is more complex. But now it's one team that does it all, instead of five separate teams responsible for development, storage, networking, disaster recovery, etc.
I feel the same, especially the feeling old and jaded part, but I disagree that things were easier. Systems such as Kubernetes are not worse than trying to administer a zillion servers and networks by hand in the late '90s (or with tools like Puppet and Ansible a bit later), let alone HA shenanigans; neither are they a magical solution, more of a side-step and necessary evolution of scale.
There is a wild-grow of 80% solved problems in the Kubernetes space though, and especially the DevOps landscape seems to be plagued by half-solutions at the moment.
I think part of the complexity arises from everything being interconnected services instead of simple stand-alone software binaries. Things talking with other things, not necessarily from the same maker or ecosystem.
I don't understand decisions such as these though, retiring de facto standards such as Ingress NGINX. I can't name a single customer of ours at $WORKPLACE that's running anything else.
Honestly, a lot of the Hacker News discourse every single time anything to do with Kubernetes comes up reads like uninformed, annoyed griping from people who have barely used it, or not at all. Kubernetes itself has been around since 2014, and ingress-nginx was the original example of how to implement an Ingress controller. Ingress itself is not going away, which seems to be a misconception in a lot of replies to your comment. A lot of tutorials use ingress-nginx because they simply copied the upstream Kubernetes documentation's own tutorials, which used it as a toy example of how to implement an Ingress controller.
Nonetheless, it was around a full decade before they finally decided to retire it. It's not like this is something they introduced, advertised as the ideal fit for all production use cases, and then promptly changed their minds. It's been over a decade.
Part of the problem here is the Kubernetes devs not really following their own advice: annotations are supposed to be notes that don't implement functionality, but ingress-nginx let you inject arbitrary configuration through them. That ended up being a terrible idea for the main use case Kubernetes is really meant for, which is an organization running a multi-tenant platform offering application-layer services to other organizations. It is great for that, but Hacker News, with its "everything is either a week-one startup or a solo indie dev" mindset, is blind to it for whatever reason.
Nonetheless, they still kept it alive for over a decade. Hacker News also has the exact wrong idea about who does and should use Kubernetes. It's not FAANGs, which operate at a scale way too big for it and do this kind of thing using in-house tech they develop themselves. Even Google doesn't use it. It's more for the Home Depots and BMWs of the world, organizations which are large-scale but not primarily software companies, running thousands if not millions of applications in different physical locations run by different local teams, but not necessarily serving planet-scale web users. They can deal with changing providers once every ten years. I would invite everyone who thinks this is unmanageable complexity to try dipping their toes into the legal and accounting worlds that Fortune 500s have to deal with. They can handle some complexity.
I like devops. It means you get to get ahead of all the issues that you could potentially find in cybersecurity. Sure it's complicated, but at least you'll never be bored. I think the hardest part is that you always feel like you don't have enough time to do everything you need to.
DevOps teams are always running slightly behind and rarely getting ahead of technical debt because they are treated as cost centers by the business (perpetually understaffed) and as “last minute complicated requests that sound simple enough” and “oops our requirements changed” dumping grounds for engineering teams.
Plus, the ops side has a lot of challenges that can really be a different beast compared to the application side. The breadth of knowledge needed for the job is staggering and yet you also need depth in terms of knowing how operating systems and networks work.
If your infrastructure can justify the complexity of Kubernetes, keeping up with Kubernetes-native software is extremely easy compared to anything else I have dealt with. I have horror stories from managing nginx instances on 3 servers with Ansible. To me, that's much harder than working with ingress controllers in Kubernetes.
Replacing an ingress controller in Kubernetes is also a well-documented practice, with minimal or even zero downtime if you want it.
Generally, if your engineering team can reasonably keep things simple, it's good. However, business needs to grow and infrastructure needs to scale out. Sometimes trying too hard to be simple is, in my experience, how things become unmanageably complex.
I find well-engineered complexity to be much more pleasant to work with.
I once installed some kubernetes based software by following the instructions and watching many unicode/ascii-art animations on the commandline. I've also learned that the 8 in k8s stands for 8 letters: 'ubernete'. I've decided that D4s is not for me.
We don't, I focus mainly on backend, DevOps happens because in many small teams someone has to have multiple roles, and I end up taking DevOps responsibilities as well.
One thing that I push for nowadays, after a few scars is managed platforms.
Not just that, but technologies which took me many months or even years to become an expert at, the latest generation of engineers seem to be able to pick up in weeks. It's scary how fast the world is moving.
I prefer the current era, where I never have to SSH in to debug a node. If a node is misbehaving or even needs a patch, I destroy it. One command, works every time.
ingress-nginx is older than 5-7 years though. In that time frame you would have needed to upgrade your Linux system anyway, which most often gets hairy as well.
The sad thing is just that the replacement isn't fully there yet, and the Gateway API has a lot of drawbacks that might get fixed in the next release (e.g., working with cert-manager).
In my experience, many teams keep up with this by spending a lot of time keeping up with this and less time developing the actual product. Which, you probably guessed it, results in products much shittier than what we had 10 or 20 years ago.
But hey, it keeps a lot of people busy, which means it also keeps a lot of managers and consultants and trainers busy.
I don't really get this mentality targeting K8s specifically nowadays - perhaps that was true in the early days, but I'm managing several clusters that are all a few years old at this point. Cluster services like Cilium, Traefik, etc. are all managed through ArgoCD the same as our applications... every so often I go through the automated PRs for infra services, check for breaking changes, and hit merge. They go to dev/staging/prod as tests pass.
I think services take me literally half an hour a month or so to deal with unless something major has changed, and a major K8s version upgrade where I roll all nodes is a few hours.
If people are deploying clusters and not touching them for a year+ then like any system you're going to end up with endless tech debt that takes "significant planning" to upgrade. I wouldn't do a distro upgrade between Ubuntu LTS releases without expecting a lot of work, in fact I'd probably just rebuild the server(s) using tool of choice.
> What is missing is an open source orchestrator that has a feature freeze and isn't Nomad or docker swarm.
Running Docker Swarm in production, can't really complain, at least for scales where you need a few steps up from a single node with Docker Compose, but not to the point where you'd need triple digits of nodes. I reckon that's most of the companies out there. The Compose specification is really simple and your ingress can be whatever web server you prefer configured as a reverse proxy.
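As a rough illustration of that simplicity (image names, the registry, and replica counts here are made up), a minimal Swarm stack file following the Compose specification might look like:

```yaml
# Deployed with: docker stack deploy -c stack.yml demo
version: "3.8"
services:
  proxy:
    image: nginx:stable        # any web server configured as a reverse proxy
    ports:
      - "80:80"
    deploy:
      replicas: 2
  app:
    image: registry.example.com/app:1.2.3
    deploy:
      replicas: 3
      update_config:
        order: start-first     # start new tasks before stopping old ones
```

The `deploy` section is the Swarm-specific part; the rest is the same Compose file you would use on a single node.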
Docker is not for production. Nomad at scale in practice needs a lot of load-bearing Bash scripts around it: for managing certs, for external DNS, you need Consul for service discovery, Vault for secrets.
At that point, is Nomad still simple? If you're going to take on all of the essential complexity of deploying software at scale, just do it right and use Kubernetes.
Source: running thousands of containers in production.
1) ingress still works but is on the path to deprecation. It's a super popular API, so this process will take a lot of time. That's why service meshes have been moving to Gateway API. Retiring ingress-nginx, the most popular ingress controller, is a very loud warning shot.
Ingress as defined by Kubernetes is really restricted if you need to do rewriting, redirecting and basically all the stuff we've been doing in pre-Kubernetes times. Nginx Ingress Controller worked around that by supporting a ton of annotations which basically were ingested into nginx.conf, to the point that any time you had a need everyone just assumed you were using nginx-ingress and recommended an annotation or two.
In a way, it was a necessity, since Ingress was all you'd get and without stuff like rewriting, doing gradual Kubernetes migrations would have been much more difficult to impossible. For that reason, every ingress controller tried to go a similar, but distinctly different way, with vastly incompatible elements, failing to gain traction. In a way I'm thankful they didn't try to reimplement nginx annotations (apart from one attempt I think), since we would have been stuck with those for foreseeable future.
Gateway API is the next-gen standardized thing to do ingress, pluggable and upgradable without being bound to a Kubernetes version. It delivers _some_ of the most requested features for Ingress, extending on the ingress concept quite a bit. While there is also quite a bit of mental overhead and concepts only really needed by a handful of people, just getting everyone to use one concept is a big big win for the community.
Ingress might not be deprecated, but in a way it was late to the party back in the day (OpenShift still has Route objects from that era because ingress was missing) and has somewhat overstayed its welcome. You can redefine Ingress in terms of Gateway API and this is probably what all the implementers will do.
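As one example of that standardization: the prefix rewriting that typically required the nginx-specific `nginx.ingress.kubernetes.io/rewrite-target` annotation is a first-class filter in the Gateway API. A sketch, with the gateway and service names being illustrative:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-rewrite
spec:
  parentRefs:
  - name: my-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api
    filters:
    - type: URLRewrite          # portable across conformant implementations
      urlRewrite:
        path:
          type: ReplacePrefixMatch
          replacePrefixMatch: /  # /api/users is forwarded upstream as /users
    backendRefs:
    - name: api
      port: 8080
```

Because the filter is part of the spec rather than a controller-specific annotation, the same manifest works on any conformant Gateway implementation.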
We have been building an ingress-nginx compatibility layer in Traefik that supports the most used ingress-nginx annotations. You should definitely give it a try, as it makes Traefik a drop-in replacement for ingress-nginx without touching your existing Ingress resources.
Your feedback will be super useful for making it better.
To be fair, this is not the first time we've heard about this; https://github.com/kubernetes/ingress-nginx/issues/13002 has existed since March. However, I also thought that the timeline to a complete project halt would be much longer, considering the prevalence of the nginx ingress controller. It might also mean that InGate is dead, since it's not mentioned in this post and doesn't seem to be close to any kind of stable release.
It's not a service shutting down, though. It will still work fine for a while, and if a critical security patch is required, the community might still be able to add it.
> Let people time to move out, 6 month is not enough.
Did you actually contribute, either with donations or code? If not, beggars can't be choosers. You are not entitled to free maintenance for open source software you use.
In my Docker Swarm clusters I just use a regular Apache2 image in front of everything, since mod_md is good enough for Let's Encrypt and it doesn't have the issue with "nginx: [emerg] host not found in upstream" that Nginx did when some containers are not available and are restarting (and none of that "nginx: [emerg] "proxy_redirect default" cannot be used with "proxy_pass" directive with variables" stuff either).
From the cases where I've used Kubernetes, the Nginx-based ingress controller was pretty okay. I wonder why we never got Ingress controllers for Kubernetes that are made with something like Apache2 under the hood, given how many people out there use it and how the implementation details tend to be useful to know anyway. Looking at the names in the new list of Gateway implementations (https://gateway-api.sigs.k8s.io/implementations/), it very much seems it's once more a case of NIH, although it's nice that LiteSpeed, Traefik, and HAProxy are there.
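The Apache-in-front-of-Swarm setup described above can be sketched roughly like this (the domain, upstream service name, and port are illustrative; assumes mod_md, mod_ssl, mod_proxy, and mod_proxy_http are loaded):

```apache
# mod_md handles the ACME/Let's Encrypt certificate lifecycle for the domain.
MDomain example.com
MDCertificateAgreement accepted

<VirtualHost *:443>
    ServerName example.com
    SSLEngine on                     # certs come from mod_md's store
    ProxyPreserveHost On
    ProxyPass        / http://app:8080/
    ProxyPassReverse / http://app:8080/
</VirtualHost>
```

Unlike nginx, Apache resolves the upstream name at request time by default, which is why it tolerates containers that are temporarily absent while restarting.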
[1] https://gateway-api.sigs.k8s.io/implementations/#nginx-gatew...
[2] https://gateway-api.sigs.k8s.io/implementations/#gateway-con...
[3] https://docs.nginx.com/nginx-gateway-fabric/install/ingress-...
But the point is this: it worked, it does work, and, if given developer time, it will continue to work.
I now need to schedule time to test the changes, then adjust the metrics and alerting that we have.
For no gain.
It just feels like Kubernetes is carbon-fibre programming.
A deal with the devil was made. The C suite gets to tell a story that k8s practices let you suck every penny out of the compute you already paid for. Modern devs get to do constant busy work adding complexity everywhere, creating job security and opportunities to use fun new toys. "Here's how we're using AI to right size our pods! Never mind the actual costs and reliability compared to traditional infrastructure, we only ever need to talk about the happy path/best case scenarios."
Kubernetes is incredibly reliable compared to traditional infrastructure. It eliminates a ton of the configuration management dependency hellscape and inconsistent application deployments that traditional infrastructure entails.
Immutable containers provide a major benefit to development velocity and deployment reliability. They are far faster to pull and start than deploying to VMs, which end up needing some kind of annoying deployment pipeline involving building images or having some kind of complex and failure-prone deployment system.
Does Kubernetes have its downsides? Yeah, it’s complex overkill for small deployments or monolithic applications. But to be honest, there’s a lot of complexity to configuration management on traditional VMs with a lot of bad, not-so-gracefully aging tooling (cough…Chef Software)
And who is really working for a company that has a small deployment? I’d say that most medium-sized tech companies can easily justify the complexity of running a kubernetes cluster.
Networking can be complex with Kubernetes, but it’s only as complex as your service architecture.
These days there are more solutions than ever that remove a lot of the management burden but leave you with all the benefits of having a cluster, e.g., Talos Linux.
Dead Comment
If you were working in the orgs targeted by k8s, I think it was generally more of a mess. Think about managing a park of 100~200 servers with home made bash scripts and crappy monitoring tools and a modicum of dashboards.
Now, k8s has engulfed a lot more than the primary target, but smaller shops go for it because they'r also hoping to hit it big someday I guess. Otherwise, there will be far easier solutions at lower scale.
Kubernetes is for rather special case environments. I am coming around to the idea of using Kubernetes more, but I still think that if you're not provisioning bare-metal worker nodes, then don't bother with Kubernetes.
The problem is that Kubernetes provides orchestration which is missing, or at least limited, in the VM and bare-metal world, so I can understand reaching for Kubernetes, because it is providing a relatively uniform interface for your infrastructure. It just comes at the cost of additional complexity.
Generally speaking I think people need to be more comfortable with build packages for their operating system of choice and install applications that way. Then it's mostly configuration that needs to be pushed and that simplifies things somewhat.
Which solutions do you have in mind?
- VPS with software installed on the host
- VPS(s) with Docker (or similar) running containers built on-host
- Server(s) with Docker Swarm running containers in a registry
- Something Kubernetes like k3s?
In a way there's two problems to solve for small organisations (often 1 server per app, but up to say 3): the server, monitoring it and keeping it up to date, and the app(s) running on each server and deploying and updating them. The app side has more solutions, so I'd rather focus on the server side here.
Like the sibling commenter I strongly dislike the configuration management landscape (with particular dislike of Ansible and maintaining it - my takeaway is never use 3rd party playbooks, always write your own). As often for me these servers are set up, run for a bit and then a new one is set up and the app redeployed to that (easier than an OS upgrade in production) I've gone back to a bash provisioning script, slightly templated config files and copying them into place. It sucks, but not as much as debugging Ansible has.
E.g., Chef Software, especially after its acquisition, is just a dumpster fire of weird anti-patterns and seemingly incomplete, buggy implementations.
Ansible is more of the gold standard but I actually moved to Chef to gain a little more capability. But now I hate both of them.
When I just threw this all in the trash in my HomeLab and went to containerization it was a major breath of fresh air and resulted in getting a lot of time back.
For organizations, one of the best parts about Kubernetes is that it's so agnostic that you can drop in replacements with a level of ease that is just about unheard of in the Ops world.
If you are a small shop you can just start with something simpler and more manageable like k3s or Talos Linux and basically get all the benefits without the full blown k8s management burden.
Would it be simpler to use plain Docker, Docker Swarm, Portainer, something like that? Yeah, but the amount of effort saved versus your ability to adapt in the future seems to favor just choosing Kubernetes as a default option.
I fully accept that there are sizes and complexities where k8s is a reasonable choice, and sometimes it's a reasonable choice because it's easier to hire for, but the bar should be a lot higher than what it currently is.
It's a reason why I'm putting together alternatives for those of my clients who want to avoid the complexity.
We have had Configuration Management systems like Puppet in a mature enough state for over a decade now.
I haven't installed a server manually or "with handmade scripts" in a good 12 years now.
We have a fleet of around 100-200 servers, and actually managing the hardware is a tiny part of it.
> Now, k8s has engulfed a lot more than the primary target, but smaller shops go for it because they're also hoping to hit it big someday I guess. Otherwise, there will be far easier solutions at lower scale.
K8s is popular because it gives developers a lot of power to deploy stuff without caring much about the underlying systems, and without bothering ops people too much. Cloud-wise there are a bunch of native ways to just run a few containers that don't involve it, but on-prem it is a nice way to get a faster iteration cycle on infrastructure, even if the complexity cost is high.
It is overkill for, I'd imagine, most stuff deployed in K8s, and half of the deployments are probably motivated by resume padding rather than actual need.
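That developer-facing surface the parent describes can be sketched with a minimal manifest. This is an illustrative config fragment only; all names, the image registry, and the port are hypothetical:

```yaml
# A developer ships this and never names a host, VM, or ops ticket.
# The scheduler decides where it runs.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: example-api
          image: registry.example.com/example-api:1.2.3  # hypothetical image
          ports:
            - containerPort: 8080
```

The same declarative file works against any conformant cluster, which is exactly the "uniform interface" traded for the added complexity.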
Yes, the POSIX shell is not a good language, which is why things like Perl, Python, and even PHP or C got widely used. But there is an intermediate layer, with tools like Fabric (https://www.fabfile.org/), that solves a lot of the problems of the fully homegrown approach without locking you into the "Infrastructure as (manually edited) Data" paradigm. That paradigm only really works for problems of big scale and low complexity, which is exactly the opposite of what you see in many enterprise environments.
Not even that. One repository I checked this week had commits whose messages were like "synchronize code with what is on the production server". Awesome. And that's not counting the number of hidden ad-hoc cron jobs on multiple servers.
Also, as a dev I like having a pool of "compute" where I can decide to start a new project whenever, instead of having to ask an ops team for servers, routing, and DNS config.
That said, (a) the Gateway API supersedes Ingress and provides much more functionality without much more complexity, and (b) NGINX and HAProxy have Gateway controllers.
To generally answer your question, I use HN, /r/devops and /r/kubernetes to stay current. I'm also working on a weekly blog series wherein I'll be doing an overview and quick start guide for every CNCF project in their portfolio. There's hundreds (thousands?) of projects in the collection, so it will keep me busy until I retire, probably :)
I was one of those whose first reaction was surprise, because ingress was the most critical and hardest aspect of a kubernetes rollout to implement and get up and running on a vanilla deployment. It's what cloud providers offer out of the box as a major selling point to draw in customers.
But then I browsed through the Gateway API docs, and it is a world of difference. It turns a hard problem, one that required so many tutorials and products to help anyone get something running, into a trivially solvable one. The improved security model alone clearly justifies getting rid of ingress.
Change might be inconvenient, but you need change to get rid of pain points.
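The security-model improvement mentioned above is largely role separation: the cluster operator owns the Gateway, while app teams own only their routes. A minimal sketch, with all names and namespaces hypothetical:

```yaml
# Owned by the platform/ops team: one shared entry point.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: infra
spec:
  gatewayClassName: example-gateway-class  # hypothetical class
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: All
---
# Owned by an application team: only routing for its own service.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
  namespace: team-a
spec:
  parentRefs:
    - name: shared-gateway
      namespace: infra
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /app
      backendRefs:
        - name: app-svc
          port: 8080
```

With Ingress, both concerns lived in one object, so tenants could affect the shared proxy; here the blast radius of a team's route is scoped to its namespace.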
Yet they are retiring a core Ingress that has been around for almost as long as Kubernetes has.
Kubernetes is a gift.
There is a wild growth of 80%-solved problems in the Kubernetes space though, and the DevOps landscape especially seems to be plagued by half-solutions at the moment.
I think part of the complexity arises from everything being interconnected services instead of simple stand-alone software binaries. Things talking with other things, not necessarily from the same maker or ecosystem.
I don't understand decisions such as these though, retiring de facto standards such as Ingress NGINX. I can't name a single one of our customers at $WORKPLACE that's running something else.
Nonetheless, it was around a full decade before they finally decided to retire it. It's not like this is something they introduced, advertised as the ideal fit for all production use cases, and then promptly changed their minds. It's been over a decade.
Part of the problem here is the Kubernetes devs not really following their own advice: annotations are supposed to be notes that don't implement functionality, but ingress-nginx allowed you to inject arbitrary configuration through them. That ended up being a terrible idea in the main use case Kubernetes is really meant for, which is an organization running a multi-tenant platform offering application-layer services to other organizations. It is great for that, but Hacker News, with its "everything is either a week-one startup or a solo indie dev" mindset, is blind to this for whatever reason.
Nonetheless, they still kept it alive for over a decade. Hacker News also has the exact wrong idea about who does and should use Kubernetes. It's not FAANGs, which operate at a scale way too big for it and do this kind of thing using in-house tech they develop themselves. Even Google doesn't use it. It's more for the Home Depots and BMWs of the world, organizations which are large-scale but not primarily software companies, running thousands if not millions of applications in different physical locations run by different local teams, but not necessarily serving planet-scale web users. They can deal with changing providers once every ten years. I would invite everyone who thinks this is unmanageable complexity to try dipping their toes into the legal and accounting worlds that Fortune 500s have to deal with. They can handle some complexity.
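The annotation escape hatch described above looked roughly like this. This is a sketch with hypothetical hostnames and service names; the `configuration-snippet` annotation itself is real, and it let any Ingress author feed raw nginx directives into the shared controller:

```yaml
# In a multi-tenant cluster, any tenant who can create an Ingress
# could inject raw nginx config into the one shared proxy process.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "X-Injected: by-a-tenant";
spec:
  rules:
    - host: example.internal   # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-svc
                port:
                  number: 80
```

This is exactly the annotation-as-functionality pattern the comment criticizes, and it is why snippet annotations later had to be disabled by default for security reasons.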
Plus, the ops side has a lot of challenges that can really be a different beast compared to the application side. The breadth of knowledge needed for the job is staggering and yet you also need depth in terms of knowing how operating systems and networks work.
Replacing an ingress controller in Kubernetes is also a well-documented practice, with minimal or even zero downtime if you want.
Generally, if your engineering team can reasonably keep things simple, that's good. However, the business needs to grow and the infrastructure needs to scale out. Sometimes trying too hard to be simple is, in my experience, how things become unmanageably complex.
I find well-engineered complexity to be much more pleasant to work with.
One thing that I push for nowadays, after a few scars is managed platforms.
But hey, it keeps a lot of people busy, which means it also keeps a lot of managers and consultants and trainers busy.
What is missing is an open source orchestrator that has a feature freeze and isn't Nomad or docker swarm.
I think services take me literally half an hour a month or so to deal with unless something major has changed, and a major K8s version upgrade where I roll all nodes is a few hours.
If people are deploying clusters and not touching them for a year+ then like any system you're going to end up with endless tech debt that takes "significant planning" to upgrade. I wouldn't do a distro upgrade between Ubuntu LTS releases without expecting a lot of work, in fact I'd probably just rebuild the server(s) using tool of choice.
Running Docker Swarm in production, can't really complain, at least for scales where you need a few steps up from a single node with Docker Compose, but not to the point where you'd need triple digits of nodes. I reckon that's most of the companies out there. The Compose specification is really simple and your ingress can be whatever web server you prefer configured as a reverse proxy.
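A minimal stack file of the kind described above might look like this, deployed with `docker stack deploy -c stack.yml example`. A sketch only; the app image and replica counts are hypothetical:

```yaml
# Compose-format stack file for Docker Swarm.
version: "3.8"
services:
  proxy:
    # Plain nginx acting as the ingress / reverse proxy.
    image: nginx:1.27
    ports:
      - "80:80"
    deploy:
      replicas: 2
  app:
    image: registry.example.com/app:1.0.0  # hypothetical image
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        order: start-first   # rolling update, start new task before stopping old
```

The same Compose syntax most developers already know scales from a single node to a modest multi-node swarm, which is the sweet spot the comment describes.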
At that point, is Nomad still simple? If you're going to take on all of the essential complexity of deploying software at scale, just do it right and use Kubernetes.
Source: running thousands of containers in production.
But I'd love an LTS release chain that keeps the config the same for at least 2-3 years.
And I do not understand it:
1. Ingress still works, it's not deprecated.
2. There are a lot of controllers which support both Gateway API and Ingress (for example Traefik).
So, how is the retirement of Ingress NGINX related to, or how does it affect, the switch to Gateway API?
2) see (1).
In a way, it was a necessity: Ingress was all you'd get, and without stuff like rewriting, doing gradual Kubernetes migrations would have been much more difficult, or impossible. For that reason, every ingress controller tried to go a similar but distinctly different way, with vastly incompatible elements, and none gained traction. In a way I'm thankful they didn't try to reimplement the nginx annotations (apart from one attempt, I think), since we would have been stuck with those for the foreseeable future.
Gateway API is the next-gen standardized thing to do ingress, pluggable and upgradable without being bound to a Kubernetes version. It delivers _some_ of the most requested features for Ingress, extending on the ingress concept quite a bit. While there is also quite a bit of mental overhead and concepts only really needed by a handful of people, just getting everyone to use one concept is a big big win for the community.
Ingress might not be deprecated, but in a way it was late to the party back in the day (OpenShift still has Route objects from that era because ingress was missing) and has somewhat overstayed its welcome. You can redefine Ingress in terms of Gateway API and this is probably what all the implementers will do.
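Redefining Ingress in terms of Gateway API is fairly mechanical for simple cases. A sketch of the same routing rule expressed both ways, with all hostnames and service names hypothetical:

```yaml
# Legacy Ingress form:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shop
spec:
  rules:
    - host: shop.example.com
      http:
        paths:
          - path: /cart
            pathType: Prefix
            backend:
              service:
                name: cart
                port:
                  number: 8080
---
# Gateway API equivalent (assumes an existing Gateway named "shared-gateway"):
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: shop
spec:
  parentRefs:
    - name: shared-gateway
  hostnames:
    - shop.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /cart
      backendRefs:
        - name: cart
          port: 8080
```

The host and path-prefix semantics map one-to-one, which is why implementers can offer automated conversion for plain Ingress objects; what doesn't map automatically is controller-specific annotation behavior.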
Sad to see such a core component die, but I guess now everyone has to migrate to gateways.
ingress nginx. nginx ingress.
https://traefik.io/blog/transition-from-ingress-nginx-to-tra...
Did you actually contribute, either by donations or code? If not, beggars can't be choosers. You are not entitled to free maintenance for open source software you use.
From the cases where I've used Kubernetes, the Nginx-based ingress controller was pretty okay. I wonder why we never got Ingress Controllers for Kubernetes built with something like Apache2 under the hood, given how many people out there use it and how the implementation details tend to be useful to know anyway. Looking at the names in the new list of Gateway implementations https://gateway-api.sigs.k8s.io/implementations/ it very much seems it's once more a case of NIH, although it's nice that LiteSpeed and Traefik and HAProxy are there.