kragniz commented on Podman: A Daemonless Container Engine   podman.io/... · Posted by u/lobo_tuerto
lima · 5 years ago
Great idea - let's rewrite the most security-critical piece of container tech in plain C!

Red Hat literally took a memory-safe Go program and rewrote it in C for performance, in 2020.

kragniz commented on The AWS Controllers for Kubernetes   aws.amazon.com/blogs/cont... · Posted by u/bdcravens
013a · 5 years ago
At least with many Kubernetes controllers built in the past (I haven't tried ACK), they don't respond to or attempt to "live-correct" configuration changes made outside of Kubernetes (e.g. via the AWS UI). They act on events fired when a resource they monitor changes in the cluster.
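
A rough sketch of that event-driven pattern, assuming client-go's SharedInformerFactory and using a ConfigMap as a stand-in for whatever resource such a controller actually watches (this is not ACK's code, just an illustration):

    package main

    import (
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load kubeconfig from the default location (~/.kube/config).
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(config)

        // Resync period 0: the informer is purely event-driven, so the
        // handlers below fire only when the Kubernetes object itself changes.
        factory := informers.NewSharedInformerFactory(clientset, 0)
        informer := factory.Core().V1().ConfigMaps().Informer()

        informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
            AddFunc: func(obj interface{}) {
                cm := obj.(*corev1.ConfigMap)
                fmt.Printf("reconcile: in-cluster add %s/%s\n", cm.Namespace, cm.Name)
            },
            UpdateFunc: func(oldObj, newObj interface{}) {
                cm := newObj.(*corev1.ConfigMap)
                fmt.Printf("reconcile: in-cluster update %s/%s\n", cm.Namespace, cm.Name)
            },
            // Nothing here is triggered by out-of-band changes to the external
            // resource (e.g. edits in the AWS console); the controller only
            // hears about the Kubernetes object.
        })

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)
        factory.WaitForCacheSync(stop)

        // Run briefly for demonstration; a real controller blocks forever.
        time.Sleep(time.Minute)
    }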

So, again, I haven't tried ACK, but these controllers don't help correct the drift you outline as a problem. They will take action to correct drift if the internal resource changes, but how often do you update the desired configuration of an S3 bucket? For me, almost never.

In essence, the controller acts less like a live daemon always keeping things in sync, and more like Terraform in a CI pipeline. And, given you've probably got all your Kubernetes YAMLs inside a git repo anyway, all you've accomplished by deploying one of these things is trading a predictable, easy-to-debug step in a CI pipeline for a live-running service inside your cluster that is in perpetual beta, could stop running, could have bugs, could get evicted, uses cluster resources, doesn't talk to GitHub automatically, etc.

In fact, you'll often even get drift upon internal resource updates, in situations where the controller is too spooked to make a change that could be destructive. We've seen this with the ALB Ingress Controller, which never deletes ALBs, even if the entire underlying service, ingress, etc. are deleted.

(Edit): To be clear: I think the direction of "specify everything in kubeyaml" could end up being a win for the infra world. If we could throw out a Terraform+kubeyaml system for just "everything in kubeyaml", that feels like a simplification. But I'm not convinced that the best way to get that kubeyaml into AWS is via live-running code inside clusters, especially since the complexity of AWS means it's literally never going to work right (they can't even get CloudFormation to work right, and that's a managed service). A live-running controller is necessary for some things, like ALB Ingress, due to how quickly those changes need to be made (updating the ALB with new IPs/ports of containers). But for other things like S3/SQS/etc., I'm less sure.

kragniz · 5 years ago
I haven't checked the ACK code to see what they do, but this is what the resync period is for in controllers (see the sketch below).

Here's a comment with some discussion: https://github.com/kubernetes/kubernetes/pull/75423/files
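
A minimal sketch of that resync knob with client-go (again, a ConfigMap stands in for the real resource; ACK may wire this differently). A non-zero resync period makes the informer periodically replay every cached object through UpdateFunc, even when nothing changed in the cluster, which is the hook where a cloud controller can re-read external state and correct out-of-band drift:

    package main

    import (
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(config)

        // 10-minute resync: every object in the informer's cache is
        // re-delivered to UpdateFunc at least this often, regardless of
        // whether any cluster event occurred.
        factory := informers.NewSharedInformerFactory(clientset, 10*time.Minute)
        informer := factory.Core().V1().ConfigMaps().Informer()

        informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
            UpdateFunc: func(oldObj, newObj interface{}) {
                cm := newObj.(*corev1.ConfigMap)
                // On a resync, oldObj and newObj are the same cached object;
                // this is where a cloud controller would compare the desired
                // spec against the real (e.g. AWS-side) resource and repair
                // any drift introduced outside Kubernetes.
                fmt.Printf("resync/update: %s/%s\n", cm.Namespace, cm.Name)
            },
        })

        stop := make(chan struct{})
        factory.Start(stop)
        factory.WaitForCacheSync(stop)
        select {} // block forever, like a real controller
    }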

u/kragniz

Karma: 3419 · Cake day: September 14, 2011
About
https://github.com/kragniz

Email me at: hn@kragniz.eu.

[ my public key: https://keybase.io/kragniz; my proof: https://keybase.io/kragniz/sigs/ioCHYQFrfuQJALvNH5ezmFxSIdz-pKP0G9HSY1OZol8 ]
