I currently serve as vice-chair of the board. I joined right before COVID, and my focus has been getting policies and processes in place so we can keep scaling. We've seen pretty rapid growth post-COVID and are seeing ~10 new members a month.
Happy to answer any questions (when I wake up).
Be sure to stop and say hi if you're ever in Denver, CO!
And to be honest, this is part of the problem. We use confusing (and sometimes conflicting) terminology to describe both authentication (identifying somebody) and authorization (making sure you have the right permissions to do something).
More information: https://stackoverflow.com/a/1087071/19020
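To make the distinction concrete, here's a minimal sketch (made-up user/permission data, not any particular framework's API). Even HTTP gets this confusing: 401 "Unauthorized" actually signals an authentication failure, while 403 "Forbidden" signals an authorization failure.

```python
# Minimal sketch of authentication vs. authorization, with made-up data.

USERS = {"alice": "s3cret"}                 # credential store (authentication)
PERMISSIONS = {"alice": {"read_reports"}}   # permission store (authorization)

def authenticate(username: str, password: str) -> bool:
    """Authentication: are you who you claim to be?"""
    return USERS.get(username) == password

def authorize(username: str, action: str) -> bool:
    """Authorization: is this already-identified user allowed to do this?"""
    return action in PERMISSIONS.get(username, set())

# A request can pass one check and fail the other:
assert authenticate("alice", "s3cret")           # identity verified (the 401 question)
assert not authorize("alice", "delete_reports")  # permission denied (the 403 question)
```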
That should let you close your laptop and open it in a few days without any big issue, even if S0ix continues to suck.
On 11th Gen and 12th Gen, one of the other major drivers of S0ix drain we have seen is SSDs with firmware issues that keep them in higher power states in suspend. Updating SSD firmware is challenging on Linux, so if you are unable to do that, there is also a workaround of changing a kernel parameter, which we have seen result in <1%/hour drain on 11th Gen: https://guides.frame.work/Guide/Ubuntu+22.04+LTS+Installatio...
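(For what it's worth, if I remember right the parameter that guide describes is `nvme.noacpi=1`, which keeps the NVMe drive out of the problematic power states during suspend; do verify against the guide itself. On a GRUB-based distro like Ubuntu the change looks roughly like this:

```
# /etc/default/grub -- add the parameter from the linked guide
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nvme.noacpi=1"
```

followed by `sudo update-grub` and a reboot.)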
I've turned just about every knob and kernel parameter I can, I only use the USB-C expansion cards, my kernel is 5.18.12, and my Samsung 980 Pro is on the latest firmware (5B2QGXA7), so I look forward to seeing what the 12th Gen board can do.
> During the Q&A an attendee asked about Kubernetes CI. The audience member said that CI for the project cost "between $100,000 and $200,000 a month"
wtf?! That's a crazy amount of money. At 10,000 PRs a month, that's $10 to $20 per PR(!!)
Here's our GCP spend for the past month: https://imgur.com/a/VVJTSKx. Note that does not include a separate AWS cluster that we are migrating jobs to.
A large chunk of this comes from the nature of distributed tests. We need to reproduce the environment, spin up compute, etc. We also have a large problem with flaky tests on the project, whether that's timeouts, memory/CPU consumption creeping up over time, or loads of other things. We talk about how one day we'd like to get to the granularity of being able to go to a SIG and say, "this flaky test of yours is costing the project $x in retries. Please dedicate some resources to fix it".
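As a back-of-the-envelope illustration of what that report might look like (every number below is hypothetical, purely to show the arithmetic, not real project data):

```python
# Hypothetical numbers, just to illustrate the kind of report we'd like to hand a SIG.
cost_per_ci_run = 15.00   # USD per run, e.g. e2e cluster spin-up + compute (assumed)
runs_per_month = 10_000   # runs triggering the job each month (assumed)
flake_rate = 0.05         # 5% of runs fail spuriously and get retried (assumed)

# Each flaky failure burns one extra full run on /retest.
wasted_per_month = cost_per_ci_run * runs_per_month * flake_rate
print(f"~${wasted_per_month:,.0f}/month spent on retries for this one flaky test")
# -> ~$7,500/month
```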
How we distribute the artifacts is a whole different conversation. The container world is unique in that volunteer-run mirrors are not as feasible as they are for Linux packages and other binaries.
If this space interests you please join us at either [SIG K8s Infra](https://github.com/kubernetes/community/tree/master/sig-k8s-...) or [SIG Testing](https://github.com/kubernetes/community/blob/master/sig-test...)!