__turbobrew__ · 2 months ago
I feel like etcd is one of the few use cases where Intel Optane would actually make sense. I build and run several bare metal clusters with over 10k nodes and etcd is by and large the biggest pain for us. Sometimes an etcd node just randomly stops accepting any proposals which halts the entire cluster until you can remove the bad etcd node.

From what I remember, GKE has implemented an etcd shim on top of spanner as a way to get around the scalability issues, but unfortunately for the rest of us who do not have spanner there aren’t any great options.

I feel like, at a fundamental level, pod affinity, anti-affinity, and topology spread constraints are not compatible with very large clusters because of the complexity explosion they cause at that scale.

Another thing to consider is that the larger a cluster becomes, the larger the blast radius is. I have had clusters of 10k nodes spectacularly fail due to code bugs within k8s. Sharding total compute capacity into multiple isolated k8s clusters reduces the likelihood that a software bug will take down everything, as you can carefully upgrade a single cell at a time with bake periods between each cell.

jen20 · 2 months ago
AFAIK all the hyperscalers have replaced etcd for their managed Kubernetes services [1], [2], [3] - though Azure is the least clear about what they actually use currently.

[1]: https://aws.amazon.com/blogs/containers/under-the-hood-amazo...

[2]: https://cloud.google.com/blog/products/containers-kubernetes...

[3]: https://azure.microsoft.com/en-us/blog/a-cosmonaut-s-guide-t...

jbnorth · 2 months ago
While I can’t speak for the others, AWS doesn’t replace all of etcd, only the Raft consensus layer, which it swaps out for Journal, an internal AWS service.
fowl2 · 2 months ago
Interestingly, Azure’s etcd-compatible service was withdrawn before it ever exited public preview.

[1] https://learn.microsoft.com/en-us/answers/questions/154061/a...

femiagbabiaka · 2 months ago
Yep, every cluster approaching 10k nodes I know of has either pared back etcd's durability guarantees or rewritten and replaced it in some manner. The post actually goes into detail about doing exactly this, and the Alibaba paper it references says about the same.

> Sharding total compute capacity into multiple isolated k8s clusters reduces the likelihood that a software bug will take down everything, as you can carefully upgrade a single cell at a time with bake periods between each cell.

Yeah, I've been meaning to try out something like Armada to simplify things on the cluster-user side. Cluster providers have lots of tools to make managing multiple clusters easier, but if it means having to rewrite every batch job...

jamesblonde · 2 months ago
Is it throughput and latency that are the etcd bottlenecks? Our database, RonDB, is an in-memory open-source database (a fork of MySQL Cluster). We have scaled it to 100m reads/sec on AWS hardware (not even top of the line). Might be an interesting project to implement an open-source etcd shim on top of it?

Reference: https://www.rondb.com/post/100m-key-lookups-sec-with-rest-ap...

davidgl · 2 months ago
See https://github.com/k3s-io/kine; k3s uses this to shim etcd onto MySQL, Postgres, and SQLite.
nonameiguess · 2 months ago
The setting is configurable, but by default etcd's Raft implementation requires a voting node to write to disk before it casts a vote, as in actually flushing to disk, not just writing to the OS page cache. Since you need a majority vote before a client can get a response, this is why it's strongly recommended that you use the fastest possible disks and keep the nodes geographically close to each other, and why etcd's default storage quota is only 2GB per node.
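
For anyone wondering what "actually flushing to disk" means in code, here is a minimal Go sketch of the write-then-fsync-then-acknowledge pattern (illustrative only, not etcd's actual WAL code):

```go
package main

import (
	"log"
	"os"
)

// appendAndAck illustrates the write path a Raft log like etcd's requires:
// the entry must reach stable storage (fsync) before the node may
// acknowledge it. Writing into the OS page cache alone is not enough,
// because a crash could lose an entry the node already voted on.
func appendAndAck(wal *os.File, entry []byte) error {
	if _, err := wal.Write(entry); err != nil { // lands in the page cache
		return err
	}
	return wal.Sync() // fsync: block until the entry is on disk, then ack
}

func main() {
	f, err := os.OpenFile("raft.wal", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o600)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if err := appendAndAck(f, []byte("log entry\n")); err != nil {
		log.Fatal(err)
	}
}
```

That Sync() on every quorum write is why disk fsync latency sets a hard floor on etcd's write latency.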

All in all, it was a poor choice for Kubernetes to use this as its backend in the first place. Apparently, Google uses its own shim, but there is also kine, which was created a long time ago for k3s and allows you to use an RDBMS. k3s used sqlite as its default originally, but any API-equivalent database would work.
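
Roughly, the kine approach is to map etcd's revisioned keyspace onto an append-only SQL table so that any RDBMS can sit behind the apiserver. A simplified Go sketch of the concept (not kine's real schema or code; assumes the mattn/go-sqlite3 driver):

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

// Each write appends a row; the auto-incremented row id doubles as the
// revision, and reads pick the newest non-deleted row per key. This is
// only a sketch of the idea, not kine's actual implementation.
const schema = `
CREATE TABLE IF NOT EXISTS kv (
	id      INTEGER PRIMARY KEY AUTOINCREMENT, -- stands in for the revision
	key     TEXT NOT NULL,
	value   BLOB,
	deleted INTEGER NOT NULL DEFAULT 0
);`

func main() {
	db, err := sql.Open("sqlite3", "state.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if _, err := db.Exec(schema); err != nil {
		log.Fatal(err)
	}

	// "Put" is an insert; the new row id is the new revision.
	res, err := db.Exec(`INSERT INTO kv (key, value) VALUES (?, ?)`,
		"/registry/pods/default/nginx", []byte(`{"spec":{}}`))
	if err != nil {
		log.Fatal(err)
	}
	rev, _ := res.LastInsertId()

	// "Get" returns the latest live row for the key.
	var value []byte
	err = db.QueryRow(`SELECT value FROM kv
		WHERE key = ? AND deleted = 0
		ORDER BY id DESC LIMIT 1`,
		"/registry/pods/default/nginx").Scan(&value)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("rev=%d value=%s", rev, value)
}
```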

We should keep in mind etcd was meant to literally be the distributed /etc directory for CoreOS, something you would read from often but perform very few writes to. It's a configuration store. Kubernetes deciding to also use it for /var was never a great idea.

torginus · 2 months ago
It's nice to know that the upper bound of the resiliency of a k8s cluster is the amount of redundancy etcd has, which is in essence a horizontally scaled monolith.
jeffinhat · 2 months ago
This is an awesome experiment and write up. I really appreciate the reproducibility.

I would like to see how moving to a database that scales write throughput with replicas would behave, namely FoundationDB. I think this will require more than an intermediary like kine to be efficient, as the author illustrates the apiserver does a fair bit of its own watching and keeping of state. I also think there's benefit, at least for blast radius, in sharding the apiserver by API group or namespace.

I think years ago this would have been a non-starter with the community, but given AWS has replaced etcd (or at least aspects of it) with their internal log service for their large cluster offering, I bet there's some appetite for making this interchangeable and bringing an open-source solution to market.

I share the author's viewpoint that for modern cloud-based deployments, you're probably best off avoiding it and relying on VMs being stable and recoverable. I think reliability does matter if you want to actually realize the "borg" value and run it on bare metal across a serious fleet. I haven't found the business justification to work on that though!

ymelnyk · 2 months ago
Here you go, a Kine FoundationDB backend: https://github.com/melgenek/f8n

To be honest, I was building it with the purpose of matching etcd's scale while making FoundationDB a multi-tenant data store.

But with the recent scalability craze, I'll be investing time into understanding how far FoundationDB can be pushed as a K8s data store. Stay tuned.

jeffinhat · 2 months ago
Awesome, I will!

It would be great to see where the limits are with this approach.

I think at some point you need to go deeper into the apiserver for scale than an API-compatible shim can take you, but this is just conjecture, not real data.

pluc · 2 months ago
> Early on in this project, I asked ChatGPT “I want to scale Kubernetes to 1 million nodes. What types of problems would I need to overcome?”

click

breakingcups · 2 months ago
It's a shame the author led with something that carries about the same authority as a horoscope, since the rest of the article is actually quite interesting.
tbrockman · 2 months ago
If you have sufficient knowledge in the subject matter you're questioning ChatGPT about, you can fairly reliably discern complete bullshit from something plausibly true that warrants additional investigation (which I'd say is more useful than your typical horoscope). In isolation it seems worth the gamble to me, so long as you don't view it as much more than consulting the tea leaves.
wppick · 2 months ago
If you don't need the isolation of k8s then don't forget about Erlang, which is another option for scaling up to 1 million functions. Obviously k8s containers (which are fundamentally just isolated processes) and Erlang processes are not interchangeable things, but when thinking about needing on the order of millions of processes, Erlang is pretty good prior art.
theptip · 2 months ago
This is 1M nodes; you typically run tens or hundreds of pods per node, each with one or more containers. So more like 100M+ functions if I follow the Erlang analogy correctly?
reactordev · 2 months ago
This is not analogous. It’s just someone beating the Erlang drum. You can’t PyTorch in Erlang.
fcarraldo · 2 months ago
I don’t think there are very many k8s clusters running 100s of pods per node. The default maximum is 110. You can, of course, scale beyond this, but you’ll run into etcd performance issues, IP space issues, connection limits, IOPS, and networking limitations for most use cases.

At 1M nodes I’d still expect an average of a dozen or so pods per node.

sally_glance · 2 months ago
Agree this is a consideration if your only workload is an existing or greenfield ErlangVM-compatible project.

From what I know, basically everyone approaching this scale with k8s has different problems to solve, namely multi-tenancy (shared hosting/internal platform providers) and compatibility with legacy or standard software.

czhu12 · 2 months ago
If anyone is looking for a gentler, Heroku-like onramp to Kubernetes, it's exactly why I built Canine [1].

In retrospect, at my previous company, what we really needed in the early days was something that was Heroku-like (don't make me think about infra (!)) but could be easily added to and scaled up over time as our service grew. We eventually grew to about 10M monthly users and had to make a huge effort to migrate to Kubernetes.

Canine's philosophy is: full Kubernetes, with a deployment layer on top. If you ever outgrow it, just dump Canine entirely and work directly with the Kubernetes system it's operating. It even gives you all the K8s YAML config needed to offboard.

It's also similar to how the dev infra works at Airbnb (where I worked before that) -- Kubernetes underneath, a user friendly interface on top.

rootlocus · 2 months ago
You're not the first one to think that k9 is a good name for a Kubernetes-related technology: https://k9scli.io
kawsper · 2 months ago
That's really impressive and an interesting experiment.

I was about to say that Nomad did something similar, but that was 2 million Docker containers across 6100 nodes, https://www.hashicorp.com/en/c2m

vebgen · 2 months ago
This is an absolutely incredible technical deep-dive. The section on replacing etcd with mem_etcd resonates with challenges we've been tackling at a much smaller scale building an AI agent system.

A few thoughts:

*On watch streams and caching*: Your observation about the B-Tree vs hashmap cache tradeoff is fascinating. We hit similar contention issues with our agent's context manager: we switched from a simple dict to a more complex indexed structure for faster "list all relevant context" queries, but update performance suffered. The lesson that trading cheap O(1) writes for faster indexed reads is the wrong call for write-heavy workloads seems universal.
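
To make the tradeoff concrete, here's a toy Go sketch (not the article's data structures or ours) contrasting a plain hashmap with an ordered-key store for prefix listing:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// Two toy stores: a hashmap gives O(1) writes but listing a prefix means
// scanning every key, while an ordered index (sorted keys / B-tree)
// answers "list everything under /pods/" with a range scan at the cost
// of keeping keys ordered on every write. Illustrative only.

type hashStore struct{ m map[string][]byte }

func (s *hashStore) Put(k string, v []byte) { s.m[k] = v } // O(1)

func (s *hashStore) List(prefix string) []string { // O(n): full scan
	var out []string
	for k := range s.m {
		if strings.HasPrefix(k, prefix) {
			out = append(out, k)
		}
	}
	sort.Strings(out)
	return out
}

type orderedStore struct {
	keys []string // kept sorted; a real implementation would use a B-tree
	vals map[string][]byte
}

func (s *orderedStore) Put(k string, v []byte) {
	// Binary search to keep keys sorted; a real B-tree keeps inserts cheap too.
	i := sort.SearchStrings(s.keys, k)
	if i == len(s.keys) || s.keys[i] != k {
		s.keys = append(s.keys, "")
		copy(s.keys[i+1:], s.keys[i:])
		s.keys[i] = k
	}
	s.vals[k] = v
}

func (s *orderedStore) List(prefix string) []string { // short range scan
	i := sort.SearchStrings(s.keys, prefix)
	var out []string
	for ; i < len(s.keys) && strings.HasPrefix(s.keys[i], prefix); i++ {
		out = append(out, s.keys[i])
	}
	return out
}

func main() {
	h := &hashStore{m: map[string][]byte{}}
	o := &orderedStore{vals: map[string][]byte{}}
	for _, k := range []string{"/pods/a", "/pods/b", "/nodes/x"} {
		h.Put(k, nil)
		o.Put(k, nil)
	}
	fmt.Println(h.List("/pods/"), o.List("/pods/"))
}
```

The hashmap wins on raw write cost, but every "list by prefix" becomes a full scan plus a sort; the ordered store answers prefix queries with a short range scan at the price of extra work on each write.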

*On optimistic concurrency for scheduling*: The scatter-gather scheduler design is elegant. We use a similar pattern for our dual-agent system (TARS planner + CASE executor) where both agents operate semi-independently but need coordination. Your point about "presuming no conflicts, but handling them when they occur" is exactly what we learned - pessimistic locking kills throughput far worse than occasional retries.
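
For anyone unfamiliar with the pattern, a minimal Go sketch of optimistic concurrency with a version check and retry (a hypothetical in-memory store, not the article's scheduler or our agents):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

var errConflict = errors.New("version conflict")

// store is a tiny versioned value; real systems use something like
// etcd's resourceVersion for the same purpose.
type store struct {
	mu      sync.Mutex
	value   string
	version int64
}

func (s *store) get() (string, int64) {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.value, s.version
}

// compareAndSwap commits only if the caller's version is still current.
func (s *store) compareAndSwap(expected int64, newValue string) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.version != expected {
		return errConflict
	}
	s.value = newValue
	s.version++
	return nil
}

// update presumes no conflict and retries the whole read-modify-write
// on the rare occasions another writer got there first.
func update(s *store, modify func(string) string) error {
	for attempt := 0; attempt < 5; attempt++ {
		cur, ver := s.get()
		if err := s.compareAndSwap(ver, modify(cur)); err == nil {
			return nil
		}
	}
	return fmt.Errorf("gave up after repeated conflicts")
}

func main() {
	s := &store{}
	_ = update(s, func(old string) string { return old + "x" })
	fmt.Println(s.get())
}
```

The mutex here only guards the tiny compare-and-swap; the expensive work happens outside any lock and is simply retried on the rare conflict, which is why throughput holds up so much better than with pessimistic locking.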

*The spicy take on durability*: "Most clusters don't need etcd's reliability" is provocative but I suspect correct for many use cases. For our Django development agent, we keep execution history in SQLite with WAL mode (no fsync), betting that if the host crashes, we'd rather rebuild from Git than wait on every write. Similar philosophy.
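
For reference, a generic Go sketch of that relaxed-durability SQLite setup (assumes the mattn/go-sqlite3 driver; not our actual code): WAL mode for cheap concurrent reads and synchronous=OFF so commits don't wait on fsync, accepting that a crash can drop the last few writes:

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	db, err := sql.Open("sqlite3", "history.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// WAL keeps readers cheap; synchronous=OFF skips fsync on commit,
	// so a power loss can lose recent transactions. Acceptable only
	// when the state can be rebuilt from elsewhere (e.g. Git).
	for _, pragma := range []string{
		"PRAGMA journal_mode=WAL;",
		"PRAGMA synchronous=OFF;",
	} {
		if _, err := db.Exec(pragma); err != nil {
			log.Fatal(err)
		}
	}

	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS history (
		id INTEGER PRIMARY KEY AUTOINCREMENT,
		entry TEXT NOT NULL
	)`); err != nil {
		log.Fatal(err)
	}
	if _, err := db.Exec(`INSERT INTO history (entry) VALUES (?)`, "ran step 1"); err != nil {
		log.Fatal(err)
	}
}
```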

The mem_etcd implementation in Rust is particularly interesting - curious if you considered using FoundationDB's storage engine or something similar vs rolling your own? The per-prefix file approach is clever for reducing write amplification.

Fantastic work - this kind of empirical systems research is exactly what the community needs more of. The "what are the REAL limits" approach vs "conventional wisdom says X" is refreshing.

up2isomorphism · 2 months ago
“Perhaps my spiciest take from this entire project: most clusters don’t actually need the level of reliability and durability that etcd provides.”

This assumption is completely out of touch, and is especially funny when the goal is to build an extra large cluster.

itsnowandnever · 2 months ago
etcd is also the entire point of k8s: it's a single self-contained framework that doesn't require an external backing service. There is no Kubernetes without etcd. Much of the "secret sauce" of Kubernetes is the "watch etcd" logic that watches desired state and runs the cybernetic loop to bring the observed state into line with the desired state.
trenchpilgrim · 2 months ago
The API and controller loops are the point of k8s. etcd is an implementation detail and lots of clusters swap it out for something else like sqlite. I'm pretty sure that GCP and Azure are using Spanner or Cosmos instead of etcd for their managed offerings.
jauntywundrkind · 2 months ago
The API server is the thing. It so happens that the API server can mostly be a thin shell over etcd. But etcd itself, while so common, is not sacrosanct.

https://github.com/k3s-io/kine is a reasonably adequate substitute for etcd; SQLite, MySQL, and PostgreSQL can also be substituted in. etcd is built from the ground up to be more scale-out reliable, and it rocks to have that baked in. But given how easy it is to substitute etcd out, I feel we are at least a little off if we're trying to say "etcd is also the entire point of k8s" (the apiserver is).

geoctl · 2 months ago
Is it? I honestly kinda believe that etcd is probably the weakest point in vanilla k8s. It is simply unsuitable for write-heavy environments: it causes lots of consistency problems under heavy write loads, it's generally slow, it has value size constraints, it offers very primitive querying, etc. Why not replace etcd altogether with something like Postgres + Redis/NATS?
cyberax · 2 months ago
> etcd is also the entire point of k8s: it's a single self-contained framework that doesn't require an external backing service. There is no Kubernetes without etcd.

Sorry, this is just BS. etcd is a fifth wheel in most k8s installations. Even the largest clusters are better off with something like a large-ish instance running a regular DB for the control plane state storage.

Yes, etcd theoretically protects against any kind of node failures and network partitions. But in practice, well, nobody really cares about the control plane being resilient against meteorite strikes and Cthulhu rising from the deeps.

kevin_nisbet · 2 months ago
I'm with you: I think most people might think they don't need this reliability, until they do. I'm sure there is some subset of clusters where the claim is correct.

But take the article's suggestion of turning off fsync and expecting to lose only a few ms of updates: I've tried to recover etcd on volumes that lied about fsync and then experienced a power outage, and I don't think we managed to recover it. There might be more options now to recover and ignore corrupted WAL entries, but at the time it was very difficult and I think we ended up just reinstalling from scratch. For clusters where this doesn't matter, or where the SLOs for recovery account for this, I'm totally onboard, but only if you know what you're doing.

And similarly, the article's point that "full control plane data loss isn’t catastrophic in some environments" is correct, in the sense of what the author means by some environments. I don't think it's limited to those managed by GitOps as suggested, but rather to those with enough resiliency and time to redeploy and do all the cleanup.

Anyways, like much advice on the internet, it's not good or bad, just highly situational, and some of the suggestions should only be applied if the implications are fully understood.