This is certainly interesting, and something I often wonder about. In our case we mostly run Kubernetes clusters on EX130 servers (Xeon, 24 cores, 256 GB). With that many processes running per node, the higher core count and extra memory seem worth it. And since we pay for 10-25G private networking on each server, a lower node count tends to come out more economical.
But with fewer processes I can totally believe this works out to be the better option. Thank you for the write-up!
Thanks for the comment! Yeah, if the servers are dedicated to your own workloads and the larger machines pack in more per node, it definitely makes sense.
In our case, though, giving each VM 1/48 of a 10 Gbit link (roughly 200 Mbit/s) really doesn't work for our end customers. So we're trying to provide VMs from a smaller but more up-to-date lineup instead.
It is surprisingly hard to keep a modern CPU core properly saturated unless you are baking global illumination, searching for primes, or mining cryptocurrency. I/O and latency will almost always dominate at scale. Moving information is way more expensive than processing it.
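For what it's worth, here is a minimal Go sketch of my own (not from the article) illustrating that last point: both loops perform the same additions, but the second one fetches each value from main memory in a cache-hostile order, so the core spends most of its time waiting on data instead of computing.

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	const n = 1 << 24 // 16M int64 values, ~128 MiB: far larger than the CPU caches

	data := make([]int64, n)
	for i := range data {
		data[i] = int64(i)
	}

	// Cache-friendly: walk the slice in order. The hardware prefetcher keeps the core fed.
	start := time.Now()
	var seqSum int64
	for i := 0; i < n; i++ {
		seqSum += data[i]
	}
	seqTime := time.Since(start)

	// Cache-hostile: visit the same elements in a random order. Most accesses miss
	// the caches, so each add has to wait on a trip to DRAM.
	perm := rand.Perm(n)
	start = time.Now()
	var rndSum int64
	for _, idx := range perm {
		rndSum += data[idx]
	}
	rndTime := time.Since(start)

	fmt.Printf("sequential: %v  random: %v  (sums equal: %v)\n",
		seqTime, rndTime, seqSum == rndSum)
}
```

The arithmetic is identical in both cases; only how the data moves changes, and on typical hardware the random-order loop is an order of magnitude slower.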