> Once network IO became the main bottleneck, the faster CPU mattered less.
It is surprisingly hard to keep a modern CPU core properly saturated unless you are baking global illumination, searching for primes, or mining cryptocurrency. I/O and latency will almost always dominate at scale. Moving information is far more expensive than processing it.
Wouldn't the cores be mostly waiting for RAM to dereference a zillion pointers? The cores would still show up in `top` as busy, but wouldn't actually be doing much.
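For illustration, here is a minimal Go sketch of that effect (the program and its sizes are made up for this example, not taken from anyone's benchmark): both loops show a core as roughly 100% busy in `top`, but the dependent random chase spends most of its time stalled on DRAM rather than computing.

```go
// A rough illustration, not a rigorous benchmark: both loops keep a core
// "busy", but the dependent random chase is bound by DRAM latency.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

const n = 1 << 25 // ~33M int64s (~256 MB), far larger than any last-level cache

func main() {
	// Build a single random cycle: next[i] holds the index of i's successor.
	next := make([]int64, n)
	perm := rand.Perm(n)
	for i := 0; i < n-1; i++ {
		next[perm[i]] = int64(perm[i+1])
	}
	next[perm[n-1]] = int64(perm[0])

	// Sequential pass: hardware prefetchers keep the core fed.
	start := time.Now()
	var sum int64
	for i := 0; i < n; i++ {
		sum += next[i]
	}
	fmt.Printf("sequential sum: %v (sum=%d)\n", time.Since(start), sum)

	// Dependent random chase: every load must wait for the previous one.
	start = time.Now()
	idx := int64(0)
	for i := 0; i < n; i++ {
		idx = next[idx]
	}
	fmt.Printf("random chase:   %v (end=%d)\n", time.Since(start), idx)
}
```

On typical desktop hardware the sequential pass tends to finish an order of magnitude or more faster than the chase, even though both report full CPU utilization.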
Big machines are already more like clusters (NUMA), where access to memory outside a core's domain is much slower. I suspect compute will become more and more dispersed within RAM.
Lol, transputers getting their 5-yearly re-mention.
Transputers just came 30+ years too early.
In most of the ways that matter, Transputers weren't too early. And if you built one again now, you'd find little to no market for it (as shown by the efforts of several startups and various related research projects).
Sources: many conversations with the architect of the Transputer and several of the engineers who designed the hardware and Occam, plus watching the Inmos 40th Anniversary lectures on YouTube (channel: @ednutting). Also being in the processor design startup space.
> It is surprisingly hard to keep a modern CPU core properly saturated
Modern PS5 development already shows SSD I/O getting faster than a CPU core can keep up with. It is also not true when the CPU is still the limiting factor on a web server.
But I figure it is a broad field, so I'm curious what you're doing and whether it is the best use of time and energy.
I'm also assuming that the generative AI model wouldn't run well on your machine and would need to be elsewhere.
Some major game engines still have large single-threaded bottlenecks. They aren't fundamental to the problem space, though; they come more from legacy engine design decisions.
Being single-thread bottlenecked doesn't mean they are actually saturating a CPU core; that thread may well be waiting on data from RAM for a lot, if not most, of the time.
AMD’s fastest consumer CPUs are a great value for small servers. If you’re doing just one task (like in this article) the clock speed is a huge benefit.
The larger server grade parts start to shine when the server is doing a lot of different things. The extra memory bandwidth helps keep the CPU fed and the higher core count reduces the need for context switching because your workloads aren’t competing as much.
The best part about the AMD consumer CPUs is that you can even use ECC RAM if you get the right motherboard.
Asrock Rack and Supermicro sell AM4/AM5 motherboards with support for ECC UDIMMs. Other vendors might state official support on workstation-class motherboards, and in general it might work even on boards without official support.
https://www.asrock.com/mb/AMD/X870%20Taichi%20Creator/index....
Asus has options as well, such as https://www.asus.com/motherboards-components/motherboards/pr...
I think it was rarer when AM5 first came out; there were a bunch of ECC-supporting consumer boards for AM4 and Threadripper.
"Consumer" shouldn't mean garbage. Between random bit flips in an environment where you have 16 GiB of RAM or more (common in gaming setups now) and Rowhammer, ECC should be the standard. It's only not so that chip and RAM vendors can bin and charge a premium for the good stuff.
I have 2 Supermicro H13SAE-MF with Ryzen 9's and ECC UDIMM RAM. It's not registered or LR ECC RAM like a mainstream server though, and not as fancy as Chipkill-like ECC systems. This particular board also accepts EPYC 4004 / 4005 series. I'll probably replace the Ryzens with EPYC 4585PX once they get old enough and cheap enough on the secondary markets. These boxes are 100 Gbps network test nodes.
(I'm currently in the midst of refanning my CSE847-JBOD with bigger, quieter fans and swapping PSUs.)
This is certainly interesting, and is something I often wonder about. In our case we mostly run Kubernetes clusters on EX130 servers (Xeon, 24c, 256GB). In this situation there are a lot of processes running, for which the increased core count and memory availability seem worth it. Particularly when we factor in the cost of 10-25G private networking for each server, lower node counts seem to come out more economical.
But with fewer processes I can totally believe this works out to be the better option. Thank you for the write-up!
Thanks for the comment! Yeah, if you are using the servers dedicated to yourself, and considering that the larger server packs in more, it definitely makes sense.
In our case, though, if we provide 1/48th of a 10Gbit network (roughly 200 Mbit/s per VM), it really doesn't work for our end customers. So we're trying to provide the VMs from a smaller but more up-to-date lineup.
> In this situation there are a lot of processes running, for which the increased core count and memory availability seem worth it.
It's always about the workload type. For mixed environments (some nodes under heavy constant load while others only see occasional spikes), the increase in RAM per node was the most important part, which allowed us to actually decrease the node count. Whole racks with multiple switches were replaced by a single rack with a modest number of servers and a single stacked switch.
If you are running engineering jobs (HPC) like electrical simulation for chip design, the only two things you care about are CPU clock speed and memory read/write speed.
It's unfortunate that we can only get 16-core CPUs running at 5+ GHz. I would have loved a 32- or 64-core Ryzen 9. The software we use charges per core used, so 30% less per-core performance is that much extra cost, which is easily an order of magnitude more than the cost of a flagship server CPU. These licenses cost millions per year for a couple of 16-core seats.
So, in the end, CPU speed determines how fast and how economically chips are developed.
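A back-of-the-envelope sketch of that economics, with entirely hypothetical numbers (the license price, required throughput, and per-core figures below are invented for illustration):

```go
// Hypothetical numbers only: why per-core licensing turns clock speed
// into an economic question, not just a technical one.
package main

import (
	"fmt"
	"math"
)

func main() {
	const (
		licensePerCoreYear = 50_000.0 // hypothetical $/core/year for an EDA tool
		jobsPerYearNeeded  = 1_000.0  // hypothetical required throughput
	)
	fast := 10.0       // jobs/core/year on a high-clock 16-core desktop part
	slow := fast * 0.7 // a part with 30% less per-core performance

	for _, perf := range []float64{fast, slow} {
		cores := math.Ceil(jobsPerYearNeeded / perf)
		fmt.Printf("perf=%.1f jobs/core/yr -> %3.0f cores -> $%.0f/yr in licenses\n",
			perf, cores, cores*licensePerCoreYear)
	}
}
```

With these made-up inputs, 30% lower per-core performance means roughly 43% more cores (1/0.7 ≈ 1.43), and therefore roughly 43% more license spend, which dwarfs any hardware price difference.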
> These licenses cost millions per year for a couple of 16-core seats
The ROI on hiring a professional overclocker to build, tune, and test a workstation is probably at least break-even. As long as the right checksums are in place, extreme OC is just a business write-off.
https://blackcoretech.com/
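As a sketch of what "the right checksums" could mean in practice (a hypothetical verification tool, assuming the job is deterministic so a known-good machine can produce a reference hash):

```go
// Sketch: detect silent corruption from an unstable overclock by hashing
// the output of a deterministic job and comparing against a reference
// hash produced on a known-good machine.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func fileSHA256(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	if len(os.Args) != 3 {
		fmt.Println("usage: verify_output <result-file> <expected-sha256>")
		os.Exit(1)
	}
	got, err := fileSHA256(os.Args[1])
	if err != nil {
		fmt.Println("error:", err)
		os.Exit(1)
	}
	if got != os.Args[2] {
		fmt.Println("MISMATCH: rerun the job or back off the overclock")
		os.Exit(1)
	}
	fmt.Println("ok:", got)
}
```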
I had a conversation like this with a business that had been around for decades and suddenly grew 100x because some market they were in “took off”. They had built up decades of integration with a legacy database that was single threaded and hence they couldn’t scale it.
Given the urgency and the kind of money involved, I offered to set up a gaming PC for them using phase-change cooling. Sadly they just made the staff work longer hours to catch up with the paperwork.
Time is money. Or the inverse of money. Ufff, my head hurts.
Unfortunately, the page's numbers are represented in a sloppy way. A benchmark number with a dollar sign. Different job counts. Lacking documentation. I wouldn't trust this data too much.
From a cloud perspective I guess that would make sense. But if you actually own the hardware, you would be looking at performance per watt across single-core and multi-core, and how it balances against I/O, both sequential and random. Because at the end of the day you are still limited by the power budget. And single-core boost is not sustainable over long periods, especially in a many-core CPU scenario.
On that note, I can't wait to see the 256-core Zen6c later this year. We will soon be able to buy a server with 512 cores, 1024 vCPUs / threads, 2 TB of memory, and x TB of SSD, all inside 1U.
That said, at such a low core count the primary EPYC advantage is PCIe lanes, no?
Yes, you're right, but we tried to keep the workloads less cache-dependent.
Also, EPYC's PCIe advantage doesn't hold for the Hetzner-provided server setup, unfortunately, because the configurator allows the same number of devices to be attached to either server.