"5000 Erlangs" - oh, they meant 5000 instances of some Erlang interpreter.
Not Erlang as a unit of measure.[1] One voice call for one hour is one Erlang.
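For anyone curious, the unit works out as offered traffic = call arrival rate * mean holding time. A tiny sketch in Erlang-the-language, with made-up numbers for illustration:

    %% Offered traffic in Erlangs = call arrival rate * mean call duration.
    %% The numbers below are hypothetical.
    erlangs(CallsPerHour, MeanHoldingHours) ->
        CallsPerHour * MeanHoldingHours.

    %% erlangs(600, 0.05) -> 30.0
    %% i.e. 600 calls/hour, each lasting 3 minutes (0.05 h),
    %% means about 30 calls in progress on average: 30 Erlangs.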
Neat! I always thought the name of the Erlang programming language just meant “Ericsson Language”, since this programming language was invented for Ericsson. Never knew there was anything more than that to the name!
According to Robert Virding (at an unnamed bar in Berlin, ~3 years ago), they just wanted to be like Pascal in picking a mathematician for the name. But "Ericsson Language" certainly helped sell it internally, I'm sure.
I was aware of Erlang being a unit, though I'd forgotten what it measured. I need to have my fun when giving titles to these things. Hope it fell within bearable tolerances.
But if you are looking at a hosted Erlang VM for a capex of one dollar, then these folks are onto something.
Cores really are the only way to escape the broken Moore's law - and this does look like a real step in the important direction. Fewer LLMs, more tiny cores.
The article is about 5000 Erlang nodes (BEAM VMs), not processes - a single BEAM instance can efficiently handle millions of lightweight processes, making this even more impressive from a density perspective.
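As a minimal sketch of what "millions of lightweight processes in one VM" looks like in practice (assuming the VM was started with a raised process limit, e.g. erl +P 2000000, since the default limit is much lower):

    %% Spawn N mostly-idle processes and report how many the VM is running.
    %% Each idle process costs on the order of a few hundred words of memory.
    spawn_idle(N) ->
        Pids = [spawn(fun() -> receive stop -> ok end end) || _ <- lists:seq(1, N)],
        Count = erlang:system_info(process_count),
        [P ! stop || P <- Pids],
        Count.

    %% spawn_idle(1000000) should return a count slightly above one million.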
I really like the manycores approach, but we haven’t seen it come to fruition — at least not on general purpose machines. I think a machine that exposes each subset of cores as a NUMA node and doesn’t try to flatten memory across the entire set of cores might be a much more workable approach. Otherwise the interconnect becomes the scaling limit quickly (all cores being able to access all memory at speed).
Erlang, at least the programming model, lends itself well to this, where each process has a local heap. If that can stay resident to a subsection of the CPU, that might lend itself better to a reasonably priced many core architecture.
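To make the "local heap" point concrete, here's a minimal sketch (the sizes are whatever your OTP release reports; nothing here is from the article): each process owns its heap, and sending a message copies the term into the receiver's heap rather than sharing memory. Binding schedulers to cores (the +sbt emulator flag) is roughly the knob the BEAM exposes today for keeping work, and therefore heaps, on a subset of the CPU.

    %% Minimal sketch: each process owns its heap; message passing copies
    %% terms into the receiver's heap (large binaries are the exception).
    heap_demo() ->
        Pid = spawn(fun() ->
                        receive {data, List} ->
                            _ = length(List),          % touch the copied list
                            receive stop -> ok end     % stay alive for inspection
                        end
                    end),
        {heap_size, Before} = erlang:process_info(Pid, heap_size),
        Pid ! {data, lists:seq(1, 100000)},   % copied into Pid's own heap
        timer:sleep(50),
        {heap_size, After} = erlang:process_info(Pid, heap_size),
        Pid ! stop,
        {heap_size_words, Before, After}.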
> I think a machine that exposes each subset of cores as a NUMA node and doesn't try to flatten memory across the entire set of cores might be a much more workable approach. Otherwise the interconnect becomes the scaling limit quickly (all cores being able to access all memory at speed).
EPYC has a mode where it exposes 4 NUMA nodes per socket (NPS4), IIRC. It seems like that should be good if your software is NUMA-aware or NUMA-friendly.
But most desktop-class hardware has all the cores sharing a single memory controller anyway, so if you had separate NUMA nodes, it wouldn't reflect reality.
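For what it's worth, you can ask a running BEAM what topology it detected and whether its schedulers are bound to anything; a small sketch (whether the bindings show up as bound depends on emulator flags like +sbt):

    %% Inspect what the BEAM detected about the machine it runs on.
    topology() ->
        #{cpu_topology       => erlang:system_info(cpu_topology),
          schedulers_online  => erlang:system_info(schedulers_online),
          scheduler_bindings => erlang:system_info(scheduler_bindings)}.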
Reducing cross-core communication (NUMA or not) is the key to getting high-performance parallelism. Erlang helps because any cross-process communication is explicit, so there's no hidden communication, as can sometimes happen in languages with shared memory between threads. (Yes, ets is shared, but it's also explicit communication in my book.)
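A toy illustration of "explicit" (the table name and values are made up): the message send, the receive, and every ets read or write are all visible calls in the source, so there's no communication you can't point at.

    explicit_comms() ->
        Parent = self(),
        Pid = spawn(fun() ->
                        receive {add, A, B} -> Parent ! {sum, A + B} end
                    end),
        Pid ! {add, 1, 2},                        % explicit send
        Sum = receive {sum, S} -> S end,          % explicit receive
        Tab = ets:new(demo_table, [public]),      % shared, but still explicit:
        true = ets:insert(Tab, {answer, Sum}),    % every access is a call you can see
        [{answer, Sum}] = ets:lookup(Tab, answer),
        true = ets:delete(Tab),
        Sum.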
> Erlang, at least the programming model, lends itself well to this, where each process has a local heap.
That loosely describes plenty of multithreaded workloads, perhaps even most of them. A thread that doesn't keep its memory writes "local" to itself as much as possible will run into heavy contention with other threads, and performance will suffer a lot. It's usual to write multithreaded workloads in a way that minimizes the chance of contention, even though this may not involve a literal "one local heap per core".
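In Erlang terms the same discipline looks something like this (a toy sketch with arbitrary numbers; the named table is assumed to exist): bumping one shared ETS key per event makes every worker contend on that key, while counting locally and reporting once per worker keeps the writes in each process's own heap.

    %% Contended: every event is a write to one shared key in a public table
    %% created elsewhere as ets:new(counters, [named_table, public]).
    count_shared(NEvents) ->
        [ets:update_counter(counters, total, 1, {total, 0})
         || _ <- lists:seq(1, NEvents)],
        ok.

    %% Local: accumulate in the worker's own heap and report a single message.
    count_local(Collector, NEvents) ->
        Local = lists:foldl(fun(_, Acc) -> Acc + 1 end, 0, lists:seq(1, NEvents)),
        Collector ! {partial, Local},
        ok.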
Paraphrasing the late great Joe Armstrong: the great thing about Erlang, as opposed to just about any other language, is that every year the same program runs twice as fast as it did the year before.
Manycore hasn't succeeded because frankly the programming model of essentially every other language is stuck in 1950: I, the program, am the entire and sole thing running on this computer, and must manually manage resources to match its capabilities. Hence async/await, mutable memory, race checkers, function coloring, all that nonsense. If half the effort spent straining to keep the ghost of the PDP-11 ruling all our programming languages had been spent on cleaning up the (several) warts in the actor model and its few implementations, we'd all be driving Waymos on Jupiter by now.
> Erlang, at least the programming model, lends itself well to this, where each process has a local heap. If that can stay resident to a subsection of the CPU, that might lend itself better to a reasonably priced many core architecture.
I tend to agree.
Where it gets -really- interesting to think about is concepts like 'core parking' actors of a given type on specific cores; e.g. 'somebusinessprocess' actor code all happens on one fixed set of cores and 'account' actors run on a different fixed set of cores, versus having all the cores going back and forth between both.
You could theoretically get a benefit from the instruction cache staying very consistent per core - that kind of mechanical sympathy (I think Disruptors also take advantage of this).
On the other hand, it may not be as big a benefit, in the sense that cross-process writes become cross-core writes, and those tend to lead to their own issues...
Fun to think about.
Who knows what will really happen, but there have been rumours of significant core-count bumps in Ryzen 6, which would edge the mainstream significantly closer to manycore.
I actually found out that Ampere is into edge and telco usage well after we got connected to do this work. I've been an Elixir dev, and through that connected to Erlang, for 7-ish years.
But I will certainly try to leverage my telco-connection to get to play with more of their kit if I can.
Azul did something like this back in the '10s for Java. But it's one of those products for when you've put all your eggs in one basket and you need the biggest basket money can buy. Sort of like early battery-backed storage: it was only fit for WAL writes on mission-critical databases, because one cost more than a car.
Erlang handles heavy load VERY well, between work-stealing schedulers and soft real-time behaviour via reduction counting (any process can be interrupted after any instruction and resumed transparently).
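A toy way to see the reduction-based preemption (the numbers are arbitrary, and the per-timeslice reduction budget is an implementation detail on the order of a few thousand): even with far more busy-looping processes than cores, a sleeping process still wakes up roughly on time, because the spinners keep getting preempted.

    %% A pure-Erlang busy loop; it burns reductions and gets preempted.
    busy() -> busy().

    preemption_demo() ->
        Spinners = [spawn(fun busy/0) || _ <- lists:seq(1, 100)],
        T0 = erlang:monotonic_time(millisecond),
        timer:sleep(100),                               % should wake close to 100 ms
        Elapsed = erlang:monotonic_time(millisecond) - T0,
        [exit(P, kill) || P <- Spinners],
        {requested_ms, 100, actual_ms, Elapsed}.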
[1] https://en.wikipedia.org/wiki/Erlang_(unit)
Does 1 Animat convert to metric nitpicks?
You know you're successful once you're added to: https://www.theregister.com/Design/page/reg-standards-conver...
> But if you are looking at a hosted Erlang VM for a capex of one dollar, then these folks are onto something.
(And that also includes hosting, egress, power, etc).
https://www.hetzner.com/dedicated-rootserver/rx170/
In practice you can't, though.
> But I will certainly try to leverage my telco-connection to get to play with more of their kit if I can.
In other words, nepobaby fault tolerance
Just being able to start that many instances is not that exciting until we know what they can do.
However, BEAM is not the only factor here; the entire hardware platform matters as well.
This is, after all, a lot about that nice and huge CPU.
I mean, when you have all 5000 started, why not let them do some work? Stress-test it with a few real-life scenarios for 48h and let us see some numbers.