Thermodynamic Computing https://knowm.org/thermodynamic-computing/
It's the most high-influence, low-exposure essay I've ever read. As far as I'm concerned, this dude is a silent, prescient genius working quietly for DARPA, and I got a sneak peek into future science when I read it. It has affected my thinking and trajectory for the past 8 years.
This is literally a borderline crank article. "A new framework for physics and computing" turns out to be quantum annealing for SAT lol
The explanations about quantum mechanics are also imprecise and go nowhere towards the point of the article. Add a couple janky images and the "crank rant" impression is complete.
To be honest, I don't expect much in the way of sorting through these fuzzy, high-dimensional topics from someone like yourself, who gravitates toward formal logic systems as a way to understand the world. I would expect someone from your world to dismiss such things.
Yup. The framing of "matter spontaneously reorganizes into better energy-dissipating paths" is pretty cool, but it's not really adding any information. As soon as you ask how, you'll need all the same fields you already had. Thermodynamic computing is cool and I know of at least one place working on it, but they also kinda act like Nick Land lunatics about what's ultimately just a cool potential technology for an efficient computer for some tasks. It makes me worry retroactively that the tech is actually useless and they just take a lot of Adderall.
I will say that the philosophical remarks are pretty obtuse and detract from the post. For example...
"Physics–and more broadly the pursuit of science–has been a remarkably successful methodology for understanding how the gears of reality turn. We really have no other methods–and based on humanity’s success so far we have no reason to believe we need any."
Physics, which is to say physical methods, has indeed been remarkably successful... for the types of things physical methods select for! To say it is exhaustive not only begs the question; the claim itself is not even demonstrable by those methods.
The second claim contains the same error, but with more emphasis. This is just off-the-shelf scientism, and scientism, quite apart from the withering refutations it has received, should be obviously self-refuting. Is the claim that "we have no other methods but physics" (where physics is the paradigmatic empirical science; substitute accordingly) a scientific claim? Obviously not. It is a philosophical claim. That already refutes it.
Thus, philosophy has entered the chat, and this is no small concession.
Show us a real example of something that your putative non-physics science can do that physics cannot, in a way that would be comprehensible to a sufficiently open-minded scientist.
It seems unlikely you could suggest a concrete alternative to physics which explains observable phenomena as well while making generalizable predictions. Showing one would move your theoretical philosophy forward. In the meantime, the rest of us will stick to physics, because nobody has a coherent alternative that explains our observations.
I’m not sure I understand what you’re trying to say. It’s not really questionable that science and math are the only things to come out of philosophy or any other academic pursuit that have actually shown us how to objectively understand reality.
Now, physics vs. other scientific disciplines, sure. Physicists love to claim dominion, just like mathematicians do. It is generally true, however, that physics = math + reality, and we don't actually have any evidence of anything in this world existing outside a physical description (e.g. a lot of physics combined = chemistry, a lot of chemistry = biology, a lot of biology = sociology, etc.). Thus it's reasonable to assume that the chemistry in this world is 100% governed by the laws of physics, and transitively this is true for sociology too (indeed, game theory is one way we quantifiably explain the physical reality of why people behave the way they do). We also see this in math, where different disciplines have different "bridges" between them. Does that mean they're actually separate disciplines, or just that we've chosen to name features on the topology as such?
Man, this article is incredible. So many ideas resonate with me, but I could never formulate them. Thanks for sharing; all my friends have to read this.
I'd be interested to learn who paid for this machine!
Did Sandia pay list price? Or did SpiNNcloud Systems give it to Sandia for free (or at least at a heavily subsidised price)? I conjecture the latter. Maybe someone from Sandia is on the list here and can provide details?
SpiNNcloud Systems is known for making misleading claims, e.g. their home page https://spinncloud.com/ lists DeepMind, DeepSeek, Meta and Microsoft as "Examples of algorithms already leveraging dynamic sparsity", giving the false impression that those companies use SpiNNcloud Systems machines, or the specific computer architecture SpiNNcloud Systems sells.
Their claims about energy efficiency (like "78x more energy efficient than current GPUs") seem sketchy. How do they measure energy consumption, and how do they trade it off against compute capacity? E.g. a Raspberry Pi uses less absolute energy than an NVIDIA Blackwell, but is that a meaningful comparison?
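A "78x" headline number only becomes comparable if energy is normalized by useful work (e.g. joules per synaptic event or per inference) rather than raw watts. A toy sketch of that normalization, with made-up placeholder numbers, since neither vendor publishes directly comparable figures:

    # Toy comparison: absolute power vs. energy per unit of useful work.
    # All numbers are illustrative placeholders, not measured values.
    devices = {
        # name: (average power draw in watts, throughput in events/s)
        "raspberry_pi_like": (5.0, 1e7),
        "gpu_like":          (700.0, 1e11),
    }

    for name, (watts, events_per_s) in devices.items():
        joules_per_event = watts / events_per_s
        print(f"{name}: {watts:6.1f} W absolute, "
              f"{joules_per_event:.2e} J/event normalized")

    # The low-power device "wins" on watts but loses on J/event here,
    # which is why a bare "78x more efficient" claim is meaningless
    # without stating the workload and the denominator.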
I'd also like to know how to program this machine. Neuromorphic computers have so far been terribly difficult to program. E.g. have JAX, TensorFlow and PyTorch been ported to SpiNNaker 2? I doubt it.
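For context, first-generation SpiNNaker machines were typically programmed through the PyNN API (via the sPyNNaker backend) rather than through any of those frameworks. A minimal sketch of that style, assuming sPyNNaker is installed; whether SpiNNaker 2 keeps this interface is exactly the open question:

    # Minimal PyNN-style network, as used on first-generation SpiNNaker.
    import pyNN.spiNNaker as sim

    sim.setup(timestep=1.0)  # 1 ms simulation timestep

    # 100 leaky integrate-and-fire neurons driven by a Poisson source
    source = sim.Population(100, sim.SpikeSourcePoisson(rate=10.0))
    neurons = sim.Population(100, sim.IF_curr_exp())
    sim.Projection(source, neurons, sim.OneToOneConnector(),
                   synapse_type=sim.StaticSynapse(weight=0.5, delay=1.0))

    neurons.record(["spikes"])
    sim.run(1000.0)  # one second of simulated time

    spikes = neurons.get_data("spikes")
    sim.end()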
I don't know, but just wanted to say that my son got a job there as a mechanical engineer, and I couldn't be more proud. He can't tell me much because of classified status, but I can tell he loves his job and who he works with. Just sending praise to Sandia
Deep Mind (Google's reinforcement learning lab), Deep Seek (High-Flyer's LLM lab), Deep Crack (EFF's DES cracker), Deep Blue (IBM's chess computer), and Deep Thought (Douglas Adams' universal brain) all set the stage...
So naturally, this thing should be called Deep Spike, Deep Spin, Deep Discount, or -- given its storage-free design -- Deep Void.
If it can accelerate nested 2D FORTRAN loops, you could even call it Deep DO DO, and the next deeper version would naturally be called Deep #2.
JD Vance and Peter Thiel could gang up, think long and hard, go all in, and totally get behind vigorously pushing and fully funding a sexy supercomputer with more comfortably upholstered, luxuriously lubricated, passively penetrable cushioned seating than even a Cray-1, called Deep Couch. And the inevitable jealous break-up would be more fun to watch than the Musk-Trump bromance!
Sounds like the big brother of Dev Null? :)
I question how viable these architectures are when considering that accurate simulation of a spiking neural network requires maintaining strict causality between spikes.
If you don't handle effects in precisely the correct order, the simulation becomes less about the model and more about the architecture, the network topology, and how race conditions happen to resolve. We need to simulate a spike preceding another spike in exactly the right way, or things like STDP will wildly misfire. The "online learning" promised land will turn into a slip-and-slide.
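To make the ordering sensitivity concrete, here is a toy trace-based STDP rule; a sketch under the usual exponential-window assumptions, not any particular simulator's implementation. Delivering the same two spikes in the wrong order flips potentiation into depression:

    import math

    A_PLUS, A_MINUS, TAU = 0.01, 0.012, 20.0  # illustrative constants (ms)

    def run(events):
        """Trace-based STDP on one synapse, processing events in the
        order delivered (as hardware would). kind is 'pre' or 'post'."""
        w = 0.5
        pre_trace = post_trace = 0.0
        t_last = None
        for t, kind in events:
            if t_last is not None:           # decay traces to current time
                decay = math.exp(-(t - t_last) / TAU)
                pre_trace *= decay
                post_trace *= decay
            if kind == "pre":
                w -= A_MINUS * post_trace    # depress by recent post activity
                pre_trace += 1.0
            else:
                w += A_PLUS * pre_trace      # potentiate by recent pre activity
                post_trace += 1.0
            t_last = t
        return w

    causal = [(10.0, "pre"), (12.0, "post")]   # correct delivery order
    racy   = [(12.0, "post"), (10.0, "pre")]   # same spikes, misordered
    print(run(causal))  # ~0.50905: pre-before-post potentiates
    print(run(racy))    # ~0.48674: the same spikes now depress the synapse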
A priority queue using a quaternary min-heap implementation is approximately the fastest way I've found to serialize spikes on typical hardware. This obviously isn't how it works in biology, but we are trying to simulate biology on a different substrate so we must make some compromises.
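A minimal sketch of that structure, assuming spikes are (time, payload) tuples; the branching factor of 4 gives a shallower tree, and therefore fewer cache-missing levels per operation, than a binary heap:

    class QuaternaryMinHeap:
        """Min-heap with branching factor 4, keyed on spike time.
        Children of node i live at indices 4*i+1 .. 4*i+4."""

        def __init__(self):
            self._a = []  # list of (time, payload) tuples

        def push(self, item):
            a = self._a
            a.append(item)
            i = len(a) - 1
            while i > 0:                     # sift up
                parent = (i - 1) >> 2
                if a[parent] <= a[i]:
                    break
                a[parent], a[i] = a[i], a[parent]
                i = parent

        def pop(self):
            a = self._a
            top = a[0]
            last = a.pop()
            if a:
                a[0] = last
                i, n = 0, len(a)
                while True:                  # sift down
                    child = 4 * i + 1
                    if child >= n:
                        break
                    best = min(range(child, min(child + 4, n)),
                               key=a.__getitem__)
                    if a[i] <= a[best]:
                        break
                    a[i], a[best] = a[best], a[i]
                    i = best
            return top

    # Spikes come off the queue in strict time order, however they arrived:
    q = QuaternaryMinHeap()
    for spike in [(12.0, "post"), (10.0, "pre"), (11.5, "pre")]:
        q.push(spike)
    print(q.pop(), q.pop(), q.pop())  # (10.0, 'pre') (11.5, 'pre') (12.0, 'post')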
I wouldn't argue that you couldn't achieve wild success in a distributed & more non-deterministic architecture, but I think it is a very difficult battle that should be fought after winning some easier ones.
There are some counterintuitive design choices that emerge, and it inevitably leads to vector machines with arbitrary-order tensor capabilities.
Have a wonderful day =)
Artem Kirsanov provides some insights into the neurochemistry and types of neurons in his latest analysis [0] of distinct synaptic plasticity rules that operate across dendritic compartments. When simulating neurons with a more realistic approach, the timing can be deterministic.
[0] https://www.youtube.com/watch?v=9StHNcGs-JM
I see "storage-free"... and then learn it still has RAM (which IS storage) ugh.
John von Neumann's concept of the instruction counter was great for the short run, but eventually we'll all learn it was a premature optimization. All those transistors tied up as RAM, just waiting to be used most of the time; a huge waste.
In the end, high speed computing will be done on an evolution of FPGAs, where everything is pipelined and parallel as heck.
The article hides the truth, which is that it has no directly attached durable storage but is connected via a fabric to existing HPC hardware, which almost certainly loads data onto the SpiNNaker and stores results.
At the end of the day, processors really just load data, process, and store back to durable data (or generate some visible side effect).
But in this case, one wouldn't be subject to macro-scale nonlinear effects arising from the uncertainty principle when trying to "restore" the system.
- 152 cores per chip, equivalent to ~128 CUDA cores per SM
- per-chip SRAM (20 MB) equivalent to SM high-speed shared memory
- per-board DRAM (96 GB across 48 chips) equivalent to GPU global memory
- boards networked together with something akin to NVLink
I wonder if they use HBM for the DRAM, or do anything like coalescing memory accesses.
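Taking those per-chip and per-board figures at face value, and assuming the ~2,304 GB system total another commenter arrives at below, the implied machine size works out as follows (back-of-the-envelope only; the board count is inferred, not published here):

    # Back-of-the-envelope totals from the per-board figures above.
    cores_per_chip = 152
    chips_per_board = 48
    dram_per_board_gb = 96
    total_dram_gb = 2304                  # figure quoted below in the thread

    boards = total_dram_gb // dram_per_board_gb          # 24
    cores_per_board = cores_per_chip * chips_per_board   # 7,296
    total_cores = boards * cores_per_board               # 175,104

    print(boards, cores_per_board, total_cores)  # 24 7296 175104
    # ~175k cores matches the size reported for the Sandia system,
    # and is nowhere near "138240 terabytes" of RAM.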
Oh... 138,240 terabytes of RAM. Ok.
So a paltry 2,304 GB RAM
https://cointelegraph.com/news/neuromorphic-computing-breakt...