After reading through all the comments as of 2024-05-11, I (a professor at some major university) am quite surprised that not a single one has asked the obvious question, instead of dishing out loads of partial "textbook knowledge" about brain functions, the difference between mammals and birds, AI and LLMs, etc. The obvious question would be: what do all those strange structures and objects, about which we know nothing whatsoever, actually do? Have a look:
Sure, if you think of it as an egg, instead of as a galaxy of electrons and atoms so dense as to have structure big enough for us to give it the label "egg shaped object".
As a complete outsider who doesn't know what to look for, the dendrite inside soma (dendrite from one cell tunnelling through the soma of another) was the biggest surprise.
The interactive visualization is pretty great. Try zooming in on the slices and then scrolling up or down through the layers. Also try zooming in on the 3D model. Notice how hovering over any part of a neuron highlights all parts of that neuron.
If someone did this experiment with a crow brain I imagine it would look “twice as complex” (whatever that might mean). 250 million years of evolution separates mammals from birds.
I wonder: if we manage to annotate our brain at this level of detail, and then let (some variant of the current) models train on it, will they intrinsically end up generalizing a model of intelligence?
> The 3D map covers a volume of about one cubic millimetre, one-millionth of a whole brain, and contains roughly 57,000 cells and 150 million synapses — the connections between neurons.
This is great and provides a hard data point for some napkin math on how big a neural network model would have to be to emulate the human brain. 150 million synapses / 57,000 neurons is an average of 2,632 synapses per neuron. The adult human brain has 100 (+- 20) billion, or 1e11, neurons, so assuming that synapse/neuron ratio holds, that's 2.6e14 total synapses.
Assuming 1 parameter per synapse, that'd make the minimum viable model well over a hundred times larger than the state-of-the-art GPT-4 (rumored to be 1.8e12 parameters). I don't think that's granular enough, though: we'd need to assume 10-100 ion channels per synapse and at least 10 parameters per ion channel, putting the number closer to 2.6e16+ parameters, or 4+ orders of magnitude bigger than GPT-4.
There are other problems of course, like implementing neuroplasticity, but it's a fun ballpark calculation. Computing power should get there around 2048: https://news.ycombinator.com/item?id=38919548
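If anyone wants to poke at the assumptions, here's the same arithmetic as a small Python script (every constant is an assumption from above, not a measured fact):

    # Napkin math from the comment above; all constants are assumptions.
    synapses_sampled = 150e6      # synapses in the 1 mm^3 sample
    neurons_sampled = 57_000      # neurons in the sample
    neurons_brain = 1e11          # ~100 billion neurons in an adult brain
    gpt4_params = 1.8e12          # rumored GPT-4 parameter count

    syn_per_neuron = synapses_sampled / neurons_sampled   # ~2,632
    total_synapses = syn_per_neuron * neurons_brain       # ~2.6e14

    params_low = total_synapses * 1          # 1 parameter per synapse
    params_high = total_synapses * 10 * 10   # ~10 ion channels x ~10 params each

    print(f"{syn_per_neuron:.0f} synapses/neuron, {total_synapses:.1e} synapses total")
    print(f"low:  {params_low:.1e} params ({params_low / gpt4_params:.0f}x GPT-4)")
    print(f"high: {params_high:.1e} params ({params_high / gpt4_params:.0f}x GPT-4)")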
Or you can subscribe to Geoffrey Hinton's view that artificial neural networks are actually much more efficient than real ones- more or less the opposite of what we've believed for decades- that is that artificial neurons were just a poor model of the real thing.
Quote:
"Large language models are made from massive neural networks with vast numbers of connections. But they are tiny compared with the brain. “Our brains have 100 trillion connections,” says Hinton. “Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does. So maybe it’s actually got a much better learning algorithm than us.”
GPT-4's connections at the density of this brain sample would occupy a volume of 5 cubic centimeters; that is, 1% of a human cortex. And yet GPT-4 is able to speak more or less fluently about 80 languages, translate, write code, imitate the writing styles of hundreds, maybe thousands of authors, converse about stuff ranging from philosophy to cooking, to science, to the law.
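That volume figure checks out roughly, taking the sample's synapse density and Hinton's connection-count range at face value (a sketch assuming both numbers):

    # 150e6 synapses per mm^3 from the sample; cortex volume ~500 cm^3.
    density_per_mm3 = 150e6
    for connections in (0.5e12, 1.0e12):   # "half a trillion, a trillion at most"
        cm3 = connections / density_per_mm3 / 1e3
        print(f"{connections:.1e} connections -> {cm3:.1f} cm^3")
    # ~3.3 and ~6.7 cm^3, i.e. on the order of 1% of a cortex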
I mean, Hinton’s premises are, if not quite clearly wrong, entirely speculative (which doesn't invalidate the conclusions about efficiency that they are offered to support, but does leave them without support). GPT-4 can produce convincing written text about a wider array of topics than any one person can because it's a model optimized for taking in and producing convincing written text, trained extensively on written text.
Humans know a lot of things that are not revealed by inputs and outputs of written text (or imagery), and GPT-4 gives no indication of this physical, performance-revealed knowledge. So even if we count what GPT-4 talks convincingly about as “knowledge”, trying to compare its knowledge in the domains it operates in with any human’s knowledge, which is far more multimodal, is... well, there's no good metric for it.
Hinton is way off, IMO. The number of examples needed to teach language to an LLM is many orders of magnitude greater than what humans require. Not to mention power consumption and inelasticity.
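Rough numbers behind "many orders of magnitude" (ballpark figures I'm assuming, not measurements):

    # Frontier LLM corpora are on the order of 1e13 tokens; a generous
    # estimate for a human is ~20k words heard/read per day for 20 years.
    llm_tokens = 1e13
    human_words = 2e4 * 365 * 20   # ~1.5e8 words
    print(f"human exposure ~{human_words:.1e} words")
    print(f"LLM/human ratio ~{llm_tokens / human_words:.0e}")   # ~7e4, 4-5 orders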
An LLM does not know math as well as a professor, judging from the large number of false functional analysis proofs I have had one generate while trying to learn functional analysis. In fact, what it seems to lack is a sense of what makes a proof true versus fallacious; it also has a tendency to answer ill-posed questions. "How would you prove this incorrectly transcribed problem?" gets fourteen steps, with steps 8 and 12 obviously (to a student) wrong, while the professor will step back and ask "what am I trying to prove?"
Computation is really integrated through every scale of cellular systems. Individual proteins are capable of basic computations, which are then integrated into regulatory circuits, epigenetics, and cellular behavior.
The calculation is intentionally underestimating the neurons, and even with that the brain ends up having more parameters than the current largest models by orders of magnitude.
Yes, the estimate intentionally models the neurons as simpler than they likely are. No, it is not “missing” anything.
That may or may not still be too simple a model. Cells are full of complex nanoscale machinery, and not only is it plausible that some of it is involved in the processes of cognition, I'm aware of at least one study that identified nanoscale structures directly involved in how memory works in neurones. Not to mention that a lot of what's happening has a fairly analogue dimension.
I remember an interview with a neurologist who said humanity has for centuries compared the functioning of the brain to the most complex technology yet devised. First it was compared to mechanical devices, then pipes and steam, then electrical circuits, then electronics, and now finally computers. But, he pointed out, the brain works like none of these things, so we have to be aware of the limitations of our models.
Based on the stuff I've read, it's almost for sure too simple a model.
One example is that single dendrites detect patterns of synaptic activity (sequences over time) which results in calcium signaling within the neuron and altered spiking.
There's a lot of in-neuron complexity, I'm sure there is some cross-synapse signaling (I mean, how can it not exist? There's nothing stopping it.), and I don't think the synapse behavior can be modeled as just more signals.
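A toy illustration of that gap (my own construction, not anything from the paper): a point-neuron sum with one scalar weight per synapse is invariant to input order, so it cannot even express the kind of within-dendrite sequence detection described above.

    # Point-neuron summation is order-invariant; a crude, made-up
    # sequence-sensitive rule is not.
    inputs = [1.0, 0.5, 0.2]   # synaptic activations, distal -> proximal

    def point_neuron(xs, w=1.0):
        # ANN-style weighted sum: arrival order is ignored entirely
        return sum(w * x for x in xs)

    def sequence_sensitive(xs, boost=1.5):
        # stand-in for dendritic sequence detection: amplify inputs that
        # arrive in a decreasing "wave" -- NOT a biophysical model
        out, prev = 0.0, float("inf")
        for x in xs:
            out += x * (boost if x < prev else 1.0)
            prev = x
        return out

    print(point_neuron(inputs) == point_neuron(inputs[::-1]))            # True
    print(sequence_sensitive(inputs), sequence_sensitive(inputs[::-1]))  # 2.55 1.8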
Yes and no on the order of magnitude required for decent AI: there is still (as far as I know) very little hard data on information density in the human brain. What there is points at entire sections that can sometimes be destroyed or actively removed while conserving "general intelligence".
Rather than "humbling" I think the result is very encouraging: It points at major imaging / modeling progress, and it gives hard numbers on a very efficient (power-wise, size overall) and inefficient (at cable management and probably redundancy and permanence, etc) intelligence implementation. The numbers are large but might be pretty solid.
We may not get there. Doing some more back of the envelope calculations, let's see how much further we can take silicon.
Currently, TSMC has a 3nm chip. Let's halve it until we get to the atomic radius of silicon, 0.132 nm. That's not a good value because we're not considering crystal lattice distances, Heisenberg uncertainty, etc., but it sets a lower bound. 3nm -> 1.5nm -> 0.75nm -> 0.375nm -> 0.1875nm; the next halving would dip below the atomic radius, so we get at most four more generations out of silicon. That's a max of about six years of Moore's law (at ~1.5 years per generation) we're going to be able to squeeze out, which means we will not make it much past 2030 with this kind of improvement.
I'd love to be shown how wrong I am about this, but I think we're entering the horizontal portion of the sigmoidal curve of exponential computational growth.
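The same halving series as a loop, with the same (admittedly crude) 0.132 nm floor:

    # Halve the 3 nm node until the next step would dip below silicon's
    # atomic radius -- a crude floor, per the caveats above.
    node_nm, floor_nm, gen = 3.0, 0.132, 0
    while node_nm / 2 > floor_nm:
        node_nm /= 2
        gen += 1
        print(f"gen {gen}: {node_nm:.4f} nm")
    # four halvings stay above the floor; at ~1.5 years per generation
    # that's roughly six years of naive scaling, i.e. out around 2030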
As important and impressive a result as this is, I am reminded of the cornerstone problem of neuroscience, which goes something like this: if we knew next to nothing about processors but could attach electrodes to the die, would we be able to figure out how processors execute programs and what those programs do, in detail, from the measurements alone? Now scale that up several orders of magnitude and introduce sensitivity to signal arrival timing, and you've got the brain. Likewise, we have petabytes of data now, but will we ever get closer to understanding, for example, how cognition works? It was a bit of a shock for me to find out (while taking an introductory computational neuroscience course) that we simply do not have tractable math to model more than a handful of neurons in the time domain. And they do actually operate in the time domain: timings matter for Hebbian learning, and there's no global "clock"; everything the brain does is a continuous process.
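For a concrete sense of what "modeling a neuron in the time domain" means, here's a minimal leaky integrate-and-fire sketch (the standard textbook toy, with arbitrary parameters). Even this ignores dendrites, channel kinetics, and spike-timing-dependent plasticity, and analysis already becomes intractable for small networks of these:

    # Minimal leaky integrate-and-fire neuron, Euler-integrated in time.
    import numpy as np

    dt, tau = 0.1, 10.0                                 # ms
    v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0     # mV
    t = np.arange(0, 100, dt)
    i_in = np.where((t > 20) & (t < 80), 2.0, 0.0)      # step current, arb. units

    v = np.full_like(t, v_rest)
    spikes = []
    for k in range(1, len(t)):
        dv = (-(v[k - 1] - v_rest) + i_in[k - 1] * tau) / tau
        v[k] = v[k - 1] + dv * dt            # leak toward rest, driven by input
        if v[k] >= v_thresh:                 # threshold crossing -> spike
            spikes.append(float(round(t[k], 1)))
            v[k] = v_reset                   # hard reset; no refractory period
    print(f"{len(spikes)} spikes at t(ms) = {spikes}")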
I just read that article and enjoyed it. Thanks for sharing! I don’t think the author was arguing biological processes can’t be reverse engineered, but rather that the tools and approaches typically used by biology researchers may not be as effective as tools and approaches used by engineers.
Right. The arguments for the study of A.I. were that you would not discover the principles of flight by looking at a bird's feather under an electron microscope.
It’s fascinating, but we aren’t going to understand intelligence this way. Emergent phenomena are part of complexity theory, and we don’t have any maths for it. Our ignorance in this space is large.
When I was young, I remember a common refrain being “will a brain ever be able to understand itself?”. Perhaps not, but the drive towards understanding is still a worthy goal in my opinion. We need to make some breakthroughs in the study of complexity theory.
Particle physics works in a similar way, but instead of attaching electrodes, you shoot at them with guns and then analyze trajectories of the fragments.
The cheap monkey headset works in a similar way: monkeys just essentially continue to analyze trajectories of medieval cannon balls in the LHC and to count potatoes in the form of bytes.
>> The sample was immersed in preservatives and stained with heavy metals to make the cells easier to see.
Try immersing your brain in preservatives and staining it with heavy metals, and see whether you'd still be able to write a comment like the one above.
No wonder that monkey methods continue to unveil monkey cognition.
Note the part where the biologists tell him to make an electron microscope that's 1000X more powerful. Then note what technology was used to scan these images.
I think it's actually "What you should do in order for us to make more rapid progress is to make the electron microscope 100 times better", and the state of the art at the time was "it can only resolve about 10 angstroms", or, I guess, 1 nm. So 100x better would be 0.1 angstrom / 0.01 nm.
Is there a name for the somewhat uncomfortable feeling caused by seeing something like this? I wish I could better describe it. I just somehow feel a bit strange being presented with microscopic images of brain matter. Is that normal?
For me the disorder of it is stressful to look at. The brain has poor cable management.
That said I do get this eerie void feeling from the image. My first thought was to marvel how this is what I am as a conscious being in terms of my "implementation", and it is a mess of fibers locked away in the complete darkness of my skull.
There is also the morose feeling from knowing that any image of human brain tissue was once a person with a life and experiences. It is your living brain looking at a dead brain.
Is it the shapes, similar to how patterns of holes can disturb some people? Or is it more abstract, like "unknowable fragments of someone's inner-most reality flowed through there"? Not that I have a name for it either way. The very shape of it (in context) might represent an aspect of memory or personality or who knows what.
> "unknowable fragments of someone's inner-most reality flowed through there"
It's definitely along these lines. Like so much (everything?) that is us happens amongst this tiny little mesh of connections. It's just eerie, isn't it?
Sorry for the mundane, slightly off-topic question. This is far outside my areas of knowledge, but I thought I'd ask anyhow. :)
It makes me think humans aren't special, there is no soul, and consciousness is just a bunch of wires, like computers. Seriously, seeing that the ENTIRETY of human experience, love and tragedy and achievement, is just electric potentials transmitted by those wiggly cells extinguishes any magic I once saw in humanity.
If the wires make consciousness then there is consciousness. The substrate is irrelevant and has no bearing on the awesomeness of the phenomena of knowing, experiencing and living.
I dunno, the whole of human experience is what I expect of a system composed of 100,000,000,000,000 entities, with quintillions of interconnections, interacting together simultaneously on a molecular level. Happiness, sadness, love and hate can (obviously) be described and experienced with this level of complexity.
I'd be much more horrified to see our consciousness simplified to anything smaller than that, which is why any hype for AGI because we invented chatbots is absolutely laughable to me. We just invented the wheel and now hope to drive straight to the Moon.
Anyway, you are seeing a fake three dimensional simplification of a four+ dimensional quantum system. There is at least one unseen physical dimension in which to encode your "soul"
I’m not religious but it’s as close to a spiritual experience as I’ll ever have. It’s the feeling of being confronted with something very immediate but absolutely larger than I’ll ever be able to comprehend
When I did fetal pig dissection, nothing bothered me until I got to the brain. I dunno what it is, maybe all those folds or the brain juice it floats in, but I found it disconcerting.
https://h01-release.storage.googleapis.com/gallery.html
I count seven.
I'm particularly fond of the "Egg shaped object with no associated processes". :)
What if it's a "wireless" device?
http://h01-dot-neuroglancer-demo.appspot.com/#!gs://h01-rele...
To think that’s a single cubic millimeter of our brain; look at all those connections.
Now I understand why crows can be so smart, walnut-sized brain be damned.
What an amazing thing brains are.
Possibly the most complex things in the universe.
Is it complex enough to understand itself though? Is that logically even possible?
And then there are LLMs, which work at a very crude level: string tokens and emitted probabilities. The sheer number of things that work in coordination to make biology work! In-f*king-credible!
The human brain does what it does using about 20W. LLM power usage is somewhat unfavourable compared to that.
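For scale (the serving hardware here is purely my assumption, for illustration):

    # 20 W brain vs. an assumed 8-GPU inference node at ~700 W per GPU.
    brain_w = 20
    node_w = 8 * 700
    print(f"node draws ~{node_w / brain_w:.0f}x the brain's power budget")
    # ~280x, before counting cooling and the rest of the datacenter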
And yet somehow it's also infinitely less useful than a normal person is.
Pdf: “Protein molecules as computational elements in living cells - Dennis Bray” https://www.cs.jhu.edu/~basu/Papers/Bray-Protein%20Computing...
So we might need significantly less brain matter for general intelligence.
Don't know about upload though...
The car's engine, transmission and wheels require no muscles or nerves.
On the second point, the failure of OpenWorm to model the very well-mapped-out C. elegans (~300 neurons) says a lot.
Yes, we figured out how to build aircraft. But an aircraft can't be compared to a bird flying, in terms of either efficiency or elegance.
The same argument holds for "AI" too. We don't understand a damn thing about neural networks.
There's more: we don't even care to understand them, as long as understanding is irrelevant to exploiting them.
I think we all do every day
We have made some progress, it seems. Googling, I see "up to 0.05 nm" for transmission electron microscopes and "less than 0.1 nanometers" for scanning: https://www.kentfaith.co.uk/blog/article_which-electron-micr...
For comparison, the distance between hydrogen nuclei in H2 is 0.074 nm, I think.
You can see the shape of molecules, but individual atoms are still a bit fuzzy: https://cosmosmagazine.com/science/chemistry/molecular-model...