There is also another model of memory which I totally love. In it, memory is not stored; the brain is more like an antenna that can retrieve information from the space-time coordinates where events actually took place.
Because everything is constantly moving around something, each event in the universe probably has a unique location, a real physical place where it happened.
So, the idea is that as we're moving through space we're leaving a trail, as the events are (for lack of a better word) “printed” in the fabric of space. Then, as space is apparently not empty at all but filled with Planck-scale micro wormholes entangling all things into one universal neural network, our brains should be perfectly capable of tracing back our unique trails through space and retrieving information through those micro wormholes.
As a developer, I think it would make the most sense to only save references rather than trying to store all events inside everyone's brains. That would be a huge waste of resources and just damn stupid.
And probably the way we humans have built our computers also reflects how the universe really works, because after all we are bits of the universe doing whatever the universe is doing.
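To make the reference-versus-copy idea concrete, here's a toy sketch in the programming sense only (the classes and fields are invented for illustration, not a claim about how brains actually work):

```python
from dataclasses import dataclass

# Toy sketch only: a full copy of an "event" vs. a lightweight reference to
# where/when it happened. All names here are invented for illustration.

@dataclass
class Event:
    description: str
    sensory_data: bytes                # imagine this being enormous

@dataclass
class EventReference:
    spacetime_coordinates: tuple       # (x, y, z, t) -- just a stand-in

def remember_by_copy(event: Event) -> Event:
    # "Store everything in the brain": duplicates all of the heavy data.
    return Event(event.description, bytes(event.sensory_data))

def remember_by_reference(coords: tuple) -> EventReference:
    # "Store only a pointer": constant size, but recall later requires some
    # external mechanism to dereference it.
    return EventReference(spacetime_coordinates=coords)

event = Event("first day of school", b"\x00" * 10_000_000)   # ~10 MB of "experience"
copy = remember_by_copy(event)                                # costs another ~10 MB
ref = remember_by_reference((1.0, 2.0, 3.0, 946684800.0))     # a handful of bytes
print(len(copy.sensory_data), ref.spacetime_coordinates)
```

The catch, of course, is the dereferencing step, which is exactly the part the micro-wormhole hand-waving above is supposed to cover.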
This is quantum-woo-fueled neo-Cartesian dualist bollocks. Sorry, not sorry. For starters, if events anywhere in space and at any point in time are accessible as "memories", the laws of physics that enable that shouldn't limit the application to memory alone. The same physical mechanism would have to enable telepathy and even time travel. Secondly, it violates the "no local hidden variables" results from recent Bell-inequality QM experiments. Third, it requires long-term stable room-temperature entanglement basically everywhere (HAHAHA LOL NO). Fourth, it flies in the face of what we do know about how neurons compute things vis-à-vis signal spikes and action potentials. Fifth, it fails to account for the numerous, reproducible experiments demonstrating the creation of erroneous memories made from whole cloth. Need I go on?
I can't believe this comment isn't buried. It's one thing to engage in a bit of rank speculation outside your specialty, but this is straight-up crackpot science.
This would be a much more valuable and informative comment if it weren't so wrapped up in self-aware rude delivery. Saying so as someone who is prone to making that mistake.
> As a developer, I think it would make the most sense to only save references rather than trying to store all events inside everyone's brains. That would be a huge waste of resources and just damn stupid.
> And probably the way we humans have built our computers also reflects how the universe really works, because after all we are bits of the universe doing whatever the universe is doing.
I think this is way too simplistic. Just like planes don't fly like birds, computers don't work like "the universe".
We don't know much about the brain at all, and starting to think about it in terms of "computer" logic seems very wrong to me. Computers are our very crude and poor attempt at replicating brains/intelligence; by definition they're built on ultra-simplified principles that our own brains came up with. Using computer terms to define the brain/universe is closing a loop that doesn't exist.
It only sounds like a waste because you (we) are constrained by our own brains; the universe doesn't care about our ability to understand or make sense of it. Something might sound unoptimised to you (us) but might be the only way allowed by physics.
Naming and defining computer parts using human/brain-related terms was an error, imho; it confuses people and makes them think we are able to replicate (or even understand) things that we have no clue about. They're very poor analogies.
> That would be a huge waste of resources and just damn stupid.
Isn't the entire universe a huge waste of resources and just damn stupid?
Yeah, I like to think this way sometimes too, but as far as we know a "reference" doesn't make sense in the brain - it doesn't have access to external storage, so in the brain context it would just be like neurons connecting to a single neuron. Aside from something like a URL, a computer can't reference anything outside of itself, and I don't think we have a brain equivalent.
> “printed” in the fabric of space. Then, as space is apparently not empty at all but filled with Planck-scale micro wormholes entangling all things into one universal neural network, our brains should be perfectly capable of tracing back our unique trails through space and retrieving information through those micro wormholes.
That's fun enough if that's intended as some form of recreational speculative fiction/sci-fi. I'm all about that.
But you said "model", without qualification, so it's hard for me to tell from your comment the extent to which you want to put this forward as something that stands a chance of really being true.
In Buddhist philosophy the mind is a sixth sense organ, perceiving thoughts. The Lankavatara Sutra from Mahayana Buddhism goes into some detail about how karma is formed, past-life memories, and the like, in the “storehouse consciousness”, which is basically encoded into the world around us by our actions. I thought your post was very beautiful and similar in spirit to a lot of ancient philosophy on consciousness.
That's potentially a good starting point for a sci-fi setting.
That's unlikely to have any resemblance to how brains work though. For one, we would be lacking the mechanism to do such a 'retrieval'. We can also influence memory formation.
Say you binge drink. You are temporarily unable to form memories. We can pinpoint the exact area affected. Unless alcohol is somehow able to influence the micro-wormholes, it shouldn't affect memories at all.
> As a developer, I think it would make the most sense to only save references rather than trying to store all events inside everyone's brains.
Keep in mind that this comment is in no way even a tiny bit compatible with our current understanding of how the brain works. Treat it as a good sci-fi plot.
This is a very interesting idea. My criticism would be that our memories are such imperfect recollections of the real thing that it's hard for me not to see them as a "copy" rather than a "pointer". Signal loss through the fabric of space?
Do you have any scientific references to back this up? I agree that there could be something to your theory.
References make sense to me as well, but I also agree with the conclusion of this article, that memories are distributed. If you put the two together, then I think it works well. Those could be distributed references that, when fired together, activate a specific memory somewhere else in space-time.
This reminds me of the Stephen Baxter book "The Light of Other Days". A wormhole device was used to capture video from any spacetime coordinates. It was a nice book.
I've never really loved contextual fear conditioning as the default for "memory". Another way of describing this result is that "even a mild trauma scars the entire brain."
In all seriousness, though, I'd say that there's a broad dispute in the field between those that believe that memory involves dynamic neural activity involving a multiplexed circuit of neurons that encode many different memories and those (currently led by the Tonegawa lab) that think that memories are associated with individual neurons.
I'll look into why a group believes memories are associated with individual neurons, as my personal intuition (edit: and this article as well) tells me the opposite.
For example, if I try to remember the name of an actor, I can have hints from my brain telling me "their name starts with an _m_" and "it's a man who played in a movie from the 90's", etc.
A memory feels like it cannot be isolated; it always feels like a composition of different elements that, when put together, describe one thing, or many. The more elements (in that case maybe single neurons, or very small groups of neurons) are activated, the clearer the memory. A Venn diagram of sorts, where the overlap gets smaller and smaller. This would explain the "this makes me think of…" process, since a certain number of these elements from one memory will overlap with the elements of another one.
This is completely personal and completely unscientific. And maybe this is neuroscience 101… In that case sorry for stating the obvious.
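For what it's worth, here's a toy version of that narrowing-overlap picture in code, with cues and names made up purely for illustration:

```python
# Toy sketch: recall as intersecting sets of cues. Each remembered item is
# tagged with features; the more cues you activate, the smaller the overlap.

memories = {
    "Matt Damon":     {"starts_with_m", "male", "movie_from_the_90s", "action"},
    "Meg Ryan":       {"starts_with_m", "female", "movie_from_the_90s", "romcom"},
    "Michael J. Fox": {"starts_with_m", "male", "movie_from_the_80s"},
}

def recall(active_cues: set) -> list:
    """Return the memories consistent with every currently active cue."""
    return [name for name, feats in memories.items() if active_cues <= feats]

print(recall({"starts_with_m"}))                                 # 3 candidates
print(recall({"starts_with_m", "male"}))                         # 2 candidates
print(recall({"starts_with_m", "male", "movie_from_the_90s"}))   # 1 candidate
```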
Do we actually know how memory is encoded, specifically? Is it a specific firing of a neuron (or a neuron ensemble)? Is it the specific answer you get from exciting the neuron(s)? Or any combination thereof?
IANA neuroscientist, so please correct me if I'm wrong or oversimplifying:
IIRC one of the main cellular mechanisms thought to underlie memory is LTP (long-term potentiation) in glutamatergic neurons. There are different kinds of glutamate receptors, but we're interested in two subtypes of ion-channel-based (ionotropic) glutamate receptors: AMPA and NMDA.
AMPA is sort of your main receptor for propagating signals: it's activated first.
NMDA is much more complicated in that it requires binding both glutamate and a co-agonist, glycine, for the ion channel to open. But the channel can also be blocked by Mg²⁺ ions, and that block is voltage-dependent: the positively charged Mg²⁺ gets expelled when the neuron depolarizes, which makes NMDA a coincidence detector for presynaptic and postsynaptic activity. Once NMDA is open, the resulting calcium influx has the downstream effect of upregulating AMPA receptors, making the synapse more sensitive to future transmission and hence serving as a kind of "memory" of previous signals.
I think the open question is more about understanding how memory as we know it emerges out of networks of these neurons, and less about the basic cellular mechanisms. And this is probably only one mechanism of LTP; then you have its opposite, long-term depression, which is also involved.
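Here's a cartoon of that coincidence-detection story in code, just to make the moving parts explicit. The thresholds and increments are arbitrary made-up numbers; this is nowhere near a real biophysical model:

```python
# Toy LTP sketch: NMDA acts as a coincidence detector. It only matters when
# (a) glutamate is released AND (b) the postsynaptic cell is already
# depolarized enough to expel the Mg2+ block. When both hold, the AMPA
# "weight" is potentiated, so the same presynaptic input drives the cell
# harder next time. All values are arbitrary illustration numbers.

class Synapse:
    def __init__(self):
        self.ampa_weight = 1.0   # baseline AMPA conductance (arbitrary units)

    def transmit(self, glutamate_released: bool, membrane_potential_mv: float) -> float:
        if not glutamate_released:
            return 0.0
        epsp = self.ampa_weight                              # AMPA responds first
        mg_block_relieved = membrane_potential_mv > -50.0    # depolarized enough
        if mg_block_relieved:                                # NMDA opens: "pre AND post active"
            self.ampa_weight += 0.2                          # downstream upregulation of AMPA (LTP)
        return epsp

syn = Synapse()
print(syn.transmit(True, -70.0))  # resting cell: response 1.0, no potentiation
print(syn.transmit(True, -40.0))  # depolarized: response 1.0, but the weight grows
print(syn.transmit(True, -70.0))  # later: response 1.2 -- the synapse "remembers"
```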
Of course, in science the answer is always more complicated than what can be gleaned from the hand-wavey explanations of some programmer on HN :)
Their paper assumes the conclusion in its premise.
It all depends on the accuracy of their fluorescent genetic memory correlate. Just because this genetic marker is necessary for memory encoding doesn't mean it is parsimonious and affects only memory-encoding regions.
An in-depth presentation of the properties of the gene would solidify or weaken their findings; for now, since I'm lazy, I will suspend my judgment.
Their study doesn't look specifically at a storage site, but at that along with the sites involved in processing a new memory for storage and the processes involved in recalling a memory from storage.
When you see something, that visual input is processed in a certain area that, IIRC, is the same area that fires when recalling a visual memory.
When a memory is recalled, it is processed by the same regions that interpreted it initially upon first experiencing the stimulus. Compare that to a computer pulling a PNG from storage, loading it into memory, calling the necessary drivers to present it onscreen, etc.
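To caricature the contrast in code (the file name and functions are hypothetical, purely to show "fetch a stored artifact" versus "re-run the same pathway"):

```python
# Computer-style recall: a finished artifact sits in storage; recall just
# fetches the bytes and hands them to a separate display pipeline.
def computer_recall(path: str) -> bytes:
    with open(path, "rb") as f:        # e.g. "sunset.png" -- hypothetical file
        return f.read()                # the stored bytes ARE the memory

# Brain-style recall, per the comment above: there is no finished artifact;
# recall drives much of the same processing machinery that ran at encoding.
def perceive(visual_input: str) -> str:
    return f"interpretation of {visual_input}"            # stand-in for the visual areas

def brain_recall(cue: str) -> str:
    return perceive(f"internally reconstructed {cue}")    # same pathway, driven from within

print(brain_recall("sunset over the lake"))
```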
I remember telling some people that this is how it works back in 2003 or so.
I read a book that said the architecture of the brain was like a polyhedron, where the vertices represent different processing modes (visual, auditory, linguistic, emotional, ...) and the edges are bundles of fibers that go down into the white matter and connect processing areas in the grey matter of the cortex.
If you think about a "dog", those connecting fibers activate images of the dog, the sounds the dog makes, the motor program to pet the dog, the feeling of the fur, etc...
Agree. Our experience of the world isn't a screen painted with what the world looks like, but instead a collection of affordances and expectations.
Have you ever picked up an empty milk jug and yanked it super fast? Did you choose to yank it? Or did the program called "pick up heavy milk jug" run instead of "pick up empty milk jug"? Feeling the confusion over how light the jug was causes the program "confusion" to run, to help you look for an explanation of why this object is so different from what you expected.
The reasoning program creates the feeling of resolution once you arrive at, "I believed it was full but it wasn't", but prior to that resolution, if you really pay attention, you are just standing there for a moment puzzled about why your arm is moving so fast.
> cells activated by naturally recalling the unpleasant memory
> The maps highlighted many regions expected to participate in memory, but also many that were not
This feels like a study from decades ago, as if it were done in complete ignorance of the fact that brains store event descriptors and entity descriptors separately. "Program data" and "character data", to use an analogy. Of course that sort of unselective "memory make marker go brr" analysis will show up all over the place, because the brain is touching a dozen or more different types of data at once. If you want usefully specific results, you have to use usefully specific methods.
Disclaimer: I know brain != deep learning neural nets. We do have a lot of evidence that the brain is _some_ type of network with analogue qualities.
Does it even make sense to say that a memory is stored somewhere in a specific region, if the brain is an analogue network? A property of analogue networks is that all nodes make a contribution, even if many of the contributions are infinitesimally small. The equivalent for deep learning is that information is stored in the weights and any given output is a function of all the weights. Some weights are more important than others in producing the output, but the point still stands.
If I take a pretrained ImageNet model and just make it wider with nodes with random weights and biases, I can still feed the network an image and get a reasonably correct label as an output. In this example I could obviously point at the nodes from the original network and say that those do the image recognition; we know the rest of the network doesn't contribute anything but noise.
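A tiny numpy sketch of that thought experiment, with the simplifying assumption that the extra nodes' outgoing weights are zero so they are pure bystanders (with genuinely random outgoing weights they would add a bit of noise instead):

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny "trained" 2-layer network: 4 inputs -> 3 hidden -> 2 outputs.
W1 = rng.normal(size=(3, 4)); b1 = rng.normal(size=3)
W2 = rng.normal(size=(2, 3)); b2 = rng.normal(size=2)

def forward(x, W1, b1, W2, b2):
    h = np.maximum(0.0, W1 @ x + b1)   # ReLU hidden layer
    return W2 @ h + b2

# Widen the hidden layer with 5 extra units that have random incoming
# weights but zero outgoing weights -- they fire, but nothing listens.
extra_in = rng.normal(size=(5, 4)); extra_b = rng.normal(size=5)
W1_wide = np.vstack([W1, extra_in]); b1_wide = np.concatenate([b1, extra_b])
W2_wide = np.hstack([W2, np.zeros((2, 5))])

x = rng.normal(size=4)
print(np.allclose(forward(x, W1, b1, W2, b2),
                  forward(x, W1_wide, b1_wide, W2_wide, b2)))   # True
```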
On a technical level of course the whole network contributed to the output. In everyday reasoning and language however we usually focus on the parts that matter to a reasonable degree and ignore the rest. A sack of rice falling over in 2005 might have contributed to the 2008 financial crisis. With the world being an analog network of particles it even seems obvious that that sack of rice must have had some infinitesimal influence one way or the other, it's just more practical to ignore it.
I don't have any clear answers for you, but one interesting note here is that a recent paper showed that it took a deep neural net to be able to simulate the "IO" of a single cortical neuron. So that should give you some idea of the complexity involved compared to artificial neural nets, Re: your disclaimer.
> A temporally convolutional DNN with five to eight layers was required to capture the I/O mapping of a realistic model of a layer 5 cortical pyramidal cell (L5PC).
> When NMDA receptors were removed, a much simpler network (fully connected neural network with one hidden layer) was sufficient to fit the model.
Some types of artificial neural networks are biologically plausible. The layered structure of the cerebral cortex reflects the layers of an artificial network.
The difference being, of course, that as opposed to a mere 16-bit parameter, a cortical neuron is a complex "nanomechanical" machine capable of significant computation in its own right.
https://www.sciencedirect.com/science/article/abs/pii/S08966...
> When NMDA receptors were removed, a much simpler network (fully connected neural network with one hidden layer) was sufficient to fit the model.