pizza · 8 years ago
Relevant:

A compressed sensing perspective of hippocampal function (2014): Hippocampus is one of the most important information processing units in the brain. Input from the cortex passes through convergent axon pathways to the downstream hippocampal subregions and, after being appropriately processed, is fanned out back to the cortex. Here, we review evidence of the hypothesis that information flow and processing in the hippocampus complies with the principles of Compressed Sensing (CS). The CS theory comprises a mathematical framework that describes how and under which conditions, restricted sampling of information (data set) can lead to condensed, yet concise, forms of the initial, subsampled information entity (i.e., of the original data set). In this work, hippocampus related regions and their respective circuitry are presented as a CS-based system whose different components collaborate to realize efficient memory encoding and decoding processes. This proposition introduces a unifying mathematical framework for hippocampal function and opens new avenues for exploring coding and decoding strategies in the brain.

[0] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4126371/
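For anyone who wants to see the core CS idea in action, here's a toy sketch (my own, purely illustrative and not from the paper; the sizes and the OMP solver are arbitrary choices): a sparse signal is recovered from far fewer random linear measurements than its length.

```python
# Toy compressed-sensing demo: recover a k-sparse signal of length n
# from only m << n random linear measurements.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 200, 60, 5                          # signal length, measurements, sparsity

x = np.zeros(n)                               # the "original data set" (k-sparse)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)      # random measurement matrix ("restricted sampling")
y = A @ x                                     # only m = 60 numbers are observed

# Sparse recovery; basis pursuit / L1 minimization would work just as well here.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(A, y)
x_hat = omp.coef_

print("reconstruction error:", np.linalg.norm(x - x_hat))   # ~0 with high probability
```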

mh-cx · 8 years ago
Reading about rewards in an AI context makes me wonder why current AI focuses only on the network-topology aspect of natural intelligence.

What about neurotransmitters like dopamine or serotonin? Could they (or, more specifically, their effects) be key to an AI that feels more "natural"?

eru · 8 years ago
Current AI approaches don't have that much to do with the biological brain. Despite the name 'neural networks', which we have history to thank for, it's better to think of them as just long compositions of functions with a lot of parameters one can tweak.

Historically, researchers looked at activation functions that are at least somewhat justified by nature, like the sigmoid. More recently, people have focused on the mathematical aspects and figured out that the rectified linear unit `ReLU(x) = max(x, 0)` works just as well, if not better, for most applications.
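To make the sigmoid-vs-ReLU point concrete, a minimal sketch (mine, just illustrative): the sigmoid's gradient vanishes for large |x|, while the ReLU's gradient is exactly 1 wherever the unit is active, which is part of why it trains so well.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(x, 0.0)

x = np.array([-6.0, -1.0, 0.5, 6.0])
print(sigmoid(x), sigmoid(x) * (1.0 - sigmoid(x)))  # activations and gradients; gradient -> 0 for large |x|
print(relu(x), (x > 0).astype(float))               # gradient is exactly 1 wherever the unit is active
```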

Maxout networks (https://arxiv.org/pdf/1302.4389.pdf) are another interesting development in that direction.

There's still lots we can learn from biology, of course; but the last decade's rapid advances in deep learning barely relate to real brains, if at all.

mannigfaltig · 8 years ago
> Despite the name 'neural networks', which we have history to thank for, it's better to think of them as just long compositions of functions with a lot of parameters one can tweak.

Actually, the name is not so misleading, because `ReLU(Wx + b)` is an approximation of the response of a large ensemble of integrate-and-fire neurons in terms of the firing rate [1] (it models populations of pyramidal neurons in the cortex). The ReLU activation function basically approximates a logarithmic activation function [2]. Obviously many kinds of computations are left out in this model. For example, in pyramidal neurons the integration of incoming spikes is often sublinear rather than linear; the branches that receive signals (the dendrites) can themselves perform complex computations such as spatial clustering and logical operations, basically sending small (dendritic) spikes forward and backward within a neuron [3]; short-term plasticity is left out entirely, and so are the countless types of interneurons, which inhibit nearby neurons, measure mean activity across populations of neurons, etc.

[1] https://web.stanford.edu/group/brainsinsilicon/documents/Won...

[2] https://www.utc.fr/~bordesan/dokuwiki/_media/en/glorot10nips...

[3] https://neurophysics.ucsd.edu/courses/physics_171/annurev.ne...
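To caricature the sublinear-integration point above in code (a toy sketch of mine; the per-branch square root is an arbitrary stand-in for a saturating dendritic nonlinearity, not a real dendrite model):

```python
import numpy as np

rng = np.random.default_rng(1)
n_inputs, n_branches = 12, 3
x = rng.random(n_inputs)          # presynaptic firing rates
w = rng.random(n_inputs)          # synaptic weights
b = -1.0                          # bias / threshold term

# Standard ANN unit: linear summation of all inputs, then ReLU.
point_neuron = max(w @ x + b, 0.0)

# Caricature of sublinear dendritic integration: inputs are grouped onto
# branches, each branch saturates (sqrt here, purely illustrative), and only
# then are the branch outputs summed at the soma.
branches = np.array_split(np.arange(n_inputs), n_branches)
dendritic = max(sum(np.sqrt(w[idx] @ x[idx]) for idx in branches) + b, 0.0)

print(point_neuron, dendritic)
```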

orbifold · 8 years ago
Actually, ReLU(x) = max(x, 0) can be thought of, after a change of parameters, as the firing-rate response of a leaky integrate-and-fire neuron, at least for a certain range of parameters. So it is not completely far-fetched, but of course nature does not use rate coding exclusively.
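A quick sketch of that f-I curve (my own parameter values, purely illustrative): the rate is zero below threshold and then rises monotonically in a roughly threshold-linear, ReLU-like way over a range of inputs.

```python
import numpy as np

def lif_rate(I, tau=0.02, R=1.0, v_th=1.0, v_reset=0.0, t_ref=0.002):
    """Firing rate (Hz) of a noiseless leaky integrate-and-fire neuron
    driven by a constant input current I (parameters are illustrative)."""
    drive = R * np.asarray(I, dtype=float)
    rate = np.zeros_like(drive)
    supra = drive > v_th                          # below threshold the neuron never fires
    T = tau * np.log((drive[supra] - v_reset) / (drive[supra] - v_th))
    rate[supra] = 1.0 / (t_ref + T)               # refractory period + time to reach threshold
    return rate

I = np.linspace(0.0, 5.0, 11)
print(np.round(lif_rate(I), 1))    # zero below threshold, then an approximately linear ramp
```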
derefr · 8 years ago
Sure, but (our models of the functions of) neurotransmitters don't have that much to do with biology, either. For example, the local level of dopamine in a neuron is understood to be something like an encoding of a confidence interval associated with the computation. Where synapses in the sensory (NMDA-excited) and model-predictive (AMPA-excited) regions of the brain come into contact, the dopamine in each region bubbles up as a bunch of coefficients that adjust how comparisons between data and predictions shake out, temporarily biasing whether "branches" to error states (i.e. the activation of higher-layer neurons) are taken.

This works just as well whether you are modelling biological synapses with plasticity, or compositions of functions with training via backpropagation.

The only real difference is that NNs aren't usually "physically embodied" (i.e. embedded in a metric space with radiative signalling to non-connected but "local" neighbours) in the way they'd have to be to get the "area-effect excitation" effects of neurotransmission.

kortex · 8 years ago
Glutamate is the primary "connection" neurotransmitter in the brain ("It is used by every major excitatory function in the vertebrate brain, accounting in total for well over 90% of the synaptic connections in the human brain."). DA and SER are far more sparse and unevenly distributed, concentrating in areas with heavy interconnectivity, such as the limbic system and hippocampus. These monoamines are thought to primarily serve in a modulatory role, affecting performance and stability of neural connections in general, in an on-line fashion.

By comparison, ANNs have many meta-parameters (parameters besides the actual weights), such as learning rate and momentum, affecting stability and performance. Being able to tune these on the fly would likely improve performance.
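As a toy illustration of tuning such meta-parameters on the fly (my own sketch; the adaptation rule is ad hoc and is only an analogy to neuromodulation, not a model of it):

```python
import numpy as np

# Minimize a simple quadratic with SGD + momentum while a crude "modulatory"
# rule adapts the learning rate online: shrink it when the loss goes up,
# grow it slowly while the loss keeps improving.
rng = np.random.default_rng(0)
w = rng.normal(size=5)
target = np.ones(5)

lr, momentum, velocity = 0.1, 0.9, np.zeros(5)
prev_loss = np.inf
for step in range(200):
    grad = w - target                                  # gradient of 0.5 * ||w - target||^2
    loss = 0.5 * np.sum(grad ** 2)
    lr = lr * 0.5 if loss > prev_loss else lr * 1.02   # the "modulatory" signal
    prev_loss = loss
    velocity = momentum * velocity - lr * grad
    w = w + velocity

print(prev_loss, lr)   # the loss should be very small by the end
```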

The "ANNs are not like the brain!" camp like to point out that the ANN is a mathematical model bearing (on one level) little similarity to the human brain, but I think this glosses over the fact that key features (e.g. layers of connections, CNNs) were directly biologically inspired, and we can still glean a lot by studying the best example in our world of a learning system.

mh-cx · 8 years ago
Downvoters please explain what's so stupid about my question.
_0ffh · 8 years ago
I didn't downvote, but I have an inkling, so I'll share what some of those people may be thinking about your comment. I'll be hyper-critical here, because you specifically asked why someone might have downvoted, so don't take it personally.

"Reading about rewards in AI context makes me wonder, why current AI only focuses on the network topology aspect of natural intelligence."

Well, it doesn't. Not at all. It looks as if you are conflating one very specific AI technique (deep neural networks) with all of AI. That one technique is currently insanely popular and has been widely reported and hyped in the mainstream, and that makes it look as if your education in the field did not go beyond reading a couple of mainstream media articles. Also, there is a whole huge subfield of AI that is actually defined by its focus on learning from rewards, which you seem not to be aware of.

"What about the neurotransmitters like dopamine or serotonine? Could they (or more specific: their effects) be key to an AI that feels more “natural“?"

Not applicable as written; too broad and unspecific. "Not even wrong." It looks like someone with superficial knowledge just throwing a few sticks into the blue, in the hope that someone might be provoked into a reply that competently tries to make actual sense of it.

kortex · 8 years ago
I suspect it comes across as slightly naive to this audience. The comparison between ANNs/AI and the actual human brain comes up a lot in the general media, which leads to lots of misconceptions about AI/ANNs/deep learning, and that (in my opinion) makes people in both neuroscience and machine learning a bit salty.

Inescapably, many of the high-level ideas in ANN-based machine learning ("connect a bunch of nodes", "integrate and fire", "convolutional neurons / receptive fields") are cribbed directly from neuroscience.

Spiking Neural Networks (see SpiNNaker) and Hierarchical Temporal Memory (see numenta/nupic) aim more directly for biological plausibility.

*ANNs == artificial neural networks

pizza · 8 years ago
I upvoted you, and I'm actually glad you asked the question despite the downvotes. I imagine those who downvoted are of the opinion that you're essentially ascribing too much function to the neurotransmitters themselves, as opposed to the contexts in which the presence of a neurotransmitter is used as a signal.

Take a look at this brain schematic here [0]. I vehemently disagree with some of the presented-as-fact ideological aspects of the article, but the depiction of neurotransmitters working differently in different brain regions is illustrative.

i.e., a dopamine molecule does not already constitute a meaningful coordinate system -- it doesn't have a meaning in and of itself, such as being "the reward molecule" (or the euphoria / "feel-good" / "addiction trap" / excitement molecule) in the brain. Rather, the dopamine molecule has locally-interpreted semantics.

To make this more concrete with examples of different aspects of dopamine alone:

- "dopamine is known to encode the confidence level of motor predictions" - Scott Alexander, psychiatrist [1]

- saturating the synaptic cleft between neurons with released (or non-recycled) dopamine is part of some euphoric processes (e.g. drugs, orgasm, gambling) in the striatum

- 'excess' dopamine (not a simple thing to determine for an individual, imo) is associated with psychosis (not necessarily causally), and antipsychotics are often designed to blunt dopamine transmission

However,

- dopamine transmission is also fundamentally necessary for muscle movement, in the motor cortex

- additionally, the conscious sensation of dopamine is highly dependent upon the frequency of dopaminergic neuron spikes

- Parkinson's disease is (iirc, said to be) the result of the degeneration of dopaminergic neurons; symptoms include shakiness and problems sleeping

What does that mean for AI? Well, if we want to go beyond network topology and start to use new varieties of component substrates, the individual types of signals they represent must be fairly meaningless outside of a specific context, or they will not be well suited for learning from evidence and generalizing over data. Check out [2], [3], and [4] for more.

[0] http://www.nationalgeographic.com/magazine/2017/09/the-addic...

[1] http://slatestarcodex.com/2017/09/12/toward-a-predictive-the...

[2] https://www.edge.org/conversation/stephen_wolfram-ai-the-fut... (not that i am a huge fan of wolfram :P)

[3] https://www.edge.org/responses/what-scientific-concept-would..., ctrl-f "biological"

[4] https://en.wikipedia.org/wiki/Multi-state_modeling_of_biomol...

andrepd · 8 years ago
Edit instead of replying to your post.
miguelrochefort · 8 years ago
People assume you believe your idea is novel when it's not.
dontreact · 8 years ago
I don't know how this theory fits with patient H.M., who was still capable of many types of learning and planning despite having his entire hippocampus removed: https://en.wikipedia.org/wiki/Henry_Molaison At the least, such a broad, sweeping theory of how a brain region works should address how it fits with the evidence from the most famous (and most deeply studied) lesion patient, who did not have said brain area.
saurik · 8 years ago
> "The researchers found, to their surprise, that half of H.M.'s hippocampus had survived the 1953 surgery..."
SubiculumCode · 8 years ago
As much as I love the hippocampus (topic of my dissertation), I had to constantly remind myself that there is a lot of other brain.
eli_gottlieb · 8 years ago
Hmmm. I'll need to read more closely later and make a real comment, but since when did Gershman or anyone at DeepMind get on the predictive-coding train?

Nonetheless, this kind of algorithm sounds like exactly what I expect from a predictive brain that cares fundamentally about trajectories.

mannigfaltig · 8 years ago
What are your thoughts on this paper?
m3kw9 · 8 years ago
Is the structure the same throughout the areas where we think, in general? Otherwise there could be some sort of programming involved to get the brain to think predictively.
shuma · 8 years ago
Numenta has been doing this for many years now: https://numenta.com/
eli_gottlieb · 8 years ago
Can you and others please stop reading actual neuroscience and saying, "oh, Numenta did this already"?

* Firstly, you're reading a neuroscience paper. It's biology! Numenta is not biology.

* Secondly, Numenta makes a lot of big claims and then never publishes anything.

This has gotten to be as if we were responding "Simpsons did it!" to jokes that, in fact, The Simpsons had never aired, based on the supposition that since they once did a joke with vaguely similar vocabulary, it must have been the same as this joke.

neurokim · 8 years ago
This link should work for full text, final version, no paywall

http://rdcu.be/wnSU

rayuela · 8 years ago
Just $225 to read the actual paper... what a steal! If someone would so kindly share that pdf on here I'd greatly appreciate it.

Nvm found it: https://www.biorxiv.org/content/early/2016/12/28/097170

agumonkey · 8 years ago
ah well good catch

usually I try digging up the authors, in case preprints are on their own webpages

and just in case people miss it, the latest version is also there https://www.biorxiv.org/content/early/2017/07/27/097170

itschekkers · 8 years ago
There's a website called Sci-Hub (which may be illegal) that you can use to get almost any paper in a STEM field, especially from major journals. You type www.sci-hub.cc/ and then paste the DOI of the paper, and it takes you straight to the PDF!
SubiculumCode · 8 years ago
Or you can contact the researchers; we are generally happy to share our research via email. Or check their ResearchGate.
duozerk · 8 years ago
Also, if it doesn't work (happens from time to time) or disappears, they have a Tor mirror that works consistently: scihub22266oqcxt.onion.
neurokim · 8 years ago
Also final version on readcube: http://rdcu.be/wnSU