dr_dshiv · 6 years ago
Hebb actually talked about causation, not synchrony (firing together):

"When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A ‘s efficiency, as one of the cells firing B, is increased”

Synchrony is extremely important, particularly for the formation of cortical columns and for neural pruning. But in spike-timing-dependent plasticity, where the synapse is potentiated if the presynaptic neuron fires just before the postsynaptic neuron, the connection is actually depressed if the upstream and downstream neurons fire exactly synchronously. (There is a huge amount of variation in this across the brain, though.)
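
As a rough illustration, here is a minimal sketch of the standard pair-based STDP window (the exponential form commonly used in models; the constants are illustrative, not measured values, and the behaviour at exactly zero lag differs between models, as noted above):

  import math

  # Pair-based STDP window: potentiate when the presynaptic spike leads
  # the postsynaptic spike (dt > 0), depress when it lags (dt < 0).
  # a_plus, a_minus, tau_plus, tau_minus are illustrative constants.
  def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
      if dt_ms > 0:      # pre before post -> potentiation (LTP)
          return a_plus * math.exp(-dt_ms / tau_plus)
      if dt_ms < 0:      # pre after post -> depression (LTD)
          return -a_minus * math.exp(dt_ms / tau_minus)
      return 0.0         # exact synchrony: model-dependent; left neutral here

  print(stdp_dw(5.0))    # pre leads by 5 ms -> positive weight change
  print(stdp_dw(-5.0))   # pre lags by 5 ms  -> negative weight change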

dr_dshiv · 6 years ago
Note that there is also a mechanism for association between two presynaptic neurons. Probabilistically, when those upstream neurons fire synchronously, the downstream neuron is more likely to actually fire. When that occurs, the postsynaptic neuron will, per Hebb's postulate, strengthen its connections to the synchronously firing neurons. So "cells that fire together wire together" is more true of presynaptic neurons than of pre-to-postsynaptic pairs (and the wiring together occurs through the postsynaptic neuron).
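
A toy simulation of that probabilistic point, assuming two equal inputs onto one coincidence-detecting cell (all numbers here are made up):

  import numpy as np

  rng = np.random.default_rng(0)
  steps, p_fire, threshold = 10_000, 0.1, 2.0
  w = np.array([1.2, 1.2])   # two presynaptic inputs onto one postsynaptic cell

  # Condition 1: the two presynaptic neurons fire independently.
  # Condition 2: they always fire at the same time (synchronously).
  indep = rng.random((steps, 2)) < p_fire
  sync = np.repeat(rng.random((steps, 1)) < p_fire, 2, axis=1)

  for name, spikes in [("independent", indep), ("synchronous", sync)]:
      post = (spikes.astype(float) @ w) >= threshold   # post needs both inputs
      print(name, post.mean())   # synchronous inputs drive ~10x more post spikes

  # A Hebbian update (strengthen w_i whenever input i and the post cell fire
  # together) would therefore potentiate the synchronous pair far more often.
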
vinay427 · 6 years ago
I find it strange that the author couldn't find this in a textbook. This is rather common material in developmental neuroscience textbooks and lectures. I looked through two such books during my (only) course on the topic, and all three sources (both books plus the lectures) covered this material.
ramraj07 · 6 years ago
What did they say? Perhaps you misread what was in the textbooks? My understanding is that the author's questions are legitimate and still not fully answered.
vinay427 · 6 years ago
Yep, I agree that there are legitimate questions raised which still lack answers. However, I'm responding to the claim that the information the author provides in the summary is not found in any textbook they saw:

> So there you have it, a quick summary of one part of neural connectivity I’ve yet to see described in a textbook about the brain, but which really should be given out there, along with the classic Hebbian principle

throwitawayday · 6 years ago
The question of how neurons find each other to connect was recently studied with experimental connectomics--altering neurons and then mapping their synaptic circuits with electron microscopy--in this paper by Javier Valdes Aleman et al. 2019 https://www.biorxiv.org/content/10.1101/697763v1 , using Drosophila's somatosensory axons and central interneurons as a model.

If the Disqus comments on the OP's website worked (I can never get the "post" button to work after logging in), the above could have gone straight onto the page.

g_airborne · 6 years ago
The connectedness of neurons in neural nets is usually fixed from the start (i.e. between layers, or somewhat more complicated in the case of CNNs, etc.). If we could eliminate this and let neurons "grow" towards each other (like this article shows), would that enable smaller networks with similar accuracy? There's some ongoing research on pruning weights by finding "subnets" [1], but I haven't found any method yet where the network grows connections itself. The only counterpoint I can come up with is that it probably wouldn't generate a significant speedup, because it defeats the use of SIMD/matrix operations on GPUs. Maybe we would need chips that are designed differently to speed up these self-growing networks?

I'm not an expert on this subject, does anybody have any insights on this?

1. https://www.technologyreview.com/2019/05/10/135426/a-new-way...
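
For concreteness, here is a toy NumPy sketch of the kind of self-growing connectivity described above: a dense layer gated by a binary mask that starts nearly empty, then periodically drops its weakest active connections and grows new ones at random (loosely in the spirit of sparse-training methods; all sizes and the random-growth criterion are made up):

  import numpy as np

  rng = np.random.default_rng(0)

  # A dense weight matrix gated by a binary mask: start with very few
  # active connections, then periodically prune the weakest active ones
  # and grow new connections elsewhere.
  n_in, n_out, n_active = 64, 32, 100
  w = rng.normal(scale=0.1, size=(n_in, n_out))
  mask = np.zeros((n_in, n_out), dtype=bool)
  mask.flat[rng.choice(n_in * n_out, size=n_active, replace=False)] = True

  def forward(x):
      return x @ (w * mask)

  def rewire(n_swap=10):
      active = np.flatnonzero(mask)
      weakest = active[np.argsort(np.abs(w).ravel()[active])[:n_swap]]
      mask.flat[weakest] = False                            # prune weakest links
      inactive = np.flatnonzero(~mask)
      mask.flat[rng.choice(inactive, size=n_swap, replace=False)] = True   # grow

  x = rng.normal(size=(8, n_in))
  print(forward(x).shape)   # (8, 32)
  rewire()                  # connectivity changes between training steps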

tjwhitaker · 6 years ago
I think this is a really interesting area of machine learning. Some efforts have been made on ideas that are tangential to this one. Lots of papers in neuroevolution deal with evolving topologies; NEAT is probably the prime example (http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf), and another paper I read recently, PathNet (https://arxiv.org/abs/1701.08734), is different but very interesting.
g_airborne · 6 years ago
This is very cool! Thanks!
londons_explore · 6 years ago
I experimented with networks where weights were removed if they did not contribute much to the final answer.

My conclusion was that I could easily set >99% of the weights to zero in my (fully connected) layers with minimal performance impact after enough training, but training time went up a lot (effectively, after removing a bunch of connections, you have to do more training before removing more), and inference speed wasn't really improved because sparse matrices are sloooow.

Overall, while it works out for biology, I don't think it will work for silicon.
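
For reference, a minimal sketch of that kind of iterative prune-then-retrain loop (the actual retraining between rounds is elided; the layer size and the halve-each-round schedule are made up):

  import numpy as np

  rng = np.random.default_rng(0)
  w = rng.normal(size=(512, 512))     # weights of one fully connected layer
  mask = np.ones_like(w, dtype=bool)

  def prune_step(fraction=0.5):
      # Zero out the smallest-magnitude weights that are still active;
      # in a real run you would retrain before pruning again.
      active = np.flatnonzero(mask)
      n_prune = int(fraction * active.size)
      smallest = active[np.argsort(np.abs(w).ravel()[active])[:n_prune]]
      mask.flat[smallest] = False
      w[~mask] = 0.0

  for _ in range(7):                  # 7 rounds of halving ...
      prune_step()
  print(1.0 - mask.mean())            # ... leaves the layer >99% sparse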

pshc · 6 years ago
Would you say you found a result similar to the lottery ticket hypothesis? https://arxiv.org/abs/1803.03635
mennis16 · 6 years ago
Here is a relevant paper, which was the coolest thing I saw at this past NeurIPS: https://weightagnostic.github.io/

It is based on NEAT (as other commenters mentioned) and also ties in some discussion of the Lottery Ticket Hypothesis as you mentioned.

blamestross · 6 years ago
(See the sibling comment; NEAT is awesome.)

The only reason we architect ANNs the way we do is optimization of computation. The bipartite graph structure is optimized for GPU matrix math. Systems like NEAT have not been used at scale because they are a lot more expensive both to train and to run the trained network. ASICs and FPGAs have a chance to run a NEAT-generated network in production, but we still don't have a computer well suited to training one.

g_airborne · 6 years ago
So this might be an enormous opportunity for low-cost, more performant AI if someone were able to build an FPGA of some sort that could handle these types of computations efficiently, right?
Der_Einzige · 6 years ago
NEAT just doesn't have good, modern GPU-powered implementations.

NEAT would totally be competitive if someone actually got a version running in PyTorch/TensorFlow.

hirenj · 6 years ago
Funnily enough, over this last weekend I read a great review on this subject from earlier this year:

“Synaptic Specificity, Recognition Molecules, and Assembly of Neural Circuits” by Sanes and Zipursky

https://doi.org/10.1016/j.cell.2020.04.008

For me, the hard part has always been understanding how this whole thing is orchestrated on a cellular and molecular level.

dr_dshiv · 6 years ago
When Hebb talks about "reverberation" in neural circuits, his thinking is still ahead of our current knowledge of oscillatory neurodynamics. Here he speculates about the short-term memory trace that persists dynamically, prior to physical changes in the synapse:

"It might be supposed that the mnemonic trace is a lasting pattern of reverberatory activity without fixed locus like a cloud formation or eddies in a millpond"

From Hebb's 1949 "The Organization of Behavior"

punnerud · 6 years ago
Main point: “(..) if the target neuron already has too many connections, it will tend to remove the weakest ones, and this includes the most recent ones. The scaling goes both ways after all – it goes for more synapses when it starts with too few, but for less, if it starts with too many.

But synaptic scaling is not everything. As it turns out, the tips of the growth cone constantly produce structures called filopodia, and these react to specific chemical attractants and repellents. These chemicals are produced by both cells at the target area, and by so-called guidepost cells along the way. There are suggestions that the system for such targeting is fairly robust, especially in early development (and its limitations in later life might explain why spinal cord injuries and the like are so hard to fix).”

Mirioron · 6 years ago
This makes me want to know how quickly this type of growth happens. Is it on the order of seconds? Minutes? Hours? Days? Is this why, when you learn something, take a break, and come back later, everything makes more sense?
jcims · 6 years ago
I've noticed a strange growth curve when learning a physical skill. You suck at first, then quickly reach some kind of milestone, then get worse before you get better. It feels like my brain is attempting to delegate some of the motor activity to lower levels before they are 'ready', but in fact that might be an essential part of training those neurons.
mikhailfranco · 6 years ago
See Mastery by George Leonard, which is a great book and highly recommended even if you are not into karate or martial arts.

You have echoed his sketch of punctuated plateaus (p. 14):

The Mastery Curve

  There's really no way around it. Learning any new skill 
  involves relatively brief spurts of progress, each of
  which is followed by a slight decline to a plateau
  somewhat higher in most cases than that which preceded it.
[pdf] http://index-of.co.uk/Social-Interactions/Mastery%20-%20The%...

joveian · 6 years ago
A lot of learning involves interaction between the cerebellum and the rest of the brain, and the cerebellum has a different structure. "How the brain works" type descriptions should mention the cerebellum quite a lot, but usually don't, partly because, when I was looking at this stuff a while back, much less was known about this interaction.
Waterluvian · 6 years ago
I think I get what you're saying. You _do_ get worse, then get better. It's like the brain is taking the proof-of-concept code and rewriting it with the necessary abstractions to make it a more performant routine. And then suddenly I start to experience the skills from new_task feeding into tasks I already know, and it feels like it's happening subconsciously.
whymauri · 6 years ago
This is a bit of an open problem, to the point of being controversial. I'd hesitate to say anyone has a real answer even though we certainly have real experimental data. The philosophical takes range from:

* You never enter the same room twice.

* Your brain partially re-wires every time you sleep.

* Your brain rewires, but the way it rewires is surprisingly predictable and we can track the dynamics.

* Your brain is rewiring literally every second, but not every rewiring is functional - does this imply an implicit robustness?

Etc, etc.
