seanlinehan commented on Billion-Parameter Theories   worldgov.org/complexity.h... · Posted by u/seanlinehan
roughly · 3 days ago
There's a lot of ink spent in this on how Poverty, Climate Change, Urban Decay, and Financial Markets are Complex Hard Complicated problems.

The problem with these is they're also problems where there are actors profiting from the failure to fix the system - the issue isn't that we don't understand the complex nature of the domain, it's that the components of the system actively and agentically resist changes to the system. George Soros called this Reflexivity - the fact that the system responds to your manipulations means you can't treat yourself and the system as separate agents, and you can't treat the system as a purely mechanistic/passive recipient of your changes. It's maybe the biggest blind spot for people who want to apply the rules and methods of physics to social issues - the universe may be indifferent, but your neighbors are not.
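
To make the feedback concrete, here is a toy simulation (my own sketch, not Soros's formalism; all numbers and names are made up for illustration) contrasting a passive system with one whose actors adapt against the intervention:

```python
# Toy model: a "passive" system absorbs the same intervention every
# round, while a "reflexive" one contains actors who adapt against it,
# eroding its effect over time.

def run(adaptive: bool, rounds: int = 20) -> float:
    harm, countermeasures = 10.0, 0.0
    for _ in range(rounds):
        effect = max(0.0, 2.0 - countermeasures)  # what the policy removes this round
        harm = max(0.0, harm - effect)
        if adaptive:
            countermeasures += 0.5 * effect       # profiting actors push back
        harm += 1.0                               # the underlying problem regenerates
    return harm

print(f"passive:   {run(adaptive=False):.1f}")  # stays low: the fix keeps working
print(f"reflexive: {run(adaptive=True):.1f}")   # climbs: the system routed around it
```

Under these toy assumptions the passive system settles near its target while the reflexive one drifts away, which is the gap being pointed at here.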

seanlinehan · 3 days ago
Reflexivity is nodded to in the definition of complex systems in the piece!

I think what you're saying is that poverty is actually simple, and the solution is to stop the bad actors causing poverty? But at the same time, you're correctly recognizing that attempts to stop bad actors from causing poverty trigger reflexive responses and cascading repercussions. Which sounds mighty like a complex system?

seanlinehan commented on Billion-Parameter Theories   worldgov.org/complexity.h... · Posted by u/seanlinehan
pash · 3 days ago
> Even for billion-parameter theories, a small amount of vectors might dominate the behaviour.

We kinda-sorta already know this is true. The lottery-ticket hypothesis [0] says that every large network contains a randomly initialized small network that performs as well as the overall network, and over the past eight years or so researchers have indeed managed to find small networks inside large networks of many different architectures that demonstrate this phenomenon.

Nobody talks much about the lottery-ticket hypothesis these days because it isn’t practically useful at the moment. (With the pruning algorithms and hardware we have, pruning is more costly than just training a big network.) But the basic idea does suggest that there may be hope for interpretability, at least in the odd application here or there.

That is, the (strong) lottery-ticket hypothesis suggests that the training process is a search through a large parameter space for a small network that already (by random initialization) exhibits the desired behavior of the overall network; updating parameters during the training process is mostly about turning off the irrelevant parts of the network.
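
To make that concrete, here is a minimal sketch of one round of magnitude pruning in the lottery-ticket style (the function names and the stand-in "training" step are illustrative assumptions, not taken from any particular paper or library):

```python
import numpy as np

def lottery_ticket_mask(trained: np.ndarray, prune_fraction: float) -> np.ndarray:
    """0/1 mask keeping the largest-magnitude weights after training."""
    threshold = np.quantile(np.abs(trained), prune_fraction)
    return (np.abs(trained) > threshold).astype(trained.dtype)

def winning_ticket(init: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Rewind the surviving weights to their original random initialization."""
    return init * mask

rng = np.random.default_rng(0)
w_init = rng.normal(size=(256, 256))                         # random initialization
w_trained = w_init + rng.normal(scale=0.1, size=(256, 256))  # stand-in for training
mask = lottery_ticket_mask(w_trained, prune_fraction=0.8)    # drop 80% of weights
ticket = winning_ticket(w_init, mask)
print(f"weights kept: {mask.mean():.0%}")                    # ~20% of the network
```

In the classic recipe this prune-and-rewind step is iterated and the masked network is retrained from the rewound weights; the strong version of the hypothesis says a mask exists for which the rewound initialization already performs well without any retraining.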

For some applications, one would think the sub-network hiding in there might be small enough to be interpretable. I won't be surprised if, some day not too far into the future, scientists investigating neural networks start to identify good interpretable models of phenomena of intermediate complexity (phenomena too complex to be amenable to classic scientific techniques, but simple enough that neural networks trained to exhibit them yield unusually small active sub-networks).

0. https://en.wikipedia.org/wiki/Lottery_ticket_hypothesis

seanlinehan · 3 days ago
Super interesting, I've never heard of this before. Thanks for sharing!

seanlinehan commented on Billion-Parameter Theories   worldgov.org/complexity.h... · Posted by u/seanlinehan
niemandhier · 4 days ago
He talks about the Santa Fe Institute and how they failed to carry their findings into the real world.

They did not.

They showed that for certain problems one could not do more than figure out some invariants and scaling laws. Showing what is impossible is not failure.

For the rest: modern gene networks and a lot of biological modelling are based on their work, as are quite a few other things. That's also not failure.

I agree that modern AI is alchemy.

seanlinehan · 4 days ago
True -- I didn't mean to communicate that Santa Fe was a failure writ large. Their contribution was very important!

Though I think it's fair to say that the torch was picked up and carried by others with a different set of strategies.

seanlinehan commented on Billion-Parameter Theories   worldgov.org/complexity.h... · Posted by u/seanlinehan
ileonichwiesz · 4 days ago
This might be an unkind reading, but to me this just sounds like an attempt to reinvent the very same kind of mysticism that it mentions in the first paragraph.

“No need to study the world around you and wonder about its rules, peasant - it’s far beyond your understanding! Only ~the gods~ computers can ever know the truth!”

I shudder to think about a future where people give up on working to understand complex systems because it’s hard and a machine can do it better, so why bother.

seanlinehan · 4 days ago
Not the intention at all. The part about mechanistic interpretability was meant to gesture at how building such systems can provide a new toolkit for building further intuition and understanding.

seanlinehan commented on Billion-Parameter Theories   worldgov.org/complexity.h... · Posted by u/seanlinehan
dakiol · 4 days ago
> You could capture the behavior of every falling object on Earth in three variables and describe the relationship between matter and energy in five characters.

What we can do is approximate. Newton had a good approximation of gravitation some time ago (force equals a constant times the product of two masses divided by the distance squared; super readable indeed). But nowadays there's a better one that doesn't look like Newton's theory (Einstein's field equations, which are compact but nothing like Newton's). So, what if in 1000 years we have a yet better approximation of gravity, but it's encoded in millions of variables (perhaps in the form of a neural network in some futuristic AI model)?
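
For reference, the two formulations being contrasted, in their standard textbook forms (my addition for concreteness, not quoted from the article):

```latex
% Newton: one constant, two masses, one distance
F = G \, \frac{m_1 m_2}{r^2}

% Einstein: compact tensor notation concealing ten coupled,
% nonlinear partial differential equations
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} \, T_{\mu\nu}
```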

My point is: whatever we know about the universe now hasn't necessarily "captured" the underlying essence of the universe. We approximate. Approximations are useful and handy and will move humanity forward, but let's not forget that "approximations != truth".

If we ever discover the underlying "truth" of the universe, we would look back and confidently say "Newton was wrong". But I don't think we will ever discover such a thing, so, sure, approximations are our "truth", but sometimes people forget that.

seanlinehan · 4 days ago
Agreed!

seanlinehan commented on Billion-Parameter Theories   worldgov.org/complexity.h... · Posted by u/seanlinehan
curuinor · 4 days ago
Connectionist models have lots of theory behind them from theoreticians explicitly pissed off about Chomsky's assertion that there is an inbuilt ability for language. Jay McClelland's office had a little corkboard thingy with Chomsky mockery on the side, for example. Putting forth even the implicature that connectionism's present-day descendants are intellectual descendants of Chomsky is like saying Protestants are intellectual descendants of Pope Leo X.

seanlinehan · 4 days ago
Perhaps a failure of communication -- I was indeed attempting to say that Chomsky was wrong: his ideas were interesting, but more or less a dead end.

u/seanlinehan

Karma: 1042 · Cake day: October 11, 2011
About
CEO Exec.com

Previously VP Product at Flexport

Twitter @seanlinehan
