radarsat1 · 4 years ago
This reminded me a great deal of a paper I read about 10 years ago, which I just looked up because I wanted to see if it was cited here: "Distilling Free-Form Natural Laws from Experimental Data" [0]. Lo and behold, it is not only cited, it was written by the same authors. So the novelty here, as discussed in the paper [1], is that they are doing something similar but from video instead of from sensor streams. Which is quite interesting, as it opens up the available information to systems that are hard to 'sense' other than by pointing a camera at them, like the lava lamp example.

I did see a presentation a few years ago on a similar topic by Erwin Coumans, author of the Bullet physics engine, who was discussing neural inference of physics. He basically wanted to replace Bullet with a neural network, which I thought was kind of funny at the time, but cool if it worked. (Physical laws at this level of mechanics are mostly quite well understood, but the devil is in the details: actually solving time-discretized non-smooth dynamics and performing good contact detection are less than obvious the deeper you get into it, not to mention friction. So I could see why a "learned" solution could be attractive, but at the time I found it funny that he was proposing to learn the physics that were already simulated by Bullet.) Looking back, it appears that those ideas culminated in a paper [2] two years ago, which I'll have to read now -- from the abstract, the differentiability of the simulation has benefits not only for learning the physics (mostly friction models, it appears), but for learning controllers for plants within the simulated physics. It doesn't seem to be cited in the linked article, but maybe it is a slightly different, though related, topic. In any case, fascinating stuff.

[0]: https://www.science.org/doi/10.1126/science.1165893

[1]: https://arxiv.org/abs/2112.10755

[2]: https://arxiv.org/abs/2011.04217

qikInNdOutReply · 4 years ago
Out of interest, how would such a system derive material properties from a still 3d scene?

We know the leaves of a house plant will bow in the wind because we observed it in the wild, several times, as children. Even given non-still (video) training material, the NN would have to learn all of the physical properties of all the things from scratch, like a human.

Training a physics model like this to derive the material properties of the world is bound to produce hilarious failures: a mountain with smoke hanging over it is obviously a house plant.

defterGoose · 4 years ago
I would think that what you're suggesting could be done with a bunch of Fourier transforms. If there is no relative motion of the camera, you could assume that objects with high frequencies are less massive than objects with low frequencies, which could give you a useful bound on their mass/density (assuming a fairly uniform energy density in the scene).
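A hypothetical sketch of that frequency idea (everything here is made up for illustration): take one scalar time series per object, e.g. the mean brightness of its pixels, and read off the dominant frequency from an FFT. For a mass on a spring, f = sqrt(k/m)/(2*pi), so at comparable stiffness a higher peak frequency would suggest a lighter object.

```python
import numpy as np

def dominant_frequency(signal, fps):
    """Return the peak frequency (Hz) of a real-valued signal sampled at fps."""
    signal = signal - np.mean(signal)           # drop the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]

# Two simulated "objects" filmed at 60 fps: a light one oscillating at 8 Hz
# and a heavy one at 2 Hz.
fps = 60
t = np.arange(0, 10, 1.0 / fps)
light = np.sin(2 * np.pi * 8 * t)
heavy = np.sin(2 * np.pi * 2 * t)

f_light = dominant_frequency(light, fps)
f_heavy = dominant_frequency(heavy, fps)
print(f_light, f_heavy)                         # ~8.0 and ~2.0
```

Real footage would of course mix camera shake, occlusion, and broadband motion into the spectrum, which is where the uniform-energy assumption does a lot of work.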

For my own curiosity, I wonder if some of the variables they're missing are based more on the context of the video, rather than the content. Like, colors, for instance. Maybe if you ran everything through in black and white you could get it closer to an integer lower bound?

digdugdirk · 4 years ago
I'd imagine you could go frame by frame to identify points of interest and then "predict" the track they should follow into the next frame, training the model on each point of interest as you go.

In this case, you wouldn't be aiming to identify the material properties specifically, but given a model that's tightly fitted enough, there should be constants that stand in for the missing material properties when it's all said and done.

Just an initial thought; I'd love to hear from those in the field why I'm wrong.
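A purely illustrative toy version of that frame-by-frame idea (not the paper's method): fit a simple motion model to a tracked point's past positions and extrapolate one frame ahead. The fitted constants then act as the stand-ins for physical properties -- here the quadratic term recovers gravity.

```python
import numpy as np

def predict_next(track, dt):
    """Fit y(t) = a*t^2 + b*t + c to a 1-D track, extrapolate one frame."""
    t = np.arange(len(track)) * dt
    coeffs = np.polyfit(t, track, deg=2)     # least-squares quadratic fit
    return np.polyval(coeffs, t[-1] + dt), coeffs

# Simulated ball in free fall, "filmed" at 30 fps.
dt, g = 1.0 / 30, 9.81
t = np.arange(0, 1, dt)
y = 10.0 + 2.0 * t - 0.5 * g * t**2          # y0 + v0*t - g*t^2/2

y_next, coeffs = predict_next(y[:-1], dt)    # hold out the last frame
print(y_next, y[-1])                         # prediction vs. ground truth
print(-2 * coeffs[0])                        # recovered constant, ~9.81
```

With real tracks the fit wouldn't be exact, but the same idea holds: the model constant stands in for the material/physical property without anyone labeling it "gravity".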

ok_dad · 4 years ago
That's interesting, thanks for sharing. I was always bearish on neural networks when advertised as "AI", but now I think that, when used as a tool rather than a panacea, they could be very useful. Humans learn to operate heavy machinery in industrial settings through experience and use, via our own neural networks, and we have an incomplete understanding of physics. So why not use a neural network to learn the physically best way to drive an industrial machine, with the physics as a backup for safety? It seems to work well based on all these stories so far, so maybe learned neural networks are the next differential equations for engineering.
radarsat1 · 4 years ago
The trick is to remember that a neural network is a function approximator. A good deal of AI research is in the business of casting "intelligence" as a "function" so that you can pose a problem and figure out how to feed it data, i.e., well-defined inputs and outputs. That's why different function approximators can be used for AI, such as decision trees, etc., not just neural networks. What to model, and how to model it, are orthogonal problems, both interesting in their own right. It just happens that NNs are particularly good at handling high-dimensional inputs, which are necessary for perception tasks, as well as for large-vocabulary language modeling, so they are doing rather well lately.

On the other hand, there are lots of places where "functions" are useful, that have nothing to do with "intelligence", but where actually writing the function with full fidelity can be difficult or intractable. These are also opportunities where learnable function approximation can provide some great benefit, provided you can figure out how to pose the problem such that data is available for the learning part.
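A minimal toy illustration of "a neural network is a function approximator" (nothing beyond NumPy; to keep the sketch deterministic, the hidden layer is random and fixed, and only the linear readout is solved by least squares -- a "random features" shortcut, not how any of the cited papers actually train):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 256)[:, None]   # inputs
y = np.sin(x)                                  # target function to approximate

H = 32                                         # hidden width
W1 = rng.normal(0.0, 1.0, (1, H))              # fixed random input weights
b1 = rng.normal(0.0, 1.0, H)                   # fixed random biases
h = np.tanh(x @ W1 + b1)                       # hidden features, shape (256, 32)

W2 = np.linalg.lstsq(h, y, rcond=None)[0]      # solve the readout exactly
pred = h @ W2
mse = float(np.mean((pred - y) ** 2))
print(mse)                                     # tiny: the net fits sin(x) well
```

The point is only the framing: once a problem is posed as input/output pairs, "learning" is curve fitting in a (possibly very) high-dimensional space.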

A good example is physically based rendering: there are lots of complicated aspects of light transport for which we can model the physics quite well, but when you get into complicated reflections and scattering, etc., this can all be wrapped up in a complicated function called the BRDF [0]. Hand-writing a good BRDF is possible, and in fact quite typical of high-end renderers, but it's no surprise that there has been research into replacing it with a "neural BRDF" [e.g. 1].

That's just to give an example of a place where a single, very targeted and small (but complicated) part of a larger framework can stand to benefit from data-driven modeling, and a neural network can be one good way to do that. Another example is similar usage in computational fluid dynamics [2], where we can hand-write a pretty good model, but to capture what is missing, it can be useful to have an approximator. The problem there is that having approximated the function well doesn't necessarily lead to better human understanding of the phenomenon. Discovering the true, sparse latent variables in a way that is interpretable, instead of a black box, is a useful step towards that. (Which I guess the current article is aiming at, but I haven't read it in full yet.) But sometimes all you want is results, as in the case of synthesizing a good controller. For example in CFD, if you can use the black-box model to generate a good stabilizer for an ocean platform, you don't really care about the physics, as long as it's accurate enough to be trustworthy. So the utility of these methods, like most things, is relative to your goals.

[0]: https://pbr-book.org/3ed-2018/Color_and_Radiometry/Surface_R...

[1]: https://arxiv.org/abs/2111.03797

[2]: https://github.com/loliverhennigh/Computational-Fluid-Dynami...

dls2016 · 4 years ago
Piggybacking here... but this dimension-reduction or system-identification stuff always reminds me of Takens' theorem [0]. He even did some neural network stuff back in the "dark ages" of the topic [1].

[0] https://en.wikipedia.org/wiki/Takens%27s_theorem

[1] https://clgiles.ist.psu.edu/papers/NC-2000-learning-chaos-nn...
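For anyone curious, the delay-coordinate embedding behind Takens' theorem fits in a few lines (a toy sketch with a made-up example, not code from either reference): from a single scalar observable x(t), build vectors [x(t), x(t + tau), x(t + 2*tau), ...] whose trajectory is, under the theorem's conditions, diffeomorphic to the full state-space attractor.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Stack delayed copies of a 1-D series into (N, dim) embedding vectors."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Observe only one coordinate of a harmonic oscillator...
t = np.linspace(0, 20, 2000)
obs = np.cos(t)

# ...and recover a 2-D state from delays alone. For tau of a quarter period,
# the embedding is (cos t, -sin t): the full circle in phase space.
tau = int(np.argmin(np.abs(t - np.pi / 2)))  # samples in a quarter period
emb = delay_embed(obs, dim=2, tau=tau)
radii = np.linalg.norm(emb, axis=1)
print(radii.min(), radii.max())              # ~1: the points lie on a circle
```

The neural-network angle in [1] is essentially learning dynamics on top of such reconstructed states.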

adamnemecek · 4 years ago
This is not surprising, considering this is just a fixed point or diagonalization. I have written up a bit on this topic:

https://github.com/adamnemecek/adjoint

t_mann · 4 years ago
Thanks for those links! Back in school, I loved wondering about whether things like the inverse square law for gravity could be (have been) discovered from raw experimental data via ML - cool to see that it's actually been done already.
CrimpCity · 4 years ago
This is cool, and I personally believe this type of work may lead to breakthroughs in messy, data-rich fields like biology, where we can arrive at higher levels of abstraction -- maybe not exactly "laws" like physics, but highly correlative rules around phenomena. I think this is more on the side of knowledge creation and is human-friendly, as opposed to the more black-box predictions of deep neural networks. Though I think the two are complementary, since human curiosity isn't satisfied by prediction alone.

If anyone else is interested in this line of work, I recommend checking out Kathleen Champion, Steve Brunton, and J. Nathan Kutz's work on "Discovering governing equations from data by sparse identification of nonlinear dynamical systems" (https://www.pnas.org/doi/full/10.1073/pnas.1517384113).
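The core of that SINDy paper, sequentially thresholded least squares over a library of candidate terms, can be sketched in miniature (a toy reimplementation, not the authors' code): regress time derivatives onto candidate terms, then repeatedly zero out small coefficients and refit, so only the true sparse dynamics survive.

```python
import numpy as np

# System to rediscover: harmonic oscillator  dx/dt = -y,  dy/dt = x.
t = np.linspace(0, 10, 1000)
x, y = np.cos(t), np.sin(t)
dx, dy = -np.sin(t), np.cos(t)        # exact derivatives, for clarity

# Candidate library: polynomials up to degree 2.
theta = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
names = ["1", "x", "y", "x^2", "x*y", "y^2"]

def sparse_fit(theta, d, thresh=0.1, iters=10):
    """Sequentially thresholded least squares, the core SINDy step."""
    xi = np.linalg.lstsq(theta, d, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < thresh
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big] = np.linalg.lstsq(theta[:, big], d, rcond=None)[0]
    return xi

xi_x = sparse_fit(theta, dx)          # recovers dx/dt = -1.0 * y
xi_y = sparse_fit(theta, dy)          # recovers dy/dt = +1.0 * x
print(dict(zip(names, np.round(xi_x, 3))))
print(dict(zip(names, np.round(xi_y, 3))))
```

The interpretability comes from the sparsity: the surviving terms are a human-readable equation, not a black-box predictor.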

Also this intro video is great! https://youtu.be/Z-l7G8zq8I0

jamesakirk · 4 years ago
Thank you for mentioning J. Nathan Kutz! Reading through this article, I saw similarities to Dynamic Mode Decomposition (I am not literate enough on the topic to elaborate). His Coursera courses and book were a fascinating dive into orthogonal basis functions, low-rank approximations like PCA... I'm not sharp enough anymore (over a decade since grad school) to fully grok it, but damn, his work is so cool!
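For the curious, "exact DMD" in its textbook form is quite short (a sketch with a made-up rotation system, nothing specific to the article): from snapshot pairs of a linear system x_{k+1} = A x_k, it recovers A's eigenvalues, and hence oscillation frequencies and growth rates, via an SVD-projected least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(1)
angle = 0.1
A = np.array([[np.cos(angle), -np.sin(angle)],   # a pure rotation, so the
              [np.sin(angle),  np.cos(angle)]])  # eigenvalues are e^{+-0.1i}

snaps = [rng.normal(size=2)]                     # random initial state
for _ in range(50):
    snaps.append(A @ snaps[-1])
X = np.column_stack(snaps[:-1])                  # states at time k
Y = np.column_stack(snaps[1:])                   # states at time k+1

# Project onto the left singular vectors of X and fit the reduced operator.
U, s, Vh = np.linalg.svd(X, full_matrices=False)
Atilde = U.T @ Y @ Vh.T @ np.diag(1.0 / s)
eigvals = np.linalg.eigvals(Atilde)
print(eigvals)                                   # complex pair near e^{+-0.1i}
```

On real data the snapshots come from measurements rather than a known A, and the SVD truncation is what gives the low-rank approximation flavor mentioned above.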
a_nop · 4 years ago
I'll be tickled when they discover the AI is measuring the lens distortion of the camera or some other artifact of the signal chain.
Tao3300 · 4 years ago
I was also wondering if one or more of these variables pertain to perception. Digital artifacts in the video being mistaken for machine elves.
bufferoverflow · 4 years ago
Or the noise specific to that camera sensor.
twobitshifter · 4 years ago
Seems likely that it would; our eyes have to do the same thing, dealing with saccades, blind spots, inversions, distortion, and more.
a-dub · 4 years ago
more cameras. more physical systems. different settings. more data. :)
ndsipa_pomu · 4 years ago
I'm not seeing the significance of that. Surely there are plenty of alternative descriptions of physics/systems that can be equally predictive, and certainly there are lots of equivalences for shifting from one framework to another.
samloveshummus · 4 years ago
It's true that there are many mathematically equivalent ways to describe physical systems. But the important point is that some are more useful than others. For example, Lagrangian mechanics and Hamiltonian mechanics are equivalent to Newtonian mechanics, but they can give much better intuition for certain problems. Feynman diagrams are equivalent to grinding out the QFT algebra by hand à la Schwinger, but they give a completely different intuition for the underlying Physics.

More importantly, though, they could use this NN on systems that have not yet successfully been modeled, perhaps complex dynamical systems, to discover good parameters and conserved quantities.
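To make the "equivalent formulations, different intuitions" point concrete, here is the standard textbook example (nothing specific to this thread's paper): the same one-dimensional particle written three ways.

```latex
% Newtonian: state the force law directly.
m\ddot{x} = -\frac{\partial V}{\partial x}

% Lagrangian: define L = T - V and apply the Euler-Lagrange equation.
L = \tfrac{1}{2}m\dot{x}^2 - V(x), \qquad
\frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = 0
\;\Rightarrow\; m\ddot{x} = -\frac{\partial V}{\partial x}

% Hamiltonian: Legendre transform to H = T + V, use Hamilton's equations.
H = \frac{p^2}{2m} + V(x), \qquad
\dot{x} = \frac{\partial H}{\partial p}, \qquad
\dot{p} = -\frac{\partial H}{\partial x}
```

Identical dynamics in all three; which is most illuminating depends on the problem and on which quantities (symmetries, conserved ones) you want to be manifest.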

sbf501 · 4 years ago
> For example, Lagrangian mechanics and Hamiltonian mechanics are equivalent to Newtonian mechanics, but they can give much better intuition for certain problems. Feynman diagrams are equivalent to grinding out the QFT algebra by hand à la Schwinger, but they give a completely different intuition for the underlying Physics.

I just read about Lagrangian and Hamiltonian mechanics. I didn't encounter those at all in my EE physics, and they are fascinating. Great examples! Are you a physics professor, or is this stuff undergrad physics majors learn?

MauranKilom · 4 years ago
> More importantly, though, they could use this NN on systems that have not yet successfully been modeled, perhaps complex dynamical systems, to discover good parameters and conserved quantities.

That would only make sense to try if the model could do this for systems we already understand. By the sound of the article, it can't even do that: despite many efforts, the researchers couldn't even interpret the second pair of parameters. That doesn't correspond to my understanding of "good parameters".

taneq · 4 years ago
> OK so we know the answer to this question is 4, let's check our new software against that.

> [software returns 4.7]

> Oh my GOD, it's discovered new physics!

coliveira · 4 years ago
Good summary. There's no end to the kind of delusion some AI proponents will cling to. In a few days they'll say this is another proof we live in a simulation!
Rerarom · 4 years ago
The new physics is not in the 4.7 but in the fact that the model's third and fourth variables seem to be new compared to the known models.
YeGoblynQueenne · 4 years ago
They seem to be uncorrelated with real-world physics. Whether they are "new" is anybody's guess. They're variables identified in a virtual environment rather than the real world, so there's very little chance they correspond to something in the real world, let alone physics, much less "new physics".
toxicFork · 4 years ago
At least it wasn't 42
lisper · 4 years ago
Worth noting: the concept of "energy" was controversial for a very long time.

https://en.wikipedia.org/wiki/Conservation_of_energy#History

comboy · 4 years ago
Well, it does seem like a "made up", not directly measurable variable. We just have it for convenience of understanding the world. Or am I missing something?
lisper · 4 years ago
Nothing is "directly measurable". Everything in science is a story that we make up to explain our personal experience, including "measurement" and "observation".
alok-g · 4 years ago
According to Noether's theorem, the law of conservation of energy is equivalent to the symmetry of the laws (equations) of physics with respect to time translation.
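In symbols (the standard textbook one-liner, sketched rather than derived): if the Lagrangian has no explicit time dependence, the energy function built from it is conserved.

```latex
E = \dot{q}\,\frac{\partial L}{\partial \dot{q}} - L, \qquad
\frac{dE}{dt} = -\frac{\partial L}{\partial t} = 0
\quad \text{whenever } L = L(q, \dot{q}) \text{ has no explicit } t.
```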
_hcuq · 4 years ago
My current thinking (I'm a total dilettante...) is that energy is a state of a closed system, the system of course being defined by the observer. For example, a cart halfway up a hill: does it have positive or negative potential energy? Depends on how you define your system.
lowbloodsugar · 4 years ago
Having been programming a while, when confronted with "Well, it got the answer 4.7, and we figured out 2, and we can't figure out what the other 2.7 are", I wouldn't leap to "Wow, the AI has found a whole new physics". I'd go with "bug" until I could very concretely demonstrate otherwise.
rch · 4 years ago
From the paper:

> code to reproduce our training and evaluation results is available at the Zenodo repository and GitHub (https://github.com/BoyuanChen/neural-state-variables).