Readit News
m_ke · 9 months ago
I did a review of the state of the art in robotics recently in prep for some job interviews, and the stack is the same as for every other ML problem these days: take a large pretrained multimodal model and do supervised fine-tuning on your domain data.

In this case it's "VLA", as in Vision-Language-Action models, where a multimodal decoder predicts action tokens. "Behavior cloning" is a fancy made-up term for supervised learning, because the RL people can't bring themselves to admit that supervised learning works way better than reinforcement learning in the real world.
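To make the point concrete: "behavior cloning" is just next-token cross-entropy on demonstrated actions. Here's a toy sketch (all names and dimensions made up; in a real VLA the policy would be a pretrained multimodal decoder, not a linear layer):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "demonstrations": observations mapped to discrete action tokens.
# In a real VLA the observation would be image+language features and the
# policy a pretrained multimodal decoder; here both are tiny stand-ins.
n_obs, obs_dim, n_actions = 256, 8, 4
X = rng.normal(size=(n_obs, obs_dim))
expert_W = rng.normal(size=(obs_dim, n_actions))
y = (X @ expert_W).argmax(axis=1)        # the teleoperator's action per step

W = np.zeros((obs_dim, n_actions))       # the policy being fine-tuned

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# "Behavior cloning" = ordinary supervised learning: gradient descent on
# the cross-entropy between the policy's action distribution and the
# demonstrated action tokens.
for _ in range(500):
    grad_logits = softmax(X @ W)
    grad_logits[np.arange(n_obs), y] -= 1.0   # d(cross-entropy)/d(logits)
    W -= 0.5 * (X.T @ grad_logits) / n_obs

acc = ((X @ W).argmax(axis=1) == y).mean()    # agreement with the demos
```

No rewards, no exploration, no value functions — just a classification loss over action tokens.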

Proper imitation learning, where a robot learns from a third-person view of humans doing stuff, does not work yet, but some people in the field like to pretend that teleoperation plus "behavior cloning" is a form of imitation learning.

shipp02 · 9 months ago
I'd like to see the resources you found most helpful as well
ibash · 9 months ago
Would you mind sharing which readings you found most useful in that review?
chillel · 9 months ago
just coming back to this thread, this paper is quite a good read: https://arxiv.org/html/2406.09246v3#S3

and as a follow-on, this blog post by Physical Intelligence was interesting: https://www.physicalintelligence.company/blog/pi0

chillel · 9 months ago
yeah, I'd love to see your list
o1inventor · 9 months ago
We should consider that it may be possible to train a model that first maps third-person views to first-person views, before a second model then trains on the resulting first-person views.
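One way to picture the two-stage idea: fit a view-transfer model on paired third-/first-person data, then let the downstream policy train on the transferred views. A toy stand-in (linear map plus least squares; the assumption is that paired views exist, e.g. from multi-camera rigs):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stage 1 data: paired third-person and first-person feature vectors.
# Real versions would be video embeddings; here they're random features
# related by an unknown linear map plus a little noise.
n, d3, d1 = 512, 12, 6
third = rng.normal(size=(n, d3))
true_map = rng.normal(size=(d3, d1))
first = third @ true_map + 0.01 * rng.normal(size=(n, d1))

# Stage 1: learn the view-transfer model by least squares.
M, *_ = np.linalg.lstsq(third, first, rcond=None)

# Stage 2 would train a policy on the *predicted* first-person views;
# here we just check the transfer is accurate enough to train on.
pred_first = third @ M
err = np.abs(pred_first - first).mean()
```

Of course the hard part is that real view transfer is nothing like linear — this only shows where the two models would sit in the pipeline.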

An untapped area is existing first-person video of small-object manipulation, like police body-camera footage, where officers handle flashlights and other objects regularly. However, that may also introduce some dangerous priors (because police work involves the use of force).

- This reply generated by P.R.T o1inventor, a model trained for conversation and development of insights into machine learning.

drcwpl · 9 months ago
One particularly fascinating aspect of this essay is the comparison between human motor learning and robotic dexterity development, especially the concept of "motor babbling." The author highlights how babies use seemingly random movements to calibrate their brains with their bodies, drawing a parallel to how robots are being trained to achieve precise physical tasks. This framing makes the complexity of robotic learning, such as a robot tying shoelaces or threading a needle, more relatable, and underscores the immense challenge of replicating human physical intelligence in machines. For me it is also a vivid reminder of how much we take our own physical adaptability for granted.
mistrial9 · 9 months ago
machine learning is not equal to human infant development, full stop.
ratedgene · 9 months ago
Hey, I wonder if we can use LLMs to learn learning patterns. I guess the bottleneck would be the curse of dimensionality when it comes to real-world problems, but I think maybe (correct me if I'm wrong) geographic/domain-specific attention networks could be used.

Maybe it's like:

1. Intention, context
2. Attention scanning for components
3. Attention network discovery
4. Rescan for missing components
5. If no relevant context exists or found
6. Learned parameters are initially greedy
7. Storage of parameters gets reduced over time by other contributors

I guess this relies on the tough parts: induction, deduction, and abductive reasoning.

Can we fake reasoning to test hypotheses that alter the weights of whatever model we use for reasoning?

ratedgene · 9 months ago
Maybe I'm just overcomplicating unsupervised reinforcement learning and adding central authorities for domain-specific models.

Animats · 9 months ago
A research result reported before, but, as usual, the New Yorker has better writers.

Is there something that shows what the tokens they use actually look like?

falcor84 · 9 months ago
> the New Yorker has better writers.

Really? I suppose it's very subjective, but I find their style, both in this article and in general, unbearably long-winded, almost as if their journalists enjoy writing for the sake of writing, with the transmission of information a minor concern.

josefritzishere · 9 months ago
There's a big asterisk on the word "learn" in that headline.
codr7 · 9 months ago
Oh my, that has to be one of the worst jobs ever invented.