nee1r · 19 days ago
Hey guys! I’m Neel, been holed up in our South Park office for the past year working on model training. Excited to share our research!

This is a preview of a very different type of computer-use model—we train on the internet. Specifically, we have 11 million hours of computer video stored on our storage cluster (previously shared https://news.ycombinator.com/item?id=45438496 !) and the model can work at 30 FPS. Since we match the fundamental form factor of computer use, we can get our model to do CAD, browse websites, and even drive a car using arrow keys. I’m super excited to see what our model can do as we scale more, it's a fun frontier to work on (not language models :) ).

The team and I will be online responding to the comments, so drop any questions.

ilaksh · 17 days ago
How do I access this? Any HF or API coming?

Any benchmark comparisons to Fara-7B, Sonnet 4.6, Qwen 3.5, etc.?

AndrewKemendo · 17 days ago
This looks like a really promising approach

In particular, the forward rollout module is very important. It aligns your (effectively) world model with what it expects from the world, and keeping those in sync, I think, gives this the power it needs to generate the state-action pairs to continuously train semi-supervised.

dangoodmanUT · 17 days ago
11 million hours of data is a lot, did you have to synthesize it at all, or was it purely collected?
nee1r · 17 days ago
collected! no synthetic
dr_dshiv · 16 days ago
Cool! Isn’t this what Cursor initially tried to do before they pivoted? Hence Cursor?

Must have been really hard. What was the breakthrough?

xianshou · 16 days ago
Great work! Why no benchmarks though?
arkmm · 17 days ago
Get ready for the acquisition offers.
kylenessen · 17 days ago
This seems like really great research, and the first time I’ve seen overwhelming praise on HN. Congrats!

I wanted to comment, though, that your title is not doing you any favors, and I suspect that is why this is not getting more traction (which it deserves). I fully expected some half-baked GitHub repo, but instead found something truly awesome.

To use your own words, Neel, “a very different type of computer use model” would have had me clicking faster. I’m not great at titles, however, and maybe there are better ideas out there.

Anyway, can’t wait to see how this develops! Especially looking forward to the CAD work.

nee1r · 17 days ago
cool, thanks for the title idea!! hopefully when we scale up in the next month or two we can update the community
clemvonstengel · 19 days ago
I really liked the point about ctrl-c only being labellable retrocausally. I do think that with enough past context you should be able to know what was copied - in some sense the past does encode the future - but also an agentic decision is precisely the kind where the future is more informative than the past for reconstructing that decision.

It does make me wonder if you should have the inverse dynamics model split into specifically retrocausal and causal. You kind of do this already with the inverse and forward dynamics model, but the idea of a model that knows only about the future training in a feedback loop with a model that knows only about the past is kind of interesting.

I think you could just do a clever masking regime in your diffusion model to achieve the same effect without a whole architecture change.
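
Something like this, sketched with plain numpy attention masks (purely illustrative, and nothing to do with the actual architecture): one view sees only the past, the other only the future, and you fuse the two when labeling the action at frame t.

```python
import numpy as np

def causal_mask(t: int) -> np.ndarray:
    """Each frame may attend only to the past (and itself)."""
    return np.tril(np.ones((t, t), dtype=bool))

def reverse_causal_mask(t: int) -> np.ndarray:
    """Each frame may attend only to the future (and itself)."""
    return np.triu(np.ones((t, t), dtype=bool))

# One head labels the action at frame t from frames <= t, the other from
# frames >= t; combining the two views gives the causal/retrocausal split
# without a whole architecture change.
past_view, future_view = causal_mask(8), reverse_causal_mask(8)
```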

g413n · 19 days ago
yeah, we actually had some wacky ideas with CTC + a reverse-causal mask, but diffusion just makes it all a bit simpler
cs702 · 17 days ago
At first glance, this looks incredible to me. The authors train one model on 40K hours of computer-use video, previously labeled by contractors with keyboard and mouse actions, then use that model, in effect, to label 11M hours of computer-use video, which they use to train the computer-action model. The key advance is in compression. Quoting from the OP:

> [previous models] burn a million tokens to understand just one minute of 30 FPS computer data. Our video encoder encodes nearly 2 hours of video in the same number of tokens—that’s 50x more token-efficient than the previous state-of-the-art and 100x more token-efficient than OpenAI’s encoder.

While I was already aware that there are people working on new, more efficient "world models," this is the first one I've seen in action. I'm a bit in shock at how good it is, quite frankly.

I've added the OP, as well as a related 2018 paper on Behavioral Cloning from Observation (BCO), to my reading list.[a] So far, I've only skimmed the 2018 paper, but it's already evident that it's well-written. I'm no expert in deep RL, and I can understand it. BTW, "Behavioral Cloning from Observation" is a really good name, with an easy-to-remember acronym.

Thank you for sharing this on HN.

[a] https://arxiv.org/abs/1805.01954

nee1r · 17 days ago
yeah! i love the BCO paper, i think it's extremely intuitive, and these methods are really interesting in a time when data without labels is abundant. i especially like the idea of iteratively making the inverse dynamics better—might lean closer to that in the future
cs702 · 16 days ago
> i especially like the idea of iteratively making the inverse dynamics better

Same here.

The notion of inducing these models to "hypothesize" distributions over possible actions given subsequent observed transitions makes me think of "contrastive divergence," the method Hinton and others came up with for unsupervised training of Restricted Boltzmann Machines (RBMs), in the prehistoric era of deep learning.

Given each training sample, an RBM would 1) execute a forward pass, 2) sample its output units, 3) "hypothesize" its input units, 4) execute another forward pass on the "hypothesized" input units to sample new output units, and 5) compute a type of contrastive error for a local weight update. RBMs could be stacked, with output units from one becoming input units for the next one. Hinton called the input units "visible," and the output ones "hidden."

It's not the same, obviously, but the idea of modeling machine-generated inputs (or actions) given outputs (or transitions) has always been appealing. It has a long history.
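
For anyone who hasn't seen an RBM since then, here's one CD-1 step as a minimal numpy sketch (biases omitted for brevity; this is the textbook recipe, not anything from the OP):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden, lr = 6, 4, 0.1
W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))
v0 = rng.integers(0, 2, size=n_visible).astype(float)  # one training sample

h0_prob = sigmoid(v0 @ W)                               # 1) forward pass
h0 = (rng.random(n_hidden) < h0_prob).astype(float)     # 2) sample hidden units
v1_prob = sigmoid(h0 @ W.T)                             # 3) "hypothesize" visibles
h1_prob = sigmoid(v1_prob @ W)                          # 4) second forward pass
W += lr * (np.outer(v0, h0_prob) - np.outer(v1_prob, h1_prob))  # 5) local update
```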

alyxya · 19 days ago
This looks extremely impressive, really deserves more attention here.

Are the inverse dynamics and forward dynamics models trained separately? It sounds like if the inverse dynamics model is meant to extrapolate more training data, then perhaps all that means is it takes very little data to generalize directly with the forward dynamics model assuming the right architecture.

nee1r · 18 days ago
thanks! the inverse dynamics model is trained first on 40k hours of data and then frozen to label all 11 million hours. yup! the idea is that it should take a small amount of data to generalize environment dynamics, then you can use a lot of data to understand actions.
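
in toy form the two stages look something like this (class names are just illustrative stand-ins, nothing like the real training stack):

```python
from dataclasses import dataclass
import random

@dataclass
class InverseDynamicsModel:
    frozen: bool = False
    def fit_step(self, frames, actions): ...   # supervised: infer actions from transitions
    def freeze(self): self.frozen = True
    def label(self, frames):                   # hypothesized actions for raw video
        assert self.frozen
        return [random.choice(["click", "keypress"]) for _ in frames]

@dataclass
class ComputerUseModel:
    steps: int = 0
    def fit_step(self, frames, pseudo_actions): self.steps += 1

def train(labeled_40k, unlabeled_11m):
    idm = InverseDynamicsModel()
    for frames, actions in labeled_40k:        # stage 1: small labeled corpus
        idm.fit_step(frames, actions)
    idm.freeze()                               # frozen before pseudo-labeling
    policy = ComputerUseModel()
    for frames in unlabeled_11m:               # stage 2: behavioral cloning
        policy.fit_step(frames, idm.label(frames))
    return policy
```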
mcint · 17 days ago
Congratulations! I’ll be interested to see the next steps in alignment. Do you plan to start selling access, or collect more data to train bigger & better? What tasks or benchmarks are your biggest guide stars, or what was unexpectedly tricky—a few are hinted in the post.

It would be pretty interesting to see activation maps for the encoder on video; it would build confidence to see the compression derived from so much training.

nee1r · 17 days ago
we have an alignment blog post dropping soon! scaling up in the next couple of months, then hopefully opening up an API or licensing it.

Benchmarks are really fun—lots of secret ones. Our main thesis is that you should use the same benchmarks to measure an AI model's ability to use a computer as you would a human's. Definitely a suite of continuous long-term planning tasks (games) and things such as marking emails as spam, etc.

definitely! we are looking into more interp + visualizations in general as we scale up.

npunt · 17 days ago
The mouse cursor binning special case is starting to look like how animals perceive: we detect patterns, develop predictive models over time of how things are going to act, and that confidence leads us to encode those patterns more deeply for lower energy usage. Obviously the mouse cursor is a hand-rolled example in a controlled 2D environment, but it makes me wonder what efficiencies lie in identifying patterns in 3D environments once you construct an accurate enough 3D scene out of the images you have.

Do you have other examples of special cases you're looking at? Any 3d ones?
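
For concreteness, cursor binning could be as simple as quantizing a continuous screen position into a grid of classes (hypothetical numbers, just to illustrate the idea, not the model's actual scheme):

```python
def bin_cursor(x, y, width=1920, height=1080, nx=64, ny=36):
    """Quantize a continuous (x, y) screen position into one of nx*ny classes."""
    ix = min(int(x / width * nx), nx - 1)
    iy = min(int(y / height * ny), ny - 1)
    return iy * nx + ix

print(bin_cursor(960.0, 540.0))  # screen center -> class 1184 of 2304
```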

theredsix · 17 days ago
This is one of those Hacker News posts that you stumble upon and see two genius ideas within the span of as many paragraphs. Thanks again for sharing the diffusion-based labeling algorithm. Truly demonstrates a mastery and understanding of what diffusion is capable of.
nee1r · 17 days ago
thanks! i definitely love diffusion + pushed for it; as a non-causal generative method, i think it's pretty unique