ctoth · 7 months ago
Do you want paperclips? Because this is how you get paperclips!

Eliminate all agents, all sources of change, all complexity - anything that could introduce unpredictability, and it suddenly becomes far easier to predict the future, no?

JoshTriplett · 7 months ago
> Do you want paperclips? Because this is how you get paperclips!

Don't^W worry, there are many other ways of getting paperclips, and we're doing all of them.

sitkack · 7 months ago
Even explaining how not to get paper clips gets you paper clips when you can invert the loss function. Paper clips for everyone!
vlovich123 · 7 months ago
I don't know. Paperclips are awfully useful. Would it be so bad to build more of them?
Ygg2 · 7 months ago
That's all fun and games until the paperclip maximizer starts looking at your blood as a source of iron.
valine · 7 months ago
So instead of next-token prediction it's next-event prediction. At some point this just loops around and we're back to teaching models to predict the next token in the sequence.
lumost · 7 months ago
Tokens are an awfully convenient way to describe an event.
phyalow · 7 months ago
Tokens are just discretized state representations.

Deleted Comment

ww520 · 7 months ago
It’s the next state. So instead of spitting out words, it will spit out a whole movie, or a sequence of world states in a game or simulation.
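(A minimal sketch of that reduction, not from the paper: uniformly quantize each continuous state into integer bins, and a "next event" is just the next run of tokens for an autoregressive model to predict. The value range, bin count, and toy trajectory below are all invented for illustration.)

    import numpy as np

    N_BINS = 256  # assumed per-dimension vocabulary size

    def state_to_tokens(state, low=-1.0, high=1.0, n_bins=N_BINS):
        """Uniformly quantize a continuous state vector into integer token IDs."""
        clipped = np.clip(state, low, high)
        return (((clipped - low) / (high - low)) * (n_bins - 1)).astype(int).tolist()

    def tokens_to_state(tokens, low=-1.0, high=1.0, n_bins=N_BINS):
        """Invert the quantization back to an approximate continuous state."""
        return low + np.asarray(tokens, dtype=float) / (n_bins - 1) * (high - low)

    # A "next event" is then just the next run of tokens in the stream,
    # which is exactly what an autoregressive model is trained to predict.
    trajectory = [np.random.uniform(-1, 1, size=4) for _ in range(3)]
    token_stream = [t for s in trajectory for t in state_to_tokens(s)]
    print(token_stream)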
jldugger · 7 months ago
From the abstract

> A simple trading rule turns this calibration edge into $127 of hypothetical profit versus $92 for o1 (p = 0.037).

I'm lazy: is this hypothetical shooting fish in a barrel, or is it a real edge?
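(For context, one hedged guess at what such a "simple trading rule" could look like on $1-payout binary contracts; the paper's actual rule isn't quoted above, so the margin, prices, and outcomes below are invented.)

    def trade(p_model, p_market, margin=0.05):
        """Buy YES if the model thinks the market is underpriced, NO if overpriced."""
        if p_model - p_market > margin:
            return "YES"
        if p_market - p_model > margin:
            return "NO"
        return None  # no edge, no trade

    def settle(side, p_market, outcome):
        """Profit on a $1-payout contract: pay the price, collect $1 if right."""
        if side == "YES":
            return (1.0 if outcome else 0.0) - p_market
        if side == "NO":
            return (0.0 if outcome else 1.0) - (1.0 - p_market)
        return 0.0

    # Toy example: (model probability, market price, realized outcome).
    markets = [(0.70, 0.55, True), (0.20, 0.40, False), (0.50, 0.52, True)]
    total = sum(settle(trade(pm, px), px, y) for pm, px, y in markets)
    print(f"hypothetical P&L: ${total:.2f}")

The "calibration edge" is the whole trick: the expected profit per contract is roughly p_model - p_market, so it's only positive if the model's probabilities beat the market's implied odds.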

nyrikki · 7 months ago
Note the 'hypothetical profit' part. I know of several groups looking for opportunities to skim off LLM traders, exploiting their limited sensitivity, limited expressiveness, and loss of tail data.

Predictive AI is problematic no matter what tool you use; it's great at demoware that doesn't deliver.

I am sure there are use cases, but as augmentation, not as a reliable approach on its own.

amelius · 7 months ago
Why would you use RL if you're not going to control the environment, but just predict it?
TOMDM · 7 months ago
Because they're training a predictor, not an agent?
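(One way to read that, purely a sketch of the idea rather than the paper's method: if the "action" is a forecast and the reward is a proper scoring rule such as the log score, reward maximization against a passive environment reduces to fitting a calibrated predictor. The 70% event rate and learning rate below are made up.)

    import math
    import random

    theta = 0.0  # logit of the predicted probability

    def forecast(theta):
        return 1.0 / (1.0 + math.exp(-theta))

    lr = 0.1
    for _ in range(5000):
        outcome = 1 if random.random() < 0.7 else 0  # true event rate: 70%
        p = forecast(theta)
        # Reward = log score; its gradient w.r.t. theta is (outcome - p),
        # so gradient ascent on the reward is just calibrated fitting.
        theta += lr * (outcome - p)

    print(f"learned probability: {forecast(theta):.2f}")  # converges near 0.70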
garbagecoder · 7 months ago
"a couple of wavy lines"

bzzzzz "sorry this isn't your lucky day"