dang · 15 days ago
This is more interesting and deserves better discussion than we got from the previous title, which was derailed by the "AGI" bit, so I replaced the title with a representative sentence from the video.

(Edit: plus a question mark, as we sometimes do with contentious titles.)

tim333 · 15 days ago
>The era of boundary-breaking advancements is over

Maybe for LLMs, but they are not the only possible algorithm. Only this week we had Genie 3, as in:

>The Surprising Leap in AI: How Genie 3’s World Model Redefines Synthetic Reality https://www.msn.com/en-us/news/technology/the-surprising-lea...

and:

>DeepMind thinks its new Genie 3 world model presents a stepping stone toward AGI https://techcrunch.com/2025/08/05/deepmind-thinks-genie-3-wo...

dgs_sgd · 15 days ago
How different are world models from LLMs? I'm not in the AI space but follow it here. I always assumed they belonged to the same "family" of tech and were more similar than different.

But are they sufficiently different that stalling progress in one doesn't imply stalling progress in the other?

halfcat · 15 days ago
> How different are world models from LLMs?

Depends on whether you’re asking about real world models or synthetic AI world models.

One of them only exists in species with a long evolutionary history of survivorship (and death) over generations living in the world being modeled.

There’s a sense of “what it’s like to be” a thing. That’s still a big question mark in my mind, whether AI will ever have any sense of what it’s like to be human, any more than humans know what it’s like to be a bat or a dolphin.

You know what it’s like for the cool breeze to blow across your face on a nice day. You could try explaining that to a dolphin, assuming we can communicate one day, but it won’t know what that’s like from any amount of words. That seems like something in the area of Neuralink or similar.

tim333 · 15 days ago
There are similarities with that one. From their website:

>It is comprised of a spatiotemporal video tokenizer, an autoregressive dynamics model, and a simple and scalable latent action model.

My point is more that people can try different models and algorithms rather than having to stick to LLMs.
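
To unpack those three components, here's a toy Python sketch of the data flow. Every name, shape, and behavior below is my own guess for illustration; it is not DeepMind's actual Genie 3 design or API:

    # Toy sketch of the three-component world-model pipeline quoted
    # above. All names, shapes, and behaviors are illustrative guesses,
    # not DeepMind's Genie 3 internals.
    import numpy as np

    def tokenize(frame):
        """Spatiotemporal tokenizer: compress a frame into discrete
        tokens (here: crude 4x4 average pooling plus quantization)."""
        pooled = frame.reshape(16, 4, 16, 4).mean(axis=(1, 3))
        return (pooled // 16).astype(np.int32)  # (16, 16) token grid

    def infer_latent_action(tok_t, tok_t1):
        """Latent action model: a small discrete code that 'explains'
        the change from frame t to t+1 (here: a trivial hash)."""
        return int(np.abs(tok_t1 - tok_t).sum()) % 8  # 8 latent actions

    def dynamics_step(tok_t, action):
        """Autoregressive dynamics model: predict the next frame's
        tokens given an action (a shift stands in for a transformer)."""
        return np.roll(tok_t, shift=action, axis=1)

    # Rollout loop -- the part that makes it a "world model": future
    # frames are generated by iterating the dynamics on tokens.
    frame = np.random.rand(64, 64) * 255
    tokens = tokenize(frame)
    for _ in range(5):
        next_tokens = dynamics_step(tokens, action=3)
        a = infer_latent_action(tokens, next_tokens)  # training signal
        tokens = next_tokens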

jononor · 14 days ago
World models are not really useful yet, so they are starting from a lower base than LLMs. That probably means they still have some decent gains to make before things get really hard (diminishing returns).
apwell23 · 15 days ago
Machine Learning Street Talk has an interview with the team.
rossdavidh · 16 days ago
On the one hand, that isn't necessarily a problem. It can just be a useful algorithm for tool calling or whatever.

On the other hand, if you're telling your investors that AGI is about two years away, then you can only do that for a few years. Rumor has it that such claims were made? Hopefully no big investors actually believed that.

The real question to be asking is, based on current applications of LLMs, can one pay for the hardware to sustain it? The comparison to smartphones is apt; by the time we got to the "Samsung Galaxy" phase, where only incremental improvements were coming, the industry was making a profit on each phone sold. Are any of the big LLMs actually profitable yet? And if they are, do they have any way to keep the DeepSeeks of the world from taking it away?

What happens if you built your business on a service that turns out to be hugely expensive to run and not profitable?
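
To make the unit-economics question concrete, here's the back-of-envelope shape of it. Every number below is made up purely for illustration; plug in real figures if you have them:

    # Hypothetical LLM unit economics -- all numbers invented to show
    # the structure of the question, not actual provider figures.
    gpu_cluster_cost_per_hour = 5_000.0  # hardware + power (made up)
    queries_per_hour = 1_000_000         # served load (made up)
    revenue_per_query = 0.004            # subs + API, amortized (made up)

    cost_per_query = gpu_cluster_cost_per_hour / queries_per_hour
    margin_per_query = revenue_per_query - cost_per_query
    print(f"cost/query:   ${cost_per_query:.4f}")     # $0.0050
    print(f"margin/query: ${margin_per_query:+.4f}")  # $-0.0010

    # The smartphone analogy: by the "Samsung Galaxy" phase the margin
    # per unit sold was positive. The open question is whether the
    # margin per query is.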

Salgat · 16 days ago
>On the other hand, if you're telling your investors that AGI is about two years away, then you can only do that for a few years.

Musk has been doing this with autonomous driving since 2015. Machine learning has enough hype surrounding it that you have to embellish to keep up with every other company's ridiculous claims.

ivan_gammel · 16 days ago
I doubt this was the main driver for the investors. People were buying Tesla even without it.

Whether there is hype or not, the laws of money remain the same. If you invest and don’t get the expected returns, you will eventually be concerned and will do something about it.

coolThingsFirst · 16 days ago
Why are companies allowed to lie? I really can’t understand it. If a person lies, they lose credibility, but that doesn’t seem to apply to the rich and powerful.
bloppe · 16 days ago
Lying to investors is illegal, and investors have incentive and means to sue if they think they were defrauded. The problem is proving it. I'm sure a lot of founders genuinely believe AGI is about to appear out of thin air, so they're technically not lying. Even the cynical ones who say whatever they think investors want to hear are hard to catch in a lie. It's not really about being rich and powerful. That's just the unfortunate reality of rhetoric.
orionsbelt · 15 days ago
Predictions about the future and puffery are not illegal. Lying about facts is. Nobody knows how far away AGI is; everyone just has their own predictions.
rossdavidh · 15 days ago
In addition to the other comments/answers to this, I would like to add that if you lie to your investors (in public) and they suspect you're lying but also think it will allow them to cash out before the lie becomes apparent, they may not care, especially if the lie is difficult to distinguish from pathological levels of optimism.
xenotux · 15 days ago
It's not a crime to be wrong; it's only a crime to deliberately lie. And unless there's an email saying "haha we're lying to our investors", it's just not easy to prove.

lif · 15 days ago
So, without sarcasm: how many data centers is this non-happening worth? In other words, what justifies the huge spend?

maxhille · 16 days ago
I mean, there are different definitions of what counts as AGI. Most of the time people don't specify which one they're using.

For me, AGI would mean truly at least human level, as in “this clearly has a consciousness paired with knowledge”, a.k.a. a person. In that case, what do the investors expect? Some sort of slave market of virtual people to exploit?

kbrkbr · 16 days ago
Investors don't use this definition, for one because it contains something you cannot measure yet: consciousness.

How would you find out whether something probably has consciousness, much less clearly has it? And what is consciousness in the first place?

nemomarx · 16 days ago
Being able to make arbitrary duplicates of slaves would be profitable, as long as the energy and compute cost less than salaries, yeah.
seanalltogether · 16 days ago
Do we have a reasonable definition of what intelligence is? Is it like defining porn: you just know it when you see it?
smnrchrds · 14 days ago
OpenAI defines AGI as a "highly autonomous system that outperforms humans at most economically valuable work" [0]. It may not be the most satisfying definition, but it is practical and a good goal to aim for if you are an AI company.

[0] https://openai.com/our-structure/

AndrewDucker · 15 days ago
My personal definition is "The ability to form models from observations and extrapolate from them."

LLMs are great at forming models of language from observations of language and extrapolating language constructs from them. But to get general intelligence we're going to have to let an AI build its models from direct measurements of reality.
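
As a toy illustration of “form a model from observations and extrapolate” in the narrowest sense (my sketch; nothing to do with how LLMs work internally):

    # Fit a model to observed points, then predict outside the
    # observed range -- the minimal version of the definition above.
    import numpy as np

    xs = np.array([0.0, 1.0, 2.0, 3.0])
    ys = np.array([1.0, 3.1, 4.9, 7.2])  # roughly y = 2x + 1

    slope, intercept = np.polyfit(xs, ys, deg=1)  # form the model

    x_new = 10.0                        # a point never observed
    y_new = slope * x_new + intercept   # extrapolate from the model
    print(f"model: y = {slope:.2f}x + {intercept:.2f}")
    print(f"extrapolated y({x_new}) = {y_new:.2f}")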

daveguy · 15 days ago
> LLMs are great at forming models of language

They really aren't even great at forming models of language. They are a single model of language. They don't build models, much less use those models. See, for example, ARC-AGI 1 and 2. They only performed decently on ARC 1 [0] with additional training, and are failing miserably on ARC 2. That's not even getting to ARC 3.

[0] https://arcprize.org/blog/oai-o3-pub-breakthrough

> Note on "tuned": OpenAI shared they trained the o3 we tested on 75% of the Public Training set. They have not shared more details. We have not yet tested the ARC-untrained model to understand how much of the performance is due to ARC-AGI data.

... Clearly not able to reason about the problems without additional training. And there is no indication that the additional training didn't include some feature extraction, scaffolding, RLHF, etc. created by human intelligence. Impressive that fine-tuning can get >85%, but it's still additional human-directed training, not self-contained intelligence at the level of performance reported. The blog was very generous in making the undefined "fine tuning" a footnote and praising the results as if they came directly from a model run that would have cost >$65,000.

Edit: to be clear, I understand LLMs are a huge leap forward in AI research and possibly the first models that can provide useful results across multiple domains without being retrained. But they're still not creating their own models, even of language.

alan-crowe · 16 days ago
LLMs have demonstrated that "intelligence" is a broad umbrella term that covers a variety of very different things.

Think about this story https://news.ycombinator.com/item?id=44845442

Med-Gemini is clearly intelligent, but equally clearly it is an inhuman intelligence with different failure modes from human intelligence.

If we say Med-Gemini is not intelligent, we will end up having to concede later that it actually is. And the danger of that concession is that we will underestimate how different it is from human intelligence and then get caught out by inhuman failures.

pan69 · 15 days ago
> Is it like defining porn

I guess when it comes to the definition of intelligence, just like porn, different people have different levels of tolerance.

Deleted Comment

erikerikson · 16 days ago
One of my favorites is “efficient cross-domain maximization”.
optimalsolver · 15 days ago
Efficient, cross-domain optimization.

I believe that’s Eliezer Yudkowsky’s definition.