Readit News
Flamentono2 commented on John Carmack talk at Upper Bound 2025   twitter.com/ID_AA_Carmack... · Posted by u/tosh
akomtu · 7 months ago
IMO, he's right. LLMs can't be AI because they don't build a model of their observations to predict things; they just imitate observations based on their likeness to each other. When you play Quake, you build a simple model of the game physics and use that fast model to navigate the game. Your equivalent of an LLM has a role too: it's a fuzzy detector of the things you encounter in the game, the sounds, images and symbols, but once detected, those things are fed into the fast and rigid physics model.
Flamentono2 · 7 months ago
Yes, but the LLM could tell the physics system that what it is seeing is physics-related.

"Hey, look, you see a stone falling."
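A toy sketch of that split (my own illustration; the function names and numbers are hypothetical): a fuzzy detector that only labels what it sees, feeding a fast, rigid physics model that does the actual prediction:

    import numpy as np

    def fuzzy_detector(observation: str) -> dict:
        # Stand-in for the LLM-like part: turn a raw observation into a
        # symbolic state the physics model can work with.
        if "stone" in observation and "falling" in observation:
            return {"kind": "projectile",
                    "position": np.array([0.0, 10.0]),
                    "velocity": np.array([0.0, 0.0])}
        return {"kind": "unknown"}

    def physics_model(state: dict, dt: float = 0.1, steps: int = 5) -> list:
        # Fast, rigid part: plain gravity integration to predict where
        # the detected object will be next.
        g = np.array([0.0, -9.81])
        pos, vel = state["position"], state["velocity"]
        trajectory = []
        for _ in range(steps):
            vel = vel + g * dt
            pos = pos + vel * dt
            trajectory.append(pos.copy())
        return trajectory

    state = fuzzy_detector("hey look, you see a stone falling")
    if state["kind"] == "projectile":
        print(physics_model(state))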

Flamentono2 commented on John Carmack talk at Upper Bound 2025   twitter.com/ID_AA_Carmack... · Posted by u/tosh
Flamentono2 · 7 months ago
I find it interesting that he dismisses LLMs.

I would argue that if he wants to do AGI through RL, an LLM could be a perfect teacher or oracle.

After all, I'm not walking around as a human without any guidance. Leveraging an LLM like this should/could make RL a lot faster.

My logical/RL part does need the 'database'/fact part, and my facts try to be as logical as possible, but they just aren't.

Flamentono2 commented on John Carmack talk at Upper Bound 2025   twitter.com/ID_AA_Carmack... · Posted by u/tosh
Flamentono2 · 7 months ago
It's hard to follow what you're trying to communicate, at least in the last half.

Nonetheless, yes, we do know certain brain structures, like your image net analogy, but the way you describe it sounds a little bit off.

Our visual cortex is not 'just a layer'; it's a component, I would say, and it's optimized for detecting things.

Other components act differently and have different structures.

Flamentono2 commented on AI in my plasma physics research didn’t go the way I expected   understandingai.org/p/i-g... · Posted by u/qianli_cs
ktallett · 7 months ago
OK, take transcription: they were trying to use free-as-in-cost tools instead of software that works efficiently and has been effective for decades now.
Flamentono2 · 7 months ago
I've been following transcription software for two decades.

You assume too much...

Flamentono2 commented on AI in my plasma physics research didn’t go the way I expected   understandingai.org/p/i-g... · Posted by u/qianli_cs
nancyminusone · 7 months ago
LLMs are great at tasks that involve written language. If your task does not involve written language, they suck. That's the main limitation. No matter how hard you push, AI is not a 'do everything' machine, which is how it's being hyped.
Flamentono2 · 7 months ago
Written language is apparently very powerful. After all, an LLM can generate SVG, Python code to drive Blender, etc.

One demo I saw of LLM tool use: the prompt was "Generate a small snake game", and because the author still had the Blender MCP tool connected, the LLM decided to generate 3D assets for that game through Blender.
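For a sense of what that looks like, here is a minimal sketch (my own illustration, not from the demo; object names and the export path are made up) of the kind of bpy script an LLM might hand to Blender's bundled Python to produce simple snake-game assets:

    import bpy

    # Start from an empty scene
    bpy.ops.object.select_all(action='SELECT')
    bpy.ops.object.delete()

    # Sphere for the snake's head
    bpy.ops.mesh.primitive_uv_sphere_add(radius=0.5, location=(0, 0, 0.5))
    bpy.context.active_object.name = "SnakeHead"

    # A few cubes as body segments trailing behind the head
    for i in range(1, 4):
        bpy.ops.mesh.primitive_cube_add(size=0.8, location=(-i, 0, 0.4))
        bpy.context.active_object.name = f"SnakeSegment_{i}"

    # Export the assets as glTF so the game can load them
    bpy.ops.export_scene.gltf(filepath="snake_assets.glb")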

Flamentono2 commented on AI in my plasma physics research didn’t go the way I expected   understandingai.org/p/i-g... · Posted by u/qianli_cs
lossolo · 7 months ago
> It's a paradigm shift for the whole world, literally.

That's hyperbolic. I use LLMs daily. They speed up tasks you'd normally use Google for and can extrapolate existing code into other languages. They boost productivity for professionals, but it's not like the discovery of the steam engine or electricity.

> And what limitations are obvious? Tell me? We have not reached any real ceiling yet.

Scaling parameters is the most obvious limitation of the current LLM architecture (transformers). That's why what should have been called GPT-5 was instead named GPT-4.5: it isn't significantly better than the previous model despite having far more parameters, much cleaner training data, and further optimizations.

The low-hanging fruit has already been picked, and the most obvious optimizations have been implemented. As a result, almost all leading LLM companies are now operating at a similar level. There hasn't been a real breakthrough in over two years, and the last huge architectural breakthrough was in 2017 (the paper "Attention Is All You Need").

Scaling at this point yields only diminishing returns. So no, what you're saying isn't accurate; the ceiling is clearly visible now.

Flamentono2 · 7 months ago
> ... but it's not like the discovery of the steam engine or electricity.

Completely disagree. People might have googled before, but the human<>computer interface was never anywhere near as accessible for a normal human being as it is now. Can I use Photoshop? Yes, but I had to learn it. My sisters played around with DALL-E and are now able to do similar things.

It might feel boring to you that technology accessibility trickles down like this, but it changes a lot for a lot of people. The entry barrier to everything got much lower. It makes a huge difference to you as a human being whether you have rich parents and good teachers or not. Before, you never had the chance to just get help like this. Millions of kids struggle because they don't have parents they can ask the questions needed to understand topics in school.

Steam engine = fundamental for our scaling economy

Electricity = fundamental for liberating all of us from depending on daylight

Internet = interconnecting all of us

LLM/ML/AI = liberating knowledge through accessibility

> There hasn’t been a real breakthrough in over two years.

DeepSeek alone was a real breakthrough.

But let me ask an LLM about this:

- Mixture of Experts (MoE) scaling

- Long-context handling

- Multimodal capabilities

- Tool use & agentic reasoning

Funnily enough, your comment came just before the Claude 4.0 release (again an increase in performance, etc.) and Google I/O.

We don't know if we've found all the 'low-hanging fruit'. The Meta paper about thinking in latent space came out in February; I would definitely call that a low-hanging fruit.

We are limited, very hard, by infrastructure. Every experiment you want to try consumes a lot of it. If you look at the top GPU AI clusters, we don't have that many on the planet: Google, Microsoft/Azure, Nvidia, Baidu, Tesla, xAI, Cerebras. Not that many researchers are able to just work on this.

Google now has its first diffusion-based model active, in 2025! We are so far from having tried out all the possible approaches, architectures, etc. And we are optimizing on every front: cost, speed, precision, etc.

Flamentono2 commented on Watching AI drive Microsoft employees insane   old.reddit.com/r/Experien... · Posted by u/laiysb
Aldipower · 7 months ago
Fair point, if there hadn't been so many annoying and false promises before.
Flamentono2 · 7 months ago
Not sure what promises you heard. For me, a lot of them came true.

I created images and music, which was enjoyable. I use it to make progress on an indie side project I'm playing around with (I added more functionality to it with AI tools like Claude Code and now jules.google than I did myself in the last three years).

It helps my juniors get better at their jobs.

Everything related to sound and talking to a computer is now solved. I talked to Gemini yesterday and interrupted it.

Image segmentation became a solved problem, and that was really hard before.

I could continue this list of things AI/ML has made possible in the last few years that were impossible before.

Flamentono2 commented on Veo 3 and Imagen 4, and a new tool for filmmaking called Flow   blog.google/technology/ai... · Posted by u/youssefarizk
bamboozled · 7 months ago
> Existence is what it is. If it means being able to watch cat videos, so be it. We are not watching them for nothing, we watch them for happiness.

Well, that's just your opinion.

Yes, we can generate electricity, but it would be nice if we used it wisely.

Flamentono2 · 7 months ago
Of course it's my opinion; it's my comment after all.

Nonetheless, survival can't be the goal of life: after all, the moon will drift away from the Earth in the future, the sun will die, and even if we survive that as a species, eventually all bonds between elements will dissolve.

It also can't be about passing on your DNA, because your DNA has little to no impact after just a handful of generations.

And no, the goal of our society has to be to have as much energy available to us as possible. So much energy that energy doesn't matter. There are enough ways of generating energy without any real issue: fusion, renewable energy directly from the sun.

There is also no inherent issue right now preventing us all from having clean, stable energy besides capitalism. We have the technology, the resources, and the manufacturing capacity.

To finish my comment: it's not about energy, it's about entropy. You need energy to create entropy. We don't really consume the sun's energy; we use it and dissipate it back to space afterwards as higher-entropy radiation.
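A back-of-the-envelope version of that point (standard textbook numbers, my addition, not from the thread): Earth absorbs and re-radiates roughly the same power P, but the incoming sunlight carries far less entropy than the outgoing infrared:

$$ \dot S_{\mathrm{in}} \approx \frac{P}{T_{\mathrm{sun}}}, \qquad \dot S_{\mathrm{out}} \approx \frac{P}{T_{\mathrm{earth}}}, \qquad T_{\mathrm{sun}} \approx 5800\,\mathrm{K} \gg T_{\mathrm{earth}} \approx 255\,\mathrm{K} \;\Rightarrow\; \dot S_{\mathrm{out}} \gg \dot S_{\mathrm{in}}. $$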

Flamentono2 commented on Veo 3 and Imagen 4, and a new tool for filmmaking called Flow   blog.google/technology/ai... · Posted by u/youssefarizk
afroboy · 7 months ago
> What's the elephant in the room now?

Your family will be a target, for example. Just imagine your daughter in high school getting bullied with these kinds of AI-generated videos. It's easy to say nothing will happen, but when it happens to you, you'll realize how fucked these AI videos are.

Flamentono2 · 7 months ago
If someone bullies someone else, they will do it with anything they have.

At least with AI video, you can now always say it's an AI video.

Is it shitty that this is possible? Yes, of course. But hiding knowledge never works.

We have to deal with it as adults. We need to educate about it and we need to talk about it.

Flamentono2 commented on Veo 3 and Imagen 4, and a new tool for filmmaking called Flow   blog.google/technology/ai... · Posted by u/youssefarizk
afroboy · 7 months ago
Can we talk about the elephant in the room: porn, and I mean the weird and dangerous kind? That moment in the history of AI is going to happen, and when it does, shit will hit the fan.
Flamentono2 · 7 months ago
AI porn already exists.

I'm pretty sure AI-generated child porn already exists somewhere. But I'm quite lucky: despite knowing rotten.com and plenty of other sites, I've never seen the real thing, so I doubt I'll see the fake kind.

What's the elephant in the room now? Nothing has changed. Whoever consumes the real thing will consume the fake too. The FBI/CIA will still try to destroy CP rings.

We could even argue it might make the situation somewhat better, because they might consume purely virtual CP instead?
