Readit News
justcallmejm commented on AGI is an engineering problem, not a model training problem   vincirufus.com/posts/agi-... · Posted by u/vincirufus
andy99 · 20 hours ago
If you believe the bitter lesson, all the handwavy "engineering" is better done with more data. Someone could likely have written the same thing eight years ago about what it would take to reach current LLM performance.

So I don't buy the engineering angle, but I also don't think LLMs will scale up to AGI as imagined by Asimov or any of the usual sci-fi tropes. There is something more fundamental missing, as in missing science, not missing engineering.

justcallmejm · 17 hours ago
The missing science to engineer intelligence is composable program synthesis. Aloe (https://aloe.inc) recently released a GAIA score demonstrating how CPS dramatically outperforms other generalist agents (OpenAI's deep research, Manus, and Genspark) on tasks similar to those a knowledge worker would perform.

I'd argue it's because intelligence has been treated as an ML/NN engineering problem that we've had the hyperfocus on improving LLMs rather than the approach articulated in the essay.

Intelligence must be built from a first principles theory of what intelligence actually is.
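The comment above names composable program synthesis but doesn't show what "composable" means in practice. A minimal, hypothetical sketch (not Aloe's actual system; the `synthesize` function and spec format are invented for illustration): small verified programs live in a library, and new programs are built by composing existing ones, with each successful composition becoming a reusable part.

```python
# Hypothetical sketch of composable program synthesis (CPS): small verified
# programs are stored in a library and composed to solve larger tasks.
# Function names and the spec format are illustrative, not Aloe's actual API.

def synthesize(spec, library):
    """Return a program satisfying spec, trying compositions of known parts."""
    for f in library:
        if spec["test"](f):
            return f
    # Compose pairs of existing programs and check the spec again.
    for f in library:
        for g in library:
            h = lambda x, f=f, g=g: f(g(x))
            if spec["test"](h):
                library.append(h)  # the new program becomes a reusable part
                return h
    return None

library = [lambda x: x + 1, lambda x: x * 2]
spec = {"test": lambda p: p(3) == 8}   # want a program computing 2*(x+1)
prog = synthesize(spec, library)       # found by composing double(inc(x))
```

The key property is the last line of the inner loop: a synthesized composition is appended to the library, so later syntheses can build on it.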

justcallmejm commented on A Comprehensive Survey of Self-Evolving AI Agents [pdf]   arxiv.org/abs/2508.07407... · Posted by u/SerCe
justcallmejm · 10 days ago
Missing from this paper: Aloe, a self-evolving agent that creates its own tools in real time as it encounters new problems. It can then use these tools to create still-better tools.

It just beat OpenAI by 20 points on GAIA – interestingly by the widest margin (30 points) on the hardest questions.
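The idea of an agent that "creates its own tools in real time" and then uses them to build still-better tools can be sketched in a few lines. This is a hedged illustration only (the class, the `eval`-free `exec` approach, and the example tools are invented, not Aloe's implementation; a real system would sandbox generated code):

```python
# Hypothetical sketch of an agent that compiles model-generated source into
# callable tools at runtime, where new tools may call previously built ones.

class ToolBuildingAgent:
    def __init__(self):
        self.tools = {}

    def build_tool(self, name, source):
        """Compile generated source into a callable and register it."""
        namespace = {**self.tools}      # new tools can see older tools
        exec(source, namespace)
        self.tools[name] = namespace[name]
        return self.tools[name]

agent = ToolBuildingAgent()
agent.build_tool("square", "def square(x):\n    return x * x")
# A later tool builds on the first one:
agent.build_tool("sum_sq",
                 "def sum_sq(xs):\n    return sum(square(x) for x in xs)")
```

Because each tool's namespace includes everything built before it, `sum_sq` can call `square`: tools compounding into better tools.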

justcallmejm commented on Is the A.I. Boom Turning Into an A.I. Bubble?   newyorker.com/news/the-fi... · Posted by u/FinnLobsien
justcallmejm · 12 days ago
"In the A.I. economy, it seems possible that many of the rewards will go to top firms that can afford to build and maintain large A.I. models..."

This is a flawed (and common, still, somehow!) idea of where value is within AI. The gigantic models are commoditizing rapidly. They practically incinerate cash.

There is real value being created at the layer where users can use a product they can actually trust. LLMs are certainly not that.

Our system, Aloe, is model agnostic - so when next month's model comes out we just get better. And we already beat the pants off the frontier models in capability, spending a tiny fraction of a percent of the capital to build the system.

The existing concentration of capital into big LLM companies is indeed a bubble. Eventually some of those VCs will wake up and invest where the value is being created, I presume.

justcallmejm commented on GPT-5: Overdue, overhyped and underwhelming. And that's not the worst of it   garymarcus.substack.com/p... · Posted by u/kgwgk
reilly3000 · 15 days ago
I feel his need to be right distracts from the fact that he is. It’s interesting to think about what a hybrid symbolic/transformer system could be. In a linked post he showed that delegating math to Python is what made Grok 4 so successful at it. I’d personally like to see more of what a symbolic-first system would look like, effectively hard math with monads for where inference is needed.
justcallmejm · 15 days ago
Aloe's neurosymbolic system just beat OpenAI's deep research score on the GAIA benchmark by 20 points. While Gary is full of bluster, he does know a few things about the limitations of LLMs. :) (aloe.inc)
justcallmejm commented on GPT-5: Overdue, overhyped and underwhelming. And that's not the worst of it   garymarcus.substack.com/p... · Posted by u/kgwgk
malloryerik · 15 days ago
Humans have a direct connection to our world through sensation and valence, pleasure, pain, then fear, hope, desire, up to love. Our consciousness is animal and as much or more pre-linguistic as linguistic. This grounds our symbolic language and is what attaches it to real life. We can feel instantly that we know or don't know. Yes we make errors and hallucinate, but I'm not going to make up an API out of the blue; I'll know by feeling that what I'm doing is mistaken.
justcallmejm · 15 days ago
Perception and understanding are different things. Just because you have wiring in your body to perceive certain vibrations in spacetime in certain ways does not mean that you fully grasp reality: you have some data about reality, but that data comprises an incomplete, human-biased world model.
justcallmejm commented on GPT-5   openai.com/gpt-5/... · Posted by u/rd
beeflet · 17 days ago
Perhaps it is not possible to simulate higher-level intelligence using a stochastic model for predicting text.

I am not an AI researcher, but I have friends who do work in the field, and they are not worried about LLM-based AGI because of the diminishing returns on results vs amount of training data required. Maybe this is the bottleneck.

Human intelligence is markedly different from LLMs: it requires far fewer examples to train on and generalizes far better, whereas LLMs tend to regurgitate solutions to solved problems whose solutions are well published in the training data.

That being said, AGI is not a necessary requirement for AI to be totally world-changing. There are possibly applications of existing AI/ML/SL technology which could be more impactful than general intelligence. Search is one example where the ability to regurgitate knowledge from many domains is desirable.

justcallmejm · 17 days ago
It is definitively not possible. But the frontier models are no longer “just” LLMs, either. They are neurosymbolic systems (an LLM using tools); they just don’t say so transparently, because it’s an inconvenient narrative that intelligence comes from something outside the model rather than from endless scaling.

At Aloe, we are model agnostic and outperforming frontier models. It’s the architecture around the LLM that makes the difference. For instance, our system using Gemini can do things that Gemini can’t do on its own. All an LLM will ever do is hallucinate. If you want something with human-like general intelligence, keep looking beyond LLMs.
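"Model agnostic" has a concrete architectural meaning: the agent depends only on a `prompt -> text` callable, so swapping the underlying LLM touches one line. A hedged sketch under that assumption (the `Agent` class and stand-in backends are invented; no real client code is shown):

```python
# Sketch of model agnosticism: the agent depends only on a complete(prompt)
# callable, so the underlying model can be swapped without changing the agent.

class Agent:
    def __init__(self, complete):
        self.complete = complete    # any callable: prompt -> text

    def run(self, task):
        plan = self.complete(f"Plan steps for: {task}")
        return self.complete(f"Execute this plan: {plan}")

# Stand-in backends; a real system would wrap Gemini, GPT, etc. here.
backend_a = lambda p: f"[model-A] {p}"
backend_b = lambda p: f"[model-B] {p}"

agent = Agent(backend_a)
out_a = agent.run("summarize a report")
agent.complete = backend_b          # swap models without touching the agent
out_b = agent.run("summarize a report")
```

This is why such a system "just gets better" when next month's model ships: the scaffolding is the product, and the model is a replaceable dependency.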

justcallmejm commented on Attention is your scarcest resource (2020)   benkuhn.net/attention/... · Posted by u/jxmorris12
justcallmejm · 24 days ago
This is precisely why we (two longtime Vipassana meditators) built Aloe. Attention is the precursor to agency. It’s all we’ve got. In today’s over-saturated information environment we need superhuman attention if we’re going to have any agency left.

u/justcallmejm

Karma: 25 · Cake day: February 1, 2025