The self-edit approach is clever - using RL to optimize how models restructure information for their own learning. The key insight is that different representations work better for different types of knowledge, just like how humans take notes differently for math vs history.
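For anyone skimming, a toy sketch of the outer loop as I understand it; every function here is a made-up placeholder (stubbed so the snippet runs), not the paper's actual API:

    # Toy sketch of the self-edit loop: sample candidate "self-edits" (restructured
    # versions of a document), fine-tune on each, and use downstream QA accuracy as
    # the reward that teaches the model which formats work. All stubs are placeholders.
    import random

    def generate_self_edit(document, seed):
        random.seed(seed)
        style = random.choice(["qa_pairs", "implications", "bullet_notes"])
        return f"{style}::{document[:30]}"

    def finetune_and_eval(self_edit):
        # Stand-in for: fine-tune a copy of the model on self_edit, then score it
        # on held-out questions about the document (the expensive 30-45 s step).
        return 0.4 + 0.2 * ("qa_pairs" in self_edit) + random.uniform(0, 0.05)

    document = "Some long technical document about a new API ..."
    candidates = [generate_self_edit(document, s) for s in range(4)]
    rewards = [finetune_and_eval(c) for c in candidates]
    best = candidates[rewards.index(max(rewards))]
    # RL step (not shown): reinforce the policy toward self-edits like `best`.
    print("best format:", best.split("::")[0], "reward:", round(max(rewards), 3))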
Two things that stand out:
- The knowledge incorporation results (47% vs 46.3% with GPT-4.1 data, both much higher than the small-model baseline) show the model does discover better training formats, not just more data. That said, the catastrophic forgetting problem remains unsolved, and it's not entirely clear whether data diversity actually improves.
- The computational overhead is brutal - 30-45 seconds per reward evaluation makes this impractical for most use cases. But for high-value document processing where you really need optimal retention, it could be worth it.
The restriction to tasks with explicit evaluation metrics is the main limitation. You need ground truth Q&A pairs or test cases to compute rewards. Still, for domains like technical documentation or educational content where you can generate evaluations, this could significantly improve how we process new information.
Feels like an important step toward models that can adapt their own learning strategies, even if we're not quite at the "continuously self-improving agent" stage yet.
Two close friends of mine, math prodigies who went into ML very early (mid-2010s), were always talking to me about an algorithm that sounds similar to this:
"NEAT/HyperNEAT" (Neuroevolution of Augmented Topologies) [0]
I'm no ML practitioner, but as I understood it, the primary difference between NEAT and what is described in this paper is that while NEAT evolves the topology of the network, this paper seems to evolve the weights.
Seems like two approaches trying to solve the same problem -- one evolving the network structure, and the other the weights.
Those 2 friends are quite possibly the most intelligent people I've ever met, and they were very convinced that RL and evolutionary algorithms were the path forward in ML.
[0] https://en.wikipedia.org/wiki/Neuroevolution_of_augmenting_t...
SethBling's MarI/O - Machine Learning for Video Games
https://www.youtube.com/watch?v=qv6UVOQ0F44
Humans are amazing: we built a hypothetical computing system while trying to understand neurons, then found out that's not really how they work, but whatever, we built paradigm-shifting tech around it anyway. And we're still enhancing it with ideas from that imaginary system.
Since we lack the knowledge and the means to build the real thing, this is what we have to go with for now. It seems obvious that the industry goes with whatever is available, though all the uninformed hype from people who think it works like the brain is certainly annoying.
I just got sucked into this idea recently! After some success using genetic algorithms to clone voices for Kokoro, I wondered if it would be possible to evolve architectures. I'm very interested in the idea of self-assembled intelligence, but I do wonder how it can be made feasible. A hybrid approach like this might be for the best, given how LLMs have turned out.
The issue with genetic algorithms / genetic programming is that you need a good way to handle the path the population takes. It's more like reinforcement than supervised learning: in deep learning you fit y = f(x), where f() is what the NN computes and the (x, y) pairs are the training data.
Finding a good scoring algorithm is hard, as it is so easy for a GA to cheat...
Source: experience
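A tiny made-up example of that failure mode (nothing to do with the paper): if the scoring function only measures a proxy, the GA will happily optimize the proxy and nothing else:

    # Toy GA illustrating how easy it is to "cheat" a scoring function.
    # Intended target: match all 20 bits of TARGET. But the fitness below only
    # scores 4 sampled positions, so the population optimizes just those and the
    # 16 unchecked bits simply drift at random.
    import random

    TARGET = [1] * 20
    CHECKED = [0, 5, 10, 15]          # the only positions the "eval" looks at

    def fitness(genome):
        return sum(genome[i] == TARGET[i] for i in CHECKED)

    def mutate(genome, rate=0.05):
        return [b ^ 1 if random.random() < rate else b for b in genome]

    pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
    for _ in range(200):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:10]
        pop = [mutate(random.choice(parents)) for _ in range(50)]

    best = max(pop, key=fitness)
    print("proxy fitness:", fitness(best), "/ 4")
    print("true matches :", sum(b == t for b, t in zip(best, TARGET)), "/ 20")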
"when assessed by Claude 3.5 Sonnet’s production-grade RM, our unsupervised assistant policy wins 60% of head-to-head comparisons against the policy trained with the human-supervised RM." So now the models can even post-train the new models better than a human can
Every top model in ARC-AGI used a test-time fine-tuning approach. They had one example pair, though, and would usually do transformations of it (color, mirroring, etc.) for the fine-tuning, and that might have been coded by hand.
I wonder if anyone who’s really in the know could summarize where the research is at with getting LLMs to learn “on the job” (through continuous fine tuning or whatever) and what the blockers are to this being a useful deployable thing, e.g. having a model+coding agent that can actually learn a codebase over time (cost? model collapse? something else?).
I'm sure this is something the big labs are trying, but from the outside, as a user of LLMs, it feels like people don't talk about this very much; instead the focus right now is on better training (e.g. reinforcement learning), with the assumption that anything not learned during training will be stuffed into the context somehow as needed. But from a naive perspective, the lack of learning from experience after training seems like the biggest thing standing between us and AGI.
Many people here are right: compute, collapse, forgetting, whatever.
The only "real" way to do this would be:
1. Train a model
2. New data
3. Retrain the model in full + new data
4. Repeat
5. You still have no guarantee on the "time" aspect, though (a toy sketch of this loop follows below).
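A toy sketch of that loop, with train_from_scratch as a hypothetical stand-in for a full training run:

    # The "honest" continual-learning loop: keep every batch of data and retrain
    # from scratch whenever new data arrives. Correct, but the cost grows with
    # every iteration, and nothing here captures when each fact was true.
    def train_from_scratch(corpus):
        return {"model": "retrained", "docs_seen": len(corpus)}   # placeholder

    corpus, model = [], None
    for new_batch in (["2023 docs"], ["2024 docs"], ["2025 docs"]):
        corpus += new_batch                   # 2. new data
        model = train_from_scratch(corpus)    # 3. retrain in full + new data
    print(model)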
But continual learning (CL) as a field basically has zero answers on how to do this in a true sense. It's crazy hard because the "solutions" are self-contradictory in many ways.
We need to expand the model's representation space while keeping the previous representation space nearly the same?
Basically, you need to modify it without changing it.
Most annoying is that even the smallest of natural brains do this easily. I have a long-winded theory, but basically it boils down to this: AI likely needs to "sleep" or rest somehow.
The cool thing about AI that I'm seeing as an outsider/non-academic is that it's relatively cheap to clone. Sleeping/resting could be done by a "clone" and the benefits could be distributed on a rolling schedule, right?
You should look into LoRA; it's a partial retraining method that doesn't require nearly as much compute as retraining the whole model. It's different from what this paper is suggesting, though: the self-improvement in this paper even sets the rules for the improvements, basically creating new training data out of what it already has.
LoRA paper: https://arxiv.org/abs/2106.09685
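For anyone curious what "partial retraining" means mechanically, a minimal sketch of the LoRA idea in PyTorch (my illustration, not the paper's code): freeze the pretrained weight and learn only a low-rank correction.

    # LoRA sketch: keep the base weight W frozen and learn a low-rank update B @ A,
    # so only r*(d_in + d_out) parameters are trained instead of d_in*d_out.
    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False                      # frozen pretrained weights
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
            self.scale = alpha / r

        def forward(self, x):
            return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

    layer = LoRALinear(nn.Linear(4096, 4096), r=8)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(f"trainable params: {trainable:,} vs {4096*4096:,} in the full weight matrix")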
This only seems to be the case with the current crop of models. "Online learning" is the term for keeping deployed models learning, and it has been around for more basic models for a long time.
But natural brains sleep too, which I guess is your point. Actually, is it even clear in human brains how much of the neural compute goes to inference vs training? Maybe the brain is, say, capable of running a 20T-parameter model's worth of compute while deploying something like a 2B model at any given time, with most of the compute training new models in the background. As you say, we have no idea how to do this except by training from scratch, but if we are operating far below our compute capacity we could actually train from scratch repeatedly (the xAI cluster could probably train a GPT-4o-sized model in a matter of hours).
1. Preventing collapse -> the model gets "full": https://arxiv.org/pdf/1612.00796 (rough sketch of the idea below)
2. Forgetting causes better generalization: https://arxiv.org/abs/2307.01163
3. An unknown paper that connects the two:
- would allow a "forgetting" model that improves generalization over time.
- I tried for a long time to build this, but it's quite difficult.
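If I'm reading that first link right, it's the elastic weight consolidation (EWC) paper; a rough sketch of its penalty term, my paraphrase with made-up names:

    # EWC-style penalty: discourage moving parameters that mattered for old tasks,
    # weighted by a (diagonal) Fisher-information estimate of their importance.
    import torch
    import torch.nn as nn

    def ewc_penalty(model, old_params, fisher, lam=1000.0):
        total = torch.tensor(0.0)
        for name, p in model.named_parameters():
            total = total + (fisher[name] * (p - old_params[name]) ** 2).sum()
        return 0.5 * lam * total

    # toy usage on a tiny model
    net = nn.Linear(4, 2)
    old = {n: p.detach().clone() for n, p in net.named_parameters()}
    fisher = {n: torch.ones_like(p) for n, p in net.named_parameters()}  # placeholder importance
    # total_loss = task_loss_on_new_data + ewc_penalty(net, old, fisher)
    print(ewc_penalty(net, old, fisher))    # zero before any parameters move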
A fun implication: if true, this implies AGI will need "breaks" and will likely need to consume non-task content of high variety, much like a person does.
I'm no expert, but I'd imagine privacy plays (or should play) a big role in this. I'd expect that compute costs mean any learning would have to be in aggregate rather than specific to the user, which would then very likely risk leaking information across sessions.
I completely agree that figuring out a safe way to continually train feels like the biggest blocker to AGI
The real answer is that nobody trusts their automated evals enough to be confident that any given automatically-trained release actually improves performance, even if eval scores go up. So for now everyone batches up updates and vibe-checks them before rolling them out.
The most obvious problem is alignment. LLM finetuning is already known to be able to get rid of alignment, so any form of continuous fine tuning would in theory be able to as well.
Is that necessarily a blocker? As others in this thread have pointed out, this probably becomes possible only once sufficient compute is available for some form of non-public retraining, at the individual user level. In that case (and hand-waving away just how far off that is), does a model need to retain its generality?
Hypothetically (and perhaps more plausibly), a continually learning model that adapts to the context of a particular org / company / codebase / etc. could even be desirable.
There are tons of continual-learning benchmarks around this that you can easily run with one GPU.
It's a compute problem only in the sense that the only real way to do it is to retrain the model from scratch at every step.
If you solve CL with a CNN, you've just created AGI.
Hmm, it looks like it's just a framework that fine-tunes a LoRA adapter and then merges the adapter into the original model. It uses PeftModel and its "merge_and_unload" from the Hugging Face PEFT library, which performs the adapter merge into the base model... what is new here, exactly?
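For reference, the workflow being described looks roughly like this; the model name and paths are placeholders, and this is the generic PEFT pattern, not the repo's exact code:

    # Generic LoRA-then-merge pattern with HF transformers + peft:
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("some-base-model")      # placeholder name
    lora = get_peft_model(base, LoraConfig(r=16, lora_alpha=32,
                                           target_modules=["q_proj", "v_proj"],
                                           task_type="CAUSAL_LM"))
    # ... fine-tune `lora` on the self-generated data here ...
    merged = lora.merge_and_unload()         # fold the adapter deltas into the base weights
    merged.save_pretrained("updated-model")  # the "new" base model for the next round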
Looks like it may be the stability of the approach, avoiding alignment tax and model collapse.
I'd love to see a full circle of hypernetworks, with both models continuously updated through generated LoRAs and the hypernetwork updated to accommodate the new model state. You'd need a meta-hypernetwork to apply LoRAs to the hypernetwork, and then you could effectively have continuous learning.
> Large language models (LLMs) are powerful but static; they lack mechanisms to adapt their weights in response to new tasks
The learning and inference processes are entirely separate, which is very confusing to people familiar with traditional notions of human intelligence. For humans, learning things and applying that knowledge in the real world is one integrated feedback process. Not so with LLMs: we train them, deploy them, and discard them for a new model that has "learned" slightly more. For an LLM, inference is the end of learning.
Probably the biggest misconception out there about AI. If you think LLMs are learning, it's easy to fantasize that AGI is right around the corner.
Everything I've read in the last 5 months says otherwise. It's probably best described by the Apple ML group's paper called The Illusion of Thinking. It empirically works, but the explanation could just be that making the stochastic parrot squawk longer yields a better response.
In any case, this is a far cry from what I was discussing. At best, this shows an ability for LLMs to "learn" within the context window, which should already be somewhat obvious (that's what the attention mechanism does). There is no global knowledge base or weight updates. Not until the content gets published, rescraped, and trained into the next version. This does demonstrate a learning feedback loop, albeit one that takes months or years, driven by external forces - the company that trains it. But it's way too slow to be considered intelligent, and it can't learn on its own without help.
A system that truly learned, i.e. incorporated empirical data from its environment into its model of the world, would need to do this in millisecond time frames. Single-celled organisms can do this. Where you at, AGI?
What if you could check whether the user responds positively or negatively to the output, and then train the LLM on the input it got and the output it produced?
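A toy sketch of that idea; everything here is hypothetical, and a real system would also have to deal with noisy or sycophantic feedback and with privacy:

    # Log (prompt, response, reaction) triples and periodically fine-tune only on
    # the interactions the user reacted to positively -- a crude rejection-sampling
    # style of learning from feedback.
    interactions = [
        {"prompt": "fix this bug ...", "response": "try X",   "reaction": +1},
        {"prompt": "summarize ...",    "response": "blah",    "reaction": -1},
        {"prompt": "write a test ...", "response": "def ...", "reaction": +1},
    ]

    sft_data = [(ex["prompt"], ex["response"])
                for ex in interactions if ex["reaction"] > 0]
    print(f"{len(sft_data)} of {len(interactions)} interactions kept for fine-tuning")
    # fine_tune(model, sft_data)   # e.g. a LoRA pass like the ones discussed above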
It seems to me that "forgetting correctly" is rapidly becoming a more pertinent problem in this field than "learning correctly." We're making great strides in getting models to teach themselves new facts, but the state of the art in jettisoning the least relevant information given new knowledge and finite capacity is lagging far behind.
"Forgetting correctly" is something most human brains are exceptionally good at, too. I wonder how that works...
I don't think forgetting correctly is something humans are really good at. I'm not convinced human brains are "exceptionally good" at much of what we do, tbh. I think human memory capacity is so large that most forgetting is nowhere near "clearing space for new info"; it happens because the brain correctly recognizes that some past bad information interferes with learning new things.
Yeah, as far as I'm aware we have no real idea of the limits of human memory. Either way, it's amazing that the hippocampus can encode sequences of neurons firing somewhere and replay them later.
Eh, I'd disagree. First, the human brain is an evolutionary miracle when it comes to filtering. When you walk into a new room and are questioned about it later, you will most likely remember things like the door or where you set some object, but beyond that your brain filters the rest out and just makes up details as needed.
The other thing is that the brain devalues and prunes paths we don't use and strengthens the ones we do. This is why something you've not done in a while might need a refresher before you can do it right again.
As far as I know, we have made very little progress on identifying which weights in an ANN are responsible, and to what degree, for which outputs, and as such we cannot discard information that a user marks as wrong, inaccurate, or undesirable. The human mind, however, can do this easily: we remember (though not perfectly) that something is wrong, not useful, or irrelevant, we stop doing it, and over time we might even forget that now less-traveled path. An ANN has no obvious mechanism for that, at least.
Learning is strongly related to spaced repetition.
This is often associated with learning tools like Anki and such, but the real world is all about encountering things at certain frequencies (day/night cycles, seasons, places you visit, people you see... everything, really).
I'm wondering if there is maybe some sort of inverse to SR?
Is it some form of least-recently-used approach? I'm running tests on my own mind trying to figure it out now :D That's part of what I love about this area of computer science.
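Pure speculation, but riffing on the LRU idea: score each stored item by how recently and how often it has been encountered, and forget the lowest-scoring items first, which is roughly spaced repetition run in reverse:

    # Hypothetical retention score: frequently and recently encountered items decay
    # slowly; stale, rarely seen items become the first candidates to forget.
    import math, time

    def retention_score(last_seen_ts, times_seen, half_life_days=30.0):
        age_days = (time.time() - last_seen_ts) / 86400
        return times_seen * math.exp(-age_days / half_life_days)

    now = time.time()
    memories = [("fact A", now - 2 * 86400, 7), ("fact B", now - 90 * 86400, 1)]
    memories.sort(key=lambda m: retention_score(m[1], m[2]))
    print("forget first:", memories[0][0])    # "fact B": old and rarely seen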
They don't just "forget"; that information can come back at a later time if you continue to train.
So basically, any time a model is trained you need to check its entire memory, not just a small part.