They claim order-of-magnitude (OOM) faster learning and robustness across domains. There's enough detail that you could probably do your own PyTorch implementation, though they haven't released code. The paper has been accepted to AMLDS2025, so it's peer reviewed.
At first blush, this sounds really exciting, and if the results hold up and are replicated, it could be huge.
It's a helpful analogy for understanding the contrast between today's gradient descent and open-ended exploration.
[1] First half of https://www.youtube.com/watch?v=T08wc4xD3KA
More notes from my deep dive: https://x.com/jinaycodes/status/1932078206166749392
One thought... in the video, Ken observes that it takes far more complexity and steps to find a given shape with SGD than with open-endedness, which is certainly fascinating. However...
Intuitively, this feels like the same dynamic as the "birthday paradox": in a room of just 23 people, there is a greater than 50% chance that two of them share a birthday. This surprises most people; it seems like you should need far more people (on the order of 365). The paradox is resolved when you realize your intuition is answering a different question: how many people it takes before someone shares *your* birthday. But a room of 23 people is implicitly asking for a match between any two people. So you don't have 23 chances, you have 23 choose 2 = 253 chances.
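A quick back-of-the-envelope sketch in Python (nothing from the paper, just the standard birthday-paradox arithmetic) shows both the pair count and the >50% probability:

    from math import comb

    n_people = 23
    n_days = 365

    # Number of distinct pairs: C(23, 2) = 253, not 23^2.
    n_pairs = comb(n_people, 2)

    # Exact probability that at least two of the 23 share a birthday:
    # 1 - (365/365) * (364/365) * ... * (343/365)
    p_no_match = 1.0
    for i in range(n_people):
        p_no_match *= (n_days - i) / n_days
    p_match = 1 - p_no_match

    print(n_pairs)             # 253
    print(round(p_match, 3))   # ~0.507, just over 50%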
I think the same thing is at work here. With the open-ended approach, humans can find any pattern at any generation. With the SGD approach, you can only look for one pattern. So it's not an apples-to-apples comparison, and it's somewhat misleading/unfair to say that open-endedness is far more "efficient", because you aren't asking it to do the same task.
Said another way, with open-endedness you are effectively looking for thousands (or even millions) of shapes simultaneously. With SGD, you flip that around: you look for exactly one shape, but give it thousands of generations to find it.