Posted by u/mikeknoop a year ago
ARC Prize – a $1M+ competition towards open AGI progress (arcprize.org/blog/launch...)
Hey folks! Mike here. Francois Chollet and I are launching ARC Prize, a public competition to beat and open-source the solution to the ARC-AGI eval.

ARC-AGI is (to our knowledge) the only eval which measures AGI: a system that can efficiently acquire new skills and solve novel, open-ended problems. Most AI evals measure skill directly rather than the acquisition of new skills.

Francois created the eval in 2019. SOTA was 20% at inception and is only 34% today, while humans score 85-100%. 300 teams attempted ARC-AGI last year, and several bigger labs have attempted it.

While most other skill-based evals have rapidly saturated to human level, ARC-AGI was designed to resist “memorization” techniques (e.g. LLMs).

Solving ARC-AGI tasks is quite easy for humans (even children) but impossible for modern AI. You can try ARC-AGI tasks yourself here: https://arcprize.org/play

ARC-AGI consists of 400 public training tasks, 400 public test tasks, and 100 secret test tasks. Every task is novel. SOTA is measured against the secret test set which adds to the robustness of the eval.

Solving ARC-AGI tasks requires no world knowledge and no understanding of language. Instead, each puzzle requires a small set of “core knowledge priors” (goal directedness, objectness, symmetry, rotation, etc.)
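
If you want to poke at the data programmatically: each public task is a small JSON file with "train" and "test" lists of input/output grids of color indices 0-9. A minimal sketch of loading one (the file path below is illustrative):

    # Minimal sketch of the ARC-AGI task format: "train" and "test" are lists
    # of {"input": grid, "output": grid}, where a grid is a list of rows of
    # color indices 0-9. The file path is illustrative.
    import json

    with open("data/training/0520fde7.json") as f:
        task = json.load(f)

    for pair in task["train"]:
        print("input:", pair["input"])
        print("output:", pair["output"])

    # Goal: infer the rule from task["train"], then produce the output
    # grid(s) for the inputs in task["test"].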

At minimum, a solution to ARC-AGI opens up a completely new programming paradigm where programs can perfectly and reliably generalize from an arbitrary set of priors. At maximum, it unlocks the tech tree towards AGI.

Our goals with this competition are:

1. Increase the number of researchers working on frontier AGI research (vs tinkering with LLMs). We need new ideas, and the solution is likely to come from an outsider!

2. Establish a popular, objective measure of AGI progress that the public can use to understand how close we are to AGI (or not). Every new SOTA score will be published here: https://x.com/arcprize

3. Beat ARC-AGI and learn something new about the nature of intelligence.

Happy to answer questions!

neoneye2 · a year ago
I'm Simon Strandgaard and I participated in ARCathon 2022 (solved 3 tasks) and ARCathon 2023 (solved 8 tasks).

I'm collecting data on how humans solve ARC tasks, and have so far collected 4100 interaction histories (https://github.com/neoneye/ARC-Interactive-History-Dataset). Besides ARC-AGI, there are other ARC-like datasets; these can be tried in my editor (https://neoneye.github.io/arc/).

I have made some videos about ARC:

Replaying the interaction histories, you can see people have different approaches. It's 100ms per interaction; IRL people don't solve tasks that fast. https://www.youtube.com/watch?v=vQt7UZsYooQ

When I'm manually solving an ARC task, it looks like this, and you can see I'm rather slow. https://www.youtube.com/watch?v=PRdFLRpC6dk

What is weird: the way I implement a solver for a specific ARC task is much different from the way I would manually solve the puzzle, since the solver has to deal with all kinds of edge cases.

Huge thanks to the team behind the ARC Prize. Well done.

parentheses · a year ago
The UX of your solution entry is _way_ better than the ARC site itself.
mkl · a year ago
Being able to hold the mouse button down is certainly much nicer. Not being able to see the examples while you are solving makes it harder than it should be though.
neoneye2 · a year ago
That warms my heart. Thank you.

The short story: I needed something that could render thumbnails of tasks, so I could visually debug what was going on in my solver. (I have never gotten around to making the visual inspection tool.) After I had the thumbnail renderer, around mid-January 2024, it eventually turned into what it is now.

ECCME · a year ago
"Here is a challenge, designed to be unsolvable or so. We'll give you a bazillion dollars if you complete the challenge, and, in the meantime, we will use your attempts to train an as AI that will be worth the cost!!"
gota · a year ago
In the most charitable interpretation of this comment: I can understand the feeling, when so many social media interactions take the form 'post a picture of you as a baby, at 10 years old, and at your current age!'. Those and many other instances can bring out excessive skepticism.

But the people involved in this haven't signaled that they are on that path, either in the message about the challenge (precisely the opposite) or, seemingly, in their careers so far.

So I guess I don't share the concern, but a better way to phrase your comment could be:

"how can we be sure the human-provided solutions won't turn out to be just fodder for training a RL model or something that will later be monetized, closed and proprietary? Do the challenge organizers provide any guarantees on that?"

geor9e · a year ago
No, you missed the point. The striking thing about ARC is that the puzzles are super easy for humans. The average person solves 85% of the tasks, but the world's best LLMs are only solving 5%. The challenge is simply to make an AI score as well as the average human.
skrebbel · a year ago
Did you even try the puzzles? They’re not particularly “unsolvable”.
salamo · a year ago
This is super cool. I share Francois' intuition that the presently data-hungry learning paradigm is not only not generalizable but also unsustainable: humans do not need 10,000 examples to tell the difference between cats and dogs, and the main reason computers can do so today is that we have millions of examples. As a result, it may be hard to transfer knowledge to more esoteric domains where data is expensive, rare, and hard to synthesize.

If I can make one criticism/observation of the tests, it seems that most of them reason about perfect information in a game-theoretic sense. However, many if not most of the more challenging problems we encounter involve hidden information. Poker and negotiations are examples of problem solving in imperfect information scenarios. Smoothly navigating social situations also requires a related problem of working with hidden information.

One of the really interesting things we humans are able to do is to take the rules of a game and generate strategies. While we do have some algorithms which can "teach themselves" e.g. to play go or chess, those same self-play algorithms don't work on hidden information games. One of the really interesting capabilities of any generally-intelligent system would be synthesizing a general problem solver for those kinds of situations as well.

com2kid · a year ago
> humans do not need 10,000 examples to tell the difference between cats and dogs,

I swear, not enough people have kids.

Now, is it 10k examples? No, but I think it was on the order of hundreds, if not thousands.

One thing kids do is they'll ask for confirmation of their guess. You'll be reading a book you've read 50 times before and the kid will stop you, point at a dog in the book, and ask "dog?"

And there is a development phase where this happens a lot.

Also kids can get mad if they are told an object doesn't match up to the expected label, e.g. my son gets really mad if someone calls something by the wrong color.

Another thing toddlers like to do is play silly labeling games, which is different from calling something the wrong name by accident; instead this is done on purpose, for fun. E.g. you point to a fish and say "isn't that a lovely llama!" at which point the kid will fall down giggling at how silly you are being.

The human brain develops really slowly[1], and a sense of linear time encoding doesn't really exist for quite a while. (Even at 3, everything is either yesterday, today, or tomorrow.) So who the hell knows how things are being processed, but what we do know is that kids gather information through a bunch of senses that are operating at an absurd data collection rate 12-14 hours a day, with another 10-12 hours of downtime to process the information.

[1] Watch a baby discover they have a right foot, then a few days later figure out they also have a left foot. Watch kids who are learning to stand develop a sense of "up above me" after they bonk their heads a few times on a table bottom. Kids only learn "fast" in the sense that they have nothing else to do for years on end.

PheonixPharts · a year ago
> Now, is it 10k examples? No, but I think it was on the order of hundreds, if not thousands.

I have kids so I'm presuming I'm allowed to have an opinion here.

This is ignoring the fact that babies are not just learning labels, they're learning the whole of language, motion planning, sensory processing, etc.

Once they have the basics down concept acquisition time shrinks rapidly and kids can easily learn their new favorite animal in as little as a single example.

Compare this to LLMs which can one-shot certain tasks, but only if they have essentially already memorized enough information to know about that task. It gives the illusion that these models are learning like children do, when in reality they are not even entirely capable of learning novel concepts.

Beyond just learning a new animal, humans are able to learn entirely new systems of reasoning in surprisingly few examples (though it does take quite a bit of time to process them). How many homework questions did your entire calc 1 class have? I'm guessing less than 100 and (hopefully) you successfully learned differential calculus.

9cb14c1ec0 · a year ago
> not enough people have kids.

Second that. I think I've learned as much as my children have.

> Watch a baby discover they have a right foot. Then a few days later figure out they also have a left foot.

Watching a baby's awareness grow from pretty much nothing to a fully developed ability to understand the world around them is one of the most fascinating parts of being a parent.

smusamashah · a year ago
My kid is about 3 and has been slow on language development. He can barely speak a few short sentences now. Learning names of things and concepts made a big difference for him and that's a fascinating watch and realization.

This reminds me of the story of Adam learning names, or how some languages can express a lot more in fewer words. And it makes sense that LLMs look intelligent to us.

My kid loves repeating the names of things he learned recently. For the past few weeks, after learning 'spider' and 'snake' and 'dangerous', he keeps finding spiders around; there are no snakes, so he makes up snakes from curly drawn lines and tells us they are dangerous.

I think we learn fast because of stereo (3D) vision. I have no idea how these models learn, and I don't know if 3D vision will make multimodal LLMs better and require exponentially fewer examples.

Nition · a year ago
> the kid will stop you, point at a dog in the book, and ask "dog?"

Of course for a human this can either mean "I have an idea about what a dog is, but I'm not sure whether this is one" or it can mean "Hey this is a... one of those, what's the word for it again?"

llm_trw · a year ago
Babies, unlike machine learning models, aren't placed in limbo when they aren't running back propagation.

Babies need few examples for complex tasks because they get constant infinitely complex examples on tasks which are used for transfer learning.

Current models take a nuclear reactor's worth of power to run backprop, on top of a small country's GDP worth of hardware.

They are _not_ going to generalize to AGI because we can't afford to run them.

1024core · a year ago
> I swear, not enough people have kids.

My friend's toddler, who grew up with a cat in the house, would initially call all dogs "cat". :-D

resource0x · a year ago
I haven't seen 1000 cats in my entire life. I'm sure I learned how to tell a dog from a cat after being exposed to just a single instance of each.
cess11 · a year ago
I have a small kid. When they first saw some jackdaws, the first birds they noticed could fly, they thought it was terribly exciting and immediately learned the word for them, and generalised it to geese, crows, gulls and magpies (plus some less common species whose English names I don't know), pointing at them and screaming the equivalent of 'jackda! jackda!'.

PontifexMinimus · a year ago
> Now, is it 10k examples? No, but I think it was on the order of hundreds, if not thousands.

If I was presented with 10 pictures of 2 species I'm unfamiliar with, about as different as cats and dogs, I expect I would be able to classify further images as either, reasonably accurately.

ein0p · a year ago
Not to mention that babies receive petabytes of visual input to go with other stimuli. It’s up for debate how sample efficient humans actually are in the first few years of their lives.
AuryGlenz · a year ago
That's all true, yet my 2.5-year-old sometimes one-shots specific information. When we heard woodpeckers this spring, after she did what you said and asked "what's that noise?" for the fifth time in a few minutes, I told her that woodpeckers eat bugs out of trees. She brought it up again at least a week later, randomly. Developing brains are amazing.

She also saw an eagle this spring out the car window and said “an eagle! …no, it’s a bird,” so I guess she’s still working on those image classifications ;)

bamboozled · a year ago
I think your comment over-intellectualises the way children experience the world.

My child experiences the world in a really pure way. They don't care much about labels or colours or any other human inventions like that. He picks up his carrot; he doesn't care about the name or the color. He just enjoys purely experiencing eating it. He can also find incredible flow-state-like joy in playing with river stones or looking at the moon.

I personally feel bad that I have to teach them to label things and put things in boxes. I think your child is frustrated at times because it's a punish of a game: the departure from "the oceanic feeling".

Your comment would make sense to me if the end game of our brains and human experience is labelling things. It’s not. It’s useful but it’s not what living is about.

theptip · a year ago
> humans do not need 10,000 examples to tell the difference between cats and dogs

The optimization process that trained the human brain is called evolution, and it took a lot more than 10,000 examples to produce a system that can differentiate cats vs dogs.

Put differently, an LLM is pre-trained with very light priors, starting almost from scratch, whereas a human brain is pre-loaded with extremely strong priors.

PaulDavisThe1st · a year ago
> The optimization process that trained the human brain is called evolution, and it took a lot more than 10,000 examples to produce a system that can differentiate cats vs dogs.

Asserted without evidence. We have essentially no idea at what point living systems were capable of differentiating cats from dogs (we don't even know for sure which living systems can do this).

llm_trw · a year ago
>The optimization process that trained the human brain is called evolution

A human brain that doesn't get visual stimulus at the critical age between 0 and 3 years old will never be able to tell the difference between a cat and a dog because it will be forevermore blind.

pants2 · a year ago
Humans, I would bet, could distinguish between two animals they've never seen based only on a loose or tangential description. I.e. "A dog hunts animals by tracking and chasing them long enough to exhaust their energy, but a cat is opportunistic and strikes using stealth and agility."

A human that has never seen a dog or a cat could probably determine which is which by looking at the two animals and their adaptations. This would be an interesting test for AIs, but I'm not quite sure how one would formulate an eval for this.

taneq · a year ago
Only after being exposed to (at least pictures and descriptions of) dozens if not hundreds of different types of animal and their different attributes. Literal decades of training time and carefully curated curriculum learning are required for a human to perform at what we consider ‘human level’.
ryankrage77 · a year ago
A possible way to test this idea would be to draw two aliens with different hunting strategies and run a poll on which is which. I'd try it, but my drawing skills are terrible and I'm averse to using generated images.
tigerlily · a year ago
Seems analogous to the bouba/kiki effect:

https://en.m.wikipedia.org/wiki/Bouba/kiki_effect

jules · a year ago
Do computers need 10,000 examples to distinguish dogs from cats when pretrained on other tasks?
curious_cat_163 · a year ago
No.
VirusNewbie · a year ago
> humans do not need 10,000 examples to tell the difference between cats and dogs

Well, maybe. We view things in three dimensions at high fidelity: viewing a single dog or cat actually ends up being thousands of training samples, no?

amelius · a year ago
Yes, but we do not call a couch in a leopard print a leopard. Because we understand that the print is secondary to the function.
bbor · a year ago
Eh, still doesn't hold up. I really don't think there are many psychologists working on the posited mechanism of simple NN-like backprop learning, aka conditioning, I guess. As Chomsky reminds us every time we let him: human children learn to understand and use language, an incredibly complex and nuanced domain to say the least, with shockingly little data and often zero-to-no intentional instruction. We definitely employ principles and patterns that are far more complex (more "emergent"?) than linear regression.

Tho I only ever did undergrad stats, maybe ML isn’t even technically a linear regression at this point. Still, hopefully my gist is clear

AIorNot · a year ago
There's a great episode of Darkwish Patel's podcast discussing this today

https://youtu.be/UakqL6Pj9xo?si=iDH6iSNyz1Net8j7

nphard85 · a year ago
Dwarkesh*
goertzen · a year ago
I don't know enough about biology or genetics or evolution, but surely the millions of years of training that are hardcoded into our genes and expressed in our biology amounted to much larger "training" runs.
allanrbo · a year ago
If a human eye works at, say, 10 fps, then 17 minutes with a cat is about 10k images :-D
captaincaveman · a year ago
I'd say that was more like a single instance, one interaction with a thing.
fennecbutt · a year ago
Humans don't need those examples because our brains are very pretrained. Natural fear of snakes and snakelike things, etc etc.

ML models are starting from absolute zero, single celled organism level.

woadwarrior01 · a year ago
> humans do not need 10,000 examples to tell the difference between cats and dogs

Neither do machines. Look up few-shot learning with things like CLIP.
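
E.g. a zero-shot sketch along the lines of the openai/clip README — no cat/dog training examples at all (the image path is illustrative):

    # Zero-shot cat-vs-dog with CLIP: no task-specific training examples.
    # Assumes the openai/clip package and torch; "pet.jpg" is illustrative.
    import torch
    import clip
    from PIL import Image

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("ViT-B/32", device=device)

    image = preprocess(Image.open("pet.jpg")).unsqueeze(0).to(device)
    text = clip.tokenize(["a photo of a cat", "a photo of a dog"]).to(device)

    with torch.no_grad():
        logits_per_image, _ = model(image, text)   # image-text similarity logits
        probs = logits_per_image.softmax(dim=-1)

    print(dict(zip(["cat", "dog"], probs[0].tolist())))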

nextaccountic · a year ago
> humans do not need 10,000 examples to tell the difference between cats and dogs

Humans learn through a lifetime.

Or are we talking about newborn infants?

lacker · a year ago
I really like the idea of ARC. But to me the problems seem like they require a lot of spatial world knowledge, more than they require abstract reasoning. Shapes overlapping each other, containing each other, slicing up and reassembling pieces, denoising regular geometric shapes, you can call them "core knowledge" but to me it seems like they are more like "things that are intuitive to human visual processing".

Would an intelligent but blind human be able to solve these problems?

I'm worried that we will need more than 800 examples to solve these problems, not because the abstract reasoning is so difficult, but because the problems require spatial knowledge that we intelligent humans learn with far more than 800 training examples.

modeless · a year ago
> to me it seems like they are more like "things that are intuitive to human visual processing".

Yann LeCun argues that humans are not general intelligence and that such a thing doesn't really exist. Intelligence can only be measured in specific domains. To the extent that this test represents a domain where humans greatly outperform AI, it's a useful test. We need more tests like that, because AIs are acing all of our regular tests despite being obviously less capable than humans in many domains.

> the problems require spatial knowledge that we intelligent humans learn with far more than 800 training examples.

Pretraining on unlimited amounts of data is fair game. Generalizing from readily available data to the test tasks is exactly what humans are doing.

> Would an intelligent but blind human be able to solve these problems?

I'm confident that they would, given a translation of the colors to tactile sensation. Blind humans still understand spatial relationships.

HarHarVeryFunny · a year ago
I just did the first 5 of the "public eval set" without having looked at the "public training set", and found them easy enough. If we're defining AGI as at least human level, then the AGI should also be able to do these without seeing any more examples.

I don't think there are any rules about what knowledge/experience you build into your solution.

mewpmewp2 · a year ago
AGI should obviously be able to do them. But an AI being able to do those 100 percent wouldn't be evidence of AGI; it is a very narrow domain.
nickpsecurity · a year ago
To parent: the spatial reasoning and blind person points were great counterexamples. It still might be OK, despite the blindness exception, if it showed general reasoning.

To OP: I like your project goal. I think you should look at prior reasoning engines that tried to build common sense; Cyc and OpenMind are examples. You also might find use for the list of AGI goals in Section 2 of this paper:

https://arxiv.org/pdf/2308.04445

When studying intros to brain function, I also noted that many regions tie into the hippocampus, which might do both sense-neutral storage of concepts and inner models (or approximations) of the external world. The former helps tie concepts together across various senses. The latter helps in planning, when we are imagining possibilities to evaluate and iterate on.

Seems like AGI should have these hippocampus-like traits and those in the Cyc paper. One could test whether an architecture could do such things in theory or on a small scale. It shouldn't tie into just one type of sensory input either: at least two, with the ability to act on what exists in only one of them or in both.

Edit: Children also have an enormous amount of unsupervised training on visual and spatial data. They get reinforcement through play and supervised training by parents. A realistic benchmark might similarly require GBs of pretraining.

HarHarVeryFunny · a year ago
CYC was an expert system, which is arguably what LLMs are.

A similar vintage GOFAI project that might do better on these, with a suitable visual front end, is SOAR - a general purpose problem solver.

andoando · a year ago
I would argue that spatial reasoning encompasses all reasoning. All the things you mentioned have direct analogues to the abstract models and logic we employ, and are ingrained deeply in language. For example, shapes containing each other:

There are two countries which both lay claim to the same territory. There is a set X that contains Y, and there is a set Z that contains Y. In the case where the common overlap is 3D and one is on top of the other, we can extend this: there is a set X that contains -Y and a set Z that contains Y, and just as you can only see one on top and not both depending on where you stand, we can apply the same property here and say sets X and Z cannot both hold, and therefore if set X then -Y, and if set Z then Y.

If you pay attention to the language you use, you'll start to realize how much of it uses spatial relationships to describe completely abstract things. For example, one can speak of disintegrating hegemonic economies, i.e. turning things built on top of each other into nothing, back to where they came from.

We are after all, reasoning about things which happen in time and space.

And spatial != visual. Even if you were blind you'd have to reason spatially, because again, any set of facts are facts in space-time. What does it take to understand history? People in space, living at various distances from each other, producing goods in various locations on the earth using physical processes, and physically exchanging them. To understand battles you have to understand how armies are arranged physically, how moving supplies works, weather conditions, how weapons and their physical forms affect what they can physically do, etc.

Hell, LLMs, the largest advancement we have had in artificial intelligence, do what exactly? Encode tokens into multi-dimensional space.

parentheses · a year ago
Spatial reasoning is easily isomorphic to many kinds of reasoning - just not all of them. Spatial reasoning in this case also limits the AI to 2 dimensions. I concede that with more dimensions, there will be more isomorphisms.

Is there a number of dimensions that captures all reasoning? I don't know..

CooCooCaCha · a year ago
“Would an intelligent but blind human be able to solve these problems?”

This is the wrong way to think about it IMO. Spatial relationships are just another type of logical relationship and we should expect AGI to be able to analyze relationships and generate algorithms on the fly to solve problems.

Just because humans can be biased in various ways doesn’t mean these biases are inherent to all intelligences.

crazygringo · a year ago
> Spatial relationships are just another type of logical relationship and we should expect AGI to be able to analyze relationships and generate algorithms on the fly to solve problems.

Not really. By that reasoning, 5-dimensional spatial reasoning is "just another type of logical relationship" and yet humans mostly can't do that at all.

It's clear that we have incredibly specialized capabilities for dealing with two- and three-dimensional spatiality that don't have much of anything to do with general logical intelligence at all.

janalsncm · a year ago
Part of the concern might be that visual reasoning problems are overrepresented in ARC in the space of all abstract reasoning problems.

It’s similar to how chess problems are technically reasoning problems but they are not representative of general reasoning.

dimask · a year ago
> Would an intelligent but blind human be able to solve these problems?

Blind people can have spatial reasoning just fine. Visual =/= spatial [0]. Now, one would have to adapt the colour-based tasks to something that would be more meaningful for a blind person, I guess.

[0] https://hal.science/hal-03373840/document

Lerc · a year ago
I don't think the intent is to learn the entire problem domain from the examples, but the specific rule that is being applied.

There may be (almost certainly will be) additional knowledge encoded in the solver to cover the spatial concepts etc. The distinction with the ARC-AGI test is the disparity between human and AI performance, and that it focuses on puzzles that are easier for humans.

It would be interesting to see a finetuned LLM just try to express the rule for each puzzle in English. It could have full knowledge of what ARC-AGI is and how the tests operate, but the proof of the pudding is simply how it does on the test set.
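
Roughly something like this (a sketch only; the model name, prompt, and file path are placeholders, not anything from ARC Prize):

    # Ask an LLM to state a task's hidden rule in plain English.
    # Uses the official openai package; model name and paths are placeholders.
    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def describe_rule(task_path):
        with open(task_path) as f:
            task = json.load(f)
        # Show the model every training pair, ask for the rule in one sentence.
        examples = "\n\n".join(
            f"Input: {pair['input']}\nOutput: {pair['output']}"
            for pair in task["train"]
        )
        prompt = (
            "Each example applies one hidden transformation rule to a grid "
            "of digits 0-9. State the rule in one English sentence.\n\n"
            + examples
        )
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    print(describe_rule("data/evaluation/009d5c81.json"))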

lynx23 · a year ago
Whether a blind individual can solve a visually oriented challenge is not really a question of their intelligence but more a question of accessibility/translation. Just because I can't see something myself doesn't really say anything about my ability to deal with abstractions.
pmayrgundter · a year ago
This claim that these tests are easy for humans seems dubious, so I went looking a bit. Melanie Mitchell chimed in on Chollet's thread and posted their related test [ConceptARC].

In it they question the ease of Chollet's tests: "One limitation on ARC’s usefulness for AI research is that it might be too challenging. Many of the tasks in Chollet’s corpus are difficult even for humans, and the corpus as a whole might be sufficiently difficult for machines that it does not reveal real progress on machine acquisition of core knowledge."

ConceptARC is designed to be easier, but then also has to filter ~15% of its own test takers for "[failing] at solving two or more minimal tasks... or they provided empty or nonsensical explanations for their solutions"

After this filtering, ConceptARC finds another 10-15% failure rate amongst humans on the main corpus questions, so they're seeing maybe 25-30% unable to solve these simpler questions meant to test for "AGI".

ConceptARC's main results show GPT-4 scoring well below the filtered humans, which would agree with a [Mensa] test result that its IQ=85.

Chollet and Mitchell could instead stratify their human groups to estimate IQ, then compare with the Mensa measures and see whether, e.g., Claude3@IQ=100 matches their ARC scores for their average human.

[ConceptARC] https://arxiv.org/pdf/2305.07141
[Mensa] https://www.maximumtruth.org/p/ais-ranked-by-iq-ai-passes-10...

mikeknoop · a year ago
Here is some published research on the human difficulty of ARC-AGI: https://cims.nyu.edu/~brenden/papers/JohnsonEtAl2021CogSci.p...

> We found that humans were able to infer the underlying program and generate the correct test output for a novel test input example, with an average of 84% of tasks solved per participant

kenjackson · a year ago
I just tried the first puzzle and I can't get it right. I think my solution makes logical sense, and I can explain why the patterns are consistent with the input, but it says it's wrong. I'm either a lot dumber than I thought or they need to do a better job of vetting their tests.
mikeknoop · a year ago
(You can direct link to a task like this: https://arcprize.org/play?task=009d5c81 in case you want to share!)
saati · a year ago
It's pretty easy: just follow the second example with the colors from the test input (if it's the same puzzle, 00576224, for you too).
salamo · a year ago
They claim that the average score for humans is between 85% and 100%, so I think there's a disagreement on whether the test is actually too hard. Taking them at their word, if no existing model can score even half what the average human can, the test is certainly measuring some kind of significant difference.

I guess there might be disagreement about whether the problems in ARC are a representative sample of all the possible abstract programs which could be synthesized, but then again most LLMs are also trained on human data.

gkbrk · a year ago
The tasks are very easy for humans. Out of the 6 tasks assigned when I opened the web page, I got all of them correct on the first try.

Maybe if you run into some exceptionally difficult tasks it might not be 100%, but there's no way the challenge can be called unfair because it's too difficult for humans too.

mark_l_watson · a year ago
I saw Melanie's post and I am intrigued by an easier AGI suite. I would like some experimenting done by individuals like myself and smaller organizations.
bbor · a year ago
Are you working on (a book detailing) AGI also? It's a lonely field, but I have no doubt there is a sea of malcontent engineers across the world who saw the truth early on and are pushing solo for AGI. It's going well for me, but I'm not sure whether to take that as "you're great" or "it's really that easy", so I was interested to see such a fellow brazen American on HN of all places.

Game on for the million, if so :). If not, apologies for distracting from the good fight for OSS/noncorp devs!

E: it occurred to me on the drive home how easily we (engineers) can fall into competitiveness, even when we’ve all read the thinkpieces about why an AI Race would/will be/is incredibly dangerous. Maybe not “game on”, perhaps… “god I hope it’s impossible but best of luck anyway to both of us”?

neoneye2 · a year ago
Melanie is coauthor/supervisor of ConceptARC, that can be tried here: https://neoneye.github.io/arc/?dataset=ConceptARC
PaulDavisThe1st · a year ago
You actually think that has not been going on for 30, 40 or 50 years?
paxys · a year ago
While I agree with the spirit of the competition, a $1M prize seems a little too low considering tens of billions of dollars have already been invested in the race to AGI, and we will see many times that put into the space in the coming years. The impact of AGI will be measured in trillions at minimum. So what you are ultimately rewarding isn't AGI research but fine-tuning the newest public LLM release to best meet the parameters of the test.

I'd also urge you to use a different platform for communicating with the public because x.com links are now inaccessible without creating an account.

mikeknoop · a year ago
I agree, $1M is ~trivial in AI. The primary goal of the prize is to raise public awareness about how close (or far) we are from AGI today: https://arcprize.org/leaderboard and we hope that understanding will shift more would-be AI researchers to working on new ideas.
bongodongobob · a year ago
That was my initial reaction too.

"Endow circuitry with consciousness and win a gift certificate for Denny's (may not be used in conjunction with other specials)"

hackerlight · a year ago
The $1M ARC prize is advertising, just like being #1 on the huggingface leaderboard. It won't matter for end consumers, but for attracting the best talent it could be valuable.
cma · a year ago
They thought of that, and so there are also $100,000 in yearly prizes for the best results, so things can build up towards someone winning the $1 million over time; the yearly prizes require you to publish the techniques.
elicksaur · a year ago
The leaderboard is on the website. What medium should they use? https://arcprize.org/leaderboard
ks2048 · a year ago
The submissions can't use the internet, and I imagine they can't be too huge, so you can't use the "newest public LLMs" on this task.
mikeknoop · a year ago
That is correct for ARC Prize: limited Kaggle compute (to target efficiency) and no internet (to reduce cheating).

We are also trialing a secondary leaderboard called ARC-AGI-Pub that imposes no limits or constraints. Not part of the prize today but could be in the future: https://arcprize.org/leaderboard

cma · a year ago
Using the internet would leak the test data, a big problem with ML benchmarks, and also allow communication with humans during the test.
lxgr · a year ago
Yeah, I also immediately had Dr. Evil narrating the prize money amount in my head once I saw it.

AGI will take much more than that to build, and once you have it, if all you can monetize it for is a million dollars, you must be doing something extremely wrong.

btbuildem · a year ago
Yeah, in 2006 Netflix offered $1M in a similar scheme. At least back then that sum meant something.
elicksaur · a year ago
I’m a big fan of the ARC as a problem set to tackle. The sparseness of the data and infinite-ness of the rules which could apply make it much tougher than existing ML problem sets.

However, I do disagree that this problem represents “AGI”. It’s just a different dataset than what we’ve seen with existing ML successes, but the approaches are generally similar to what’s come before. It could be that some truly novel breakthrough which is AGI solves the problem set, but I don’t think solving the problem set is a guaranteed indicator of AGI.

nadam · a year ago
I love this, this is super interesting, but my intuition, based on looking at a dozen examples, is that the problem is hard, but easy enough that if it becomes popular, near-human-level results will appear in a year or less, and AGI will not be reached. The problem seems to be finding a generic enough transformation description language with the appropriate operators, and then heuristics to find a very short program (in the information-theoretic sense) in this language that reproduces all the examples for a problem (see the sketch below). I would be very surprised if the 34% result were not improved significantly soon, and I would be surprised if this could be transferred to general intelligence, at least when I think of the topics where I use AI today and where it falls short. Basically my intuition is that this will be yet another 'Chess'- or 'Go'-like problem in AI. But still a worthwhile research topic, absolutely: the value that could come out of this is well worth the 1M dollars.
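
A minimal sketch of what I mean, with toy primitives (not taken from any real solver): enumerate compositions of operators shortest-first and keep the first program consistent with every training pair.

    # Shortest-program search over a toy transformation DSL.
    from itertools import product

    def rot90(g):   return [list(r) for r in zip(*g[::-1])]   # rotate clockwise
    def flip_h(g):  return [row[::-1] for row in g]           # mirror left-right
    def recolor(g): return [[c + 1 if c else 0 for c in row] for row in g]

    PRIMITIVES = {"rot90": rot90, "flip_h": flip_h, "recolor": recolor}

    def run(program, grid):
        for name in program:
            grid = PRIMITIVES[name](grid)
        return grid

    def shortest_fit(train_pairs, max_len=4):
        # Enumerate programs shortest-first (the information-theoretic bias).
        for length in range(1, max_len + 1):
            for program in product(PRIMITIVES, repeat=length):
                if all(run(program, i) == o for i, o in train_pairs):
                    return program
        return None

    # A task whose hidden rule is "rotate 90 degrees clockwise":
    train = [([[1, 0], [0, 0]], [[0, 1], [0, 0]]),
             ([[0, 2], [0, 0]], [[0, 0], [0, 2]])]
    print(shortest_fit(train))  # -> ('rot90',)

The hard part, of course, is choosing a primitive set rich enough to cover the hidden rules while keeping the search tractable.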
zug_zug · a year ago
I have the exact same impression.

Imo there's no evidence whatsoever that nailing this task will mean true AGI (e.g. being able to write novel math proofs, ask insightful questions that nobody has thought of before, self-direct its own learning, read its own source code).

apendleton · a year ago
I'm not sure the goal of this competition, in and of itself, is AGI. They point to current LLMs emerging from transformers, which in turn emerged from a general basket of building blocks from machine-translation research (attention, etc.). It seems like the suggestion is that to get from where we are now to AGI, some fundamental building blocks are missing, and this is an attempt to spur the development of some of those building blocks, but by analogy with LLMs, the goal here is to come up with a new thing like "attention," not a new thing like GPT4.
Animats · a year ago
> the only eval which measures AGI.

That's a stretch. This is a problem at which LLMs are bad. That does not imply it's a good measure of artificial general intelligence.

After working a few of the problems, I was wondering how many different transformation rules the problem generator has. Not very many, it seems. So the problem breaks down into extracting the set of transformation rules from the data, then applying them to new problems. The first part of that is hard. It's a feature extraction problem. The transformations seem to be applied rigidly, so once you have the transformation rules, and have selected the ones that work for all the input cases, application should be straightforward.

This seems to need explicit feature extraction, rather than the combined feature extraction and exploitation LLMs use. Has anyone extracted the rule set from the test cases yet?

elicksaur · a year ago
Yes to your last question; that is essentially how the first-generation solutions operated. Some of the original Kaggle competition's best solutions used a DSL made of these transformations. That was 4 years ago. [1]

The issue with that path is that the problems aren’t using a programmatic generator. The rule sets are anything a person could come up with. It might be as simple as “biggest object turns blue” but they can be much more complicated.

Additionally, the test set is private so it can’t be trained on or extracted from. It has rules that aren’t in the public sets.

[1] https://www.kaggle.com/competitions/abstraction-and-reasonin...

n2d4 · a year ago
The tasks are handmade. There is no "problem generator".
slicerdicer1 · a year ago
AGI is not when the AI is good at some particular thing; AGI is when there is nothing left at which the AI is bad (compared to humans).