I don't think that is new. Back when Walmart tried to expand to Germany, it was reported that they had employees do some Walmart chant. As you can guess, this didn't go over well with German employees.
> At the core of most definitions you’ll find the idea of a machine that can match humans on a wide range of cognitive tasks.
I expect this definition will be proven incorrect eventually. It would be better described as "human-level AGI" rather than AGI. AGI is a system that matches a core set of properties, but it's not necessarily tied to capabilities. Theoretically, one could create a very small, resource-limited AGI. The amount of computational resources available to the AGI will probably be one of the factors that determines whether it's, e.g., cat level vs. human level.
"AGI is reached when it’s no longer easy to come up with problems that regular people can solve … and AIs can’t."

That’s like Peter Norvig’s definition of AGI [1], which is defined with respect to general-purpose digital computers. The "general" in general intelligence refers to the foundation model that can be repurposed to many different contexts. I like that definition because it is clear.
Currently, AGI is defined in a way where it is truly indistinguishable from superintelligence. I don’t find that helpful.

[1] https://www.noemamag.com/artificial-general-intelligence-is-...
I think "being able to do as well as a 50th percentile human who's had a little practice," on a wide range of tasks, is a pretty decent measure.
Yes, that's more versatile than most of us, because most of us are not at or above the median practiced person in a wide range of tasks. But it's not what I think of when I hear "superintelligence," because its performance on any given task is likely still inferior to the best humans.
That definition gives me a headache. If it's not up to the level of a human, then it's not "general". If you cut down the resources so much that it drops to cat level, then it's a cut-down model related to an AGI model, and no more.
What does this even mean? How can we say a definition of “AGI” is “correct” or “incorrect” when the only thing people can agree on is that we don’t have AGI yet?
Of course, AGI is just an abbreviation for artificial general intelligence, and stuff like GPT-5 is artificial, somewhat intelligent, and somewhat general.
I think the goalposts for "AGI" will keep moving so current AI doesn't match it.
I've thought of it as human level, but already people are saying beating the average human isn't enough; it has to beat the best and be Nobel Prize worthy.
Based on my personal experience, I feel like we've already had AGI for some time, just based on how centralized society has become. It feels like the system is not working for the vast majority of people, yet somehow it's still holding together in spite of enormous complexity... It FEELS like there is some advanced intelligence holding things together. Some aspects of the system's functioning seem too clever to be the result of human intelligence.
Also, in retrospect, something doesn't quite add up about the 'AI winter' narrative. It's hard to believe that so many people were studying and working on AI and it took so long given that ultimately, attention is all you need(ed).
I studied AI at university in Australia over a decade ago. The introductory course was great; we learned about decision trees, Bayesian probability, and machine learning, and we wrote our own ANNs from scratch. Then I took the advanced course, expecting to be blown away by the material, but the whole course was about mathematics, with no AI theory. Even back then there was a lot of advanced material they could have covered (e.g. evolutionary computation) but didn't... I dropped out after a week or two because of how boring it was.
In retrospect, I feel like the course was made boring and irrelevant on purpose. I remember someone in my circle even mentioning that the AI winter wasn't real... while we were supposedly in the middle of it.
Also, I remember thinking at the time that evolutionary computation combined with ANNs was going to be the future... so I was kind of surprised that evolutionary computation seemingly dropped out of view. In retrospect, though, I think progress in that area could potentially lead to unpredictable and dangerous outcomes, so it may not be discussed openly.
Now I think: take an evolutionary algorithm, combine it with modern neural nets with attention mechanisms, and you'd surely get some impressive results.
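For concreteness, here is a minimal sketch of that combination in Python, assuming nothing beyond numpy: a toy genetic algorithm that evolves the weights of a single-head attention layer instead of training them by gradient descent. The task, dimensions, population size, and mutation scale are all invented for illustration, not taken from any real system.

```python
import numpy as np

rng = np.random.default_rng(0)
D, T = 8, 5  # embedding dimension, sequence length (arbitrary toy sizes)

def attention(params, x):
    """Single-head self-attention: softmax(Q K^T / sqrt(D)) V."""
    Wq, Wk, Wv = params
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(D)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return (w / w.sum(axis=-1, keepdims=True)) @ v

def fitness(params, x, target):
    """Higher is better: negative mean squared error on the toy task."""
    return -np.mean((attention(params, x) - target) ** 2)

# Toy task: map a random sequence to its reverse.
x = rng.normal(size=(T, D))
target = x[::-1]

# Population of random (Wq, Wk, Wv) weight triples.
pop = [tuple(rng.normal(scale=0.5, size=(D, D)) for _ in range(3))
       for _ in range(50)]

# Evolution loop: truncation selection plus Gaussian mutation.
for generation in range(200):
    pop.sort(key=lambda p: fitness(p, x, target), reverse=True)
    parents = pop[:10]
    pop = parents + [tuple(w + rng.normal(scale=0.05, size=w.shape)
                           for w in parents[i % len(parents)])
                     for i in range(40)]

print("best fitness:", fitness(pop[0], x, target))
```

No guarantee this scales, and gradient descent won out for good reasons, but it shows how naturally the two pieces compose.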
I think the AI winter was over by 2007. There was a lot of hype about machine learning and big data. The Netflix Prize for building a recommender model launched in 2006. There was research on neural networks and deep belief networks, but they weren't as popular as they are today.
People are interested in consciousness much the same way that we see faces in the clouds. We just think we're going to find it everywhere: weather patterns, mountains, computers, robots, in outer space, etc.
If we were dogs, we'd invent a basic computer and start writing scifi films about whether the computers could secretly smell things. We'd ask "what does the sun smell like?"
Ray Kurzweil and his "Age of Spiritual Machines", which I read in 1999, are much more to blame than those like Goertzel who came after him, but Kurzweil doesn't get a mention. Kurzweil is also an MIT grad, closely associated with MIT and possibly the MIT Technology Review.
Yeah totally, not a single mention of Kurzweil in this article. I also read “Age of Spiritual Machines” in 1999 (in college), and skimmed most of his subsequent books.
Then Kurzweil became my manager’s peer at Google in 2014 or so (actually two managers). I remember he was mocked by a few coworkers (and maybe deservedly so, because they had some mildly funny stories).
So I have been wondering with all the AGI talk why Kurzweil isn’t talked about more. Was he vindicated in some sense?
I did get a partial answer: one reason is that doomer AGI prophecies are better marketing than Kurzweil’s brand of AGI, which is about merging with machines.

And of course, both kinds of AGI prophecies are good distractions from AI ethics, which is more likely to slow investment than to grow it.
No. He's still saying AGI will demand political rights in 2029. Like Geoffrey Hinton, Kurzweil gets a pass because he's brilliant and accomplished. But also like Hinton, he's wrong about this one issue. With Hinton it appears to be fear driving his fantasies. With Kurzweil it's probably over-confidence.
Very little I disagree with there, so just nibbling at the edges.
> a scheme that’s flexible enough to sustain belief even when things don’t work out as planned; the promise of a better future that can be realized only if believers uncover hidden truths; and a hope for salvation from the horrors of this world.
Sometimes 90% of the "hidden truths" are things already "known" by the believers, an elite knowledge that sets them apart from the sheeple. The remaining 10% is acquiring some McGuffin that finally proves they were Right-All-Along so that they can take a victory lap.
> Superintelligence is the hot new flavor—AGI but better!—introduced as talk of AGI becomes commonplace.
In turn, AGI was the hot new flavor—AI but better!—that companies pivoted to as consumers started getting disappointed and jaded with "AI" that wasn't going to give them robot butlers.
> When those people are not shilling for utopia, they’re saving us from hell.
Yeah, much like how hatred is not really the opposite of love, the "AI doom" folks are really just a side-sect of the "AI awesome" folks.
> But what if there are, in fact, shadowy puppet masters here—and they’re the very people who have pushed the AGI conspiracy hardest all along? The kings of Silicon Valley are throwing everything they can get at building AGI for profit. The myth of AGI serves their interests more than anybody else’s.
Yes, the economic engine behind all this, the potential to make money, is what really supercharges everything and lifts it out of niche communities.
One thing that struck me recently is that LLMs are necessarily limited by what's expressible with existing language. How can this ever result in AGI? A lot of human progress required inventing new language to represent new ideas and concepts. An LLM's only experience of the world is what can be expressed in words. Meanwhile, even a squirrel has an intuitive understanding of the laws of gravity that is beyond anything an LLM can ever experience, because it's stuck in a purely conceptual silicon prison.
> Meanwhile, even a squirrel has an intuitive understanding of the laws of gravity that is beyond anything an LLM can ever experience, because it's stuck in a purely conceptual silicon prison.
I just don't think that's true. People used to say this kind of thing about computer vision - a computer can't really see things, only compute formulas on pixels, and "does this picture contain a dog" obviously isn't a mathematical formula. Turns out it is!
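To make that concrete, here is a toy sketch in Python of why "does this picture contain a dog" can literally be a formula: a small neural net is nothing but matrix multiplies and a nonlinearity. The weights below are random placeholders, so the output is meaningless; the point is only that the entire pipeline is arithmetic, and training just picks better numbers.

```python
import numpy as np

rng = np.random.default_rng(42)
pixels = rng.random(32 * 32)  # stand-in for a flattened 32x32 grayscale image

# Random (untrained) weights for a two-layer network.
W1, b1 = rng.normal(size=(64, 32 * 32)), np.zeros(64)
W2, b2 = rng.normal(size=(1, 64)), np.zeros(1)

hidden = np.maximum(0, W1 @ pixels + b1)  # ReLU(W1 x + b1)
logit = W2 @ hidden + b2                  # a single dog-vs-not score
p_dog = 1 / (1 + np.exp(-logit))          # sigmoid squashes it to [0, 1]

print(f"P(dog) = {p_dog[0]:.3f}")  # meaningless with random weights
```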
I don't know why you would think that the model can't create new language. That is a trivial activity. For example, I asked GPT-5 to read the news and make a new word.
Wattlash /ˈwɒt-læʃ/
n. The fast, localized backlash that erupts when AI-era data centers spike electricity demand—triggering grid constraints, siting moratoriums, bill-shock fears, and, paradoxically, a rush into fixes like demand-response deals, waste-heat reuse, and nuclear/fusion PPAs.
They experience the world through tokens, which can contain more information than just words. Images can be tokenized; so can sounds, pressure sensors, and so on.
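As a sketch of what "images can be tokenized" means, here is ViT-style patch embedding in Python: chop the image into fixed-size patches and project each one to a vector that then plays the same role as a word token. The sizes and the random projection are illustrative, not any particular model's.

```python
import numpy as np

rng = np.random.default_rng(7)
image = rng.random((64, 64, 3))  # stand-in for a 64x64 RGB image
P, D = 16, 128                   # patch size, embedding dimension

# Split into (64/16)^2 = 16 patches and flatten each to a vector.
patches = np.stack([image[i:i + P, j:j + P].reshape(-1)
                    for i in range(0, 64, P)
                    for j in range(0, 64, P)])  # shape (16, 768)

# In a real model this projection is learned; here it is random.
W = rng.normal(scale=0.02, size=(P * P * 3, D))
tokens = patches @ W  # shape (16, 128): sixteen image "tokens"

print(tokens.shape)
```

Audio and sensor streams work the same way: slice the signal into frames, project each frame, and the downstream transformer never needs to know which modality a token came from.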
> And there it is: You can’t prove it’s not true. “The idea that AGI is coming and that it’s right around the corner and that it’s inevitable has licensed a great many departures from reality,” says the University of Edinburgh’s Vallor. “But we really don’t have any evidence for it.”
That's the most important paragraph in the article. All of the self-serving, excessive exaggerations of Sam Altman and his ilk, predicting things and throwing out figures they cannot possibly know. "AI will cure cancer, and dementia! And reverse global warming! Just give more money to my company, which is a non-profit and is working for the good of humanity. What's that? Do you mean to say you don't care about the good of humanity?" What is the word for such behaviour? It's not hubris; it's a combination of wild prophecy and severe main character syndrome.
I heard once, though I have no idea if it's true, that he claims he carries a remote control around with him to nuke his data centres if they ever start trying to kill everyone. Which is obviously nonsense, but is exactly the kind of thing he might say.
In the meantime they're making loads of money by claiming expertise in a field which doesn't even exist and, in my opinion, never will, and that's the main thing, I suppose.
> I heard once, though I have no idea if it's true, that he claims he carries a remote control around with him to nuke his data centres if they ever start trying to kill everyone.
That would be quite useless even if it exists, since now that you've said it, the AGI-SGI-AI-something will surely know about it and take appropriate measures!
Oh no! Someone better phone up Sam Altman and warn him of my terrible blunder. I would hate to be the one responsible for the destruction of the entire universe.
There is...chanting in team meetings in the US?
Has this been going on for long, or is this some new trend picked up in Asia or something like that?
[1] https://www.mercurynews.com/2020/11/25/theranos-founder-holm...
This is a meme that will keep on giving.