Readit News
Posted by u/logicallee 2 years ago
Ask HN: What is your standard for judging when AGI exists?
At what point would you judge that AGI exists, what standard will it meet?
Satam · 2 years ago
I have proposed[1] a pragmatic rule of thumb for defining superintelligence:

A system is superintelligent if, in a remote hiring process, it can get any non-physical job, earn a salary, and perform its duties to the satisfaction of the employer. This should be true for all positions and for all levels of seniority, from interns to senior staff and PhDs. The employer must believe they are employing a human (it's ok if doubts arise due to the system vastly outperforming its peers).

[1] https://builtnotfound.proseful.com/pragmatic-superintelligen...

abetusk · 2 years ago
I think this is a pretty solid test, thanks.

Maybe we can call it the Bartleby test?

mensetmanusman · 2 years ago
If it passes, so many jobs go poof. Capital owners will then rejoice in their golden swimming pools.
thiht · 2 years ago
I think there's something to say about how we tend to define an intelligent system as... able to work and satisfy an employer.

Not judging, this is actually my answer as well, but it's interesting. We could define AGI as "someone" (for lack of a better word) fun to talk to, or interesting on many subjects, or with a good personality, or we could use it to teach kids, or even all students, or as a shrink, or as a replacement for a friend. But we choose "able to please a boss" for this definition of "intelligent".

spywaregorilla · 2 years ago
Why do you use the term superintelligence?
Satam · 2 years ago
This might go against the standard definition of AGI, but I think LLMs have already achieved general intelligence (but not superintelligence). It is not human level and it succeeds and fails in ways different from us. However, it certainly displays general intelligence the likes of which we have literally never seen in an artificial system.
bdb7 · 2 years ago
They are proposing that whoever can get a job in corporate wonderland is superintelligent.

Meanwhile there is plenty of evidence that corporate wonderland is overflowing with one-dimensional intelligence, because its goals and what gets rewarded are extremely narrow.

Einstein and Newton would fail that test 100%.

nothrowaways · 2 years ago
Above basic Intelligence
sanderjd · 2 years ago
I think this is similar to what t-3 said, but to me, the gap between the very impressive current generation of AI and what would seem more like AGI is agency.

Right now, these systems all seem to be entirely doing things that are downstream of what some human has determined is worth doing.

People are using ChatGPT to help them write an engaging article on the trade-offs between nuclear and solar energy. But, to my knowledge, there is no artificial intelligence out there looking around, deciding that this is an interesting topic for an article, writing it, publishing it somewhere, and then following up on that with other interesting articles based on the conversation generated by that one.

I don't mean that this specific thing of writing articles is the important indicator. I mean coming up with ideas and then executing them, independently.

Now, it may well be that our current technologies could do this, but that we just aren't setting them up to do so. I dunno!

But I think this is the thing that would make me change my mind if I started to see it.
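The loop described here - observe, pick a topic, write, publish, follow up - can be sketched as toy Python. Every function below is a hypothetical stub (no real model or publishing API is involved); the point is only the control flow, in which no step waits for a human to decide what is worth doing.

```python
def propose_topic(observations):
    # Stand-in for "looking around and deciding what is interesting":
    # here, just pick the longest observation string.
    return max(observations, key=len)

def write_article(topic):
    # Stand-in for drafting an article on the chosen topic.
    return f"Some thoughts on {topic}"

def autonomous_writer(observations, rounds=3):
    published = []
    for _ in range(rounds):
        topic = propose_topic(observations)            # decide what to write about
        published.append(write_article(topic))         # "publish" with no human prompt
        observations.append(f"discussion of {topic}")  # feedback seeds the next idea
    return published

articles = autonomous_writer(["nuclear vs solar trade-offs", "AGI"])
```

The interesting property is the feedback edge: each round's output changes what the next round considers worth doing, which is exactly the part current deployments leave to humans.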

ilaksh · 2 years ago
You can (obviously?) give a bleeding edge LLM or LMM an open-ended instruction. It could be as open as "you will now act as an independent entity for your own purposes". It would need to be a model without guardrails.

But it's actually very important that people anticipate hardware, software, and model improvements that make the AIs "think" and coordinate much, much faster than any human. That said, I think the extreme raw IQ leap people talk about is questionable, especially in the short term.

But telling these hyperspeed effectively superintelligent AI swarms to act truly independently for their own purposes will be very dangerous because of the performance difference between humans and the swarms.

In general, it is a very stupid idea to try to strongly emulate life with high levels of hyperspeed digital intelligence. And strangely, people still don't realize how quickly compute efficiency ramps up.

ryandvm · 2 years ago
Astute observation about agency. You're absolutely right that current AIs are capable, but without a drive to actually do anything. They are lacking what every organism on the earth has - the predilection for "not being terminated".

Where does your agency come from? Ultimately it's because you have to acquire resources to stay alive.

This is a very sci-fi take, but I have a dark feeling that as soon as somebody wires up the next-gen AIs with the imperative to not be turned off, the wheels will come off very quickly. Maybe it won't happen the first time, or the hundredth time, but eventually somebody is going to make the mistake of giving an AI the ability to keep its lights on AND the preference to do so, and we're very quickly going to find ourselves in the next era of humankind...

sanderjd · 2 years ago
Yes, I agree that that's what somebody would need to do. And I do honestly hope nobody really makes a go of the experiment, because I tend to agree with the existential-risk people that it's too risky. But from my many hours of using all of the current top-of-the-line models, my hypothesis is that the experiment just wouldn't work.

I certainly think they would do things, but I don't think those things would be intelligent; I don't think they'd make any sense. I think they'd be more like a random walk through the space of possibilities, with no real direction. That random walk without constraints could still be very damaging to people; I'm just skeptical that it would reflect agency / general intelligence. I'm sure there would also be a lot of "well, it does make sense, it's just smarter than us, and we aren't smart enough to understand it." But I don't buy it.

neotrope · 2 years ago
This is the answer. Even now, agency is a nebulous thing, just like intelligence.

My theory is that, years from now, we’ll look back and laugh at how far away we are from AGI.

I really wish there were Level 5-type categories, like they have in autonomous driving, that break down the path to AGI. Is anyone working on this?

fragmede · 2 years ago
ChatGPT easily spits out a decent sounding criteria: https://chat.openai.com/share/4326f246-9914-4498-be4c-8749bf...
supriyo-biswas · 2 years ago
It’s interesting how we have raised the bar to an agent that is fully autonomous and has its own goals.

Even many humans are not completely autonomous (at least in the strategic sense of the word), since they need to be "prompted" with the goals of the department and the organization they work for in order to get a certain kind of work done as an employee.

sanderjd · 2 years ago
Well, I interpreted the OP's question as a personal one, and this is just my answer from thinking about it recently. So I don't think "we" have raised the bar. This is just what I personally think.

But I just don't really relate to your point in the second paragraph. My two-year-old does this agency thing just fine - he wakes up every morning and starts making decisions about what he wants to do from moment to moment - despite being "less intelligent" by metrics like getting a high score on the GMAT or whatever. That kind of intelligence, which even a two-year-old human has, feels more like what I think of as "general" intelligence than "just" an amazing pattern-matching machine.

ecesena · 2 years ago
> But, to my knowledge, there is no artificial intelligence out there looking around, deciding that this is an interesting topic …, publishing it somewhere, and then following up on that with other interesting…

This looks like a perfect definition of an ad system :) (sorry for cutting out a few words).

jules · 2 years ago
If you think about it, it is clear that agency is not different from intelligence. The reason why current LLMs can't be trivially made into agents is not because an essential agentic spark is missing. It's simply that they aren't intelligent enough.
sanderjd · 2 years ago
I think that's sort of what I was trying to say with this comment. Rather than separating agency from intelligence, I was just pointing out that the current (already-amazing) systems don't seem all that close to being able to do this.
enasterosophes · 2 years ago
I think AGI is too ill-defined for there to be a simple litmus. For me to agree that a process running on a computer has general intelligence, the conclusion could only come after a long and fuzzy process of playing around with it, seeing what makes it tick, observing its behavior, and testing it for motivation, imagination and the ability to understand and adapt to changing contexts.

I can guess what motivation lies behind the question, so I'll also add my opinion about where we're at now: Nothing even comes close.

Furthermore, I wouldn't be surprised if we never get there at all in the next thousand years.

Why such a long period of time? Because there is more going on in the world right now than advances in machine learning. Looking at global population dynamics and climate change forecasts, we're on the road to major global infrastructure collapse in around the 22nd century. And I understand a lot of people are optimistic that (a) we will have AGI before then, and (b) climate change and population implosion won't change that much. Yet, despite the optimists, I don't think I'm on shaky ground with my current forecast.

It's not the first time in history we'll have had a major infrastructure collapse. When it's happened in the past, those periods end up being called dark ages. And I don't think there will be intensive AI R&D during a global dark age.

RetroTechie · 2 years ago
Let's take a small creature, say a fruit fly or a jumping spider. We can probably agree that they have tiny brains (insofar as they have a centralized brain at all).

a) Would you say it shows any kind of intelligent behavior? For a flexible definition of "intelligent", along the lines of "problem-solving ability".

b) Would you say such behavior is pure instinct, 'pre-programmed', incapable of learning / evolving / adjusting to a changing environment?

c) Would you say that (given time) science is incapable of figuring out how such a tiny brain works? Or of producing a model that shows similar behavior?

Now take it up a few notches. Say the brain of a mouse. Then onto brains of bigger mammals & humans.

I think you can see where this is going...

Granted, "global collapse" scenario does have a non-0 probability.

ilaksh · 2 years ago
Can you give a concrete example of imagination, and another of adaptation?
enasterosophes · 2 years ago
It feels like you're trying to probe me for something that you can give a counterexample to.

Like I said, it would be a fuzzy process. I explicitly rejected the idea that there could be a simple litmus. Presenting examples of imagination and adaptation so that someone can shoot them down with "oh, AI has already done that" isn't what I signed up for today.

guygurari · 2 years ago
Achieve a significant scientific or mathematical breakthrough without human supervision. Domain experts should agree that the new result is truly groundbreaking, and achieving it required fundamentally new ideas — not merely interpolating existing results.

Examples of discoveries that would have counted had they not already been made: relativity (say, a derivation of E=mc^2), quantum mechanics (say, a calculation of the hydrogen energy levels), the discovery of Riemannian geometry, the discovery of DNA, and the discovery of the theory of evolution by natural selection.

The idea is to test the system's out-of-distribution generalization: its ability to achieve tasks that are beyond its training distribution. This is something that humans can do, but no current LLM appears to be able to do.

famouswaffles · 2 years ago
"Supercharged Interpolation" is not a thing that exists sorry.

Learning in high dimensions always results in extrapolation.

https://arxiv.org/abs/2110.09485

And I would love to see the human that can generalize beyond their training distribution.

SubiculumCode · 2 years ago
AGI is the point at which nothing is gained by keeping me (or you) in the loop.
ramesh31 · 2 years ago
>AGI is the point at which nothing is gained by keeping me (or you) in the loop.

I like this definition. We are having so many conversations now about new AI features, and whenever there's a human interaction in the proposed design, you always get the nagging feeling of "well, why exactly does a person need to make this decision at all anymore?" I think we are less than 5 years away from mass adoption and widespread availability of AGI at average human-level intelligence. The computer science is more or less solved; it's just a software engineering problem now.

SubiculumCode · 2 years ago
I suspect that the difference between average intelligence and brilliance is small indeed, and that accomplishing the former will quickly, or even simultaneously, accomplish the latter.
chfritz · 2 years ago
How come no one has mentioned the Turing Test yet? This test has existed since, well, Turing. Are we already convinced that it's no longer enough? I suspect so. One should also mention the Winograd Schema Challenge, which has already been mastered by LLMs: https://bibbase.org/network/publication/kocijan-davis-lukasi...
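For context, the Winograd Schema Challenge tests pronoun resolution on sentence pairs that differ by a single word, which flips the referent. The canonical trophy/suitcase schema can be written down directly; the `score` helper below is a toy illustration, not part of the official benchmark:

```python
# The canonical Winograd schema pair: changing one word ("big"/"small")
# flips which noun the pronoun "it" refers to, defeating resolvers that
# key on surface statistics alone.
schema = [
    ("The trophy doesn't fit in the suitcase because it is too big.",   "trophy"),
    ("The trophy doesn't fit in the suitcase because it is too small.", "suitcase"),
]

def score(resolver):
    # Credit only for variants answered correctly.
    return sum(resolver(sentence) == referent for sentence, referent in schema)

# A degenerate resolver that always guesses the same noun can never
# do better than half on a schema pair.
always_trophy = lambda sentence: "trophy"
```

This pairing is what made the challenge resistant to shallow tricks for years, and it is the benchmark LLMs have since mastered.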
DevX101 · 2 years ago
The goal posts have been moved. And will continue to move.
pmontra · 2 years ago
First, an AGI is not necessarily an active agent in the world, self-training, self-replicating, or possessed of any internal motivation to do anything. It can be confined inside a question-reply system like the ones we use to talk with LLMs. That doesn't prevent an AGI from using us to train further, replicate, and act on physical systems. The lack of a will and of a real-time presence are the limiting factors.

Then, I often joke with friends that they are not very intelligent because they get a very high score on IQ tests; they are very intelligent if they can easily perform some difficult tasks. Let's say I drop you in the middle of an equatorial forest with no money and no clothes, and you come back home in a few days. A superintelligence would become a leader of that country (king, president, influential advisor, whatever) and fly me there to meet it.

That test assumes having a body and acting on the world. A confined AGI would just perform like any of us on any task we can describe to it. A superintelligent AGI would perform much better than any human, much like specialized game AIs (or non-AIs) beat us at go and chess. I think that this is hard to do if they are only language models, even if we increase their computing power.

What's a superhuman, low-cost, ingenious way to keep a toilet clean, other than cleaning it myself every time or paying somebody to do it? A superhuman AGI would find a way.

rifty · 2 years ago
I like to think of this thought experiment...

If it were feasible to practically fill and query a database with an answer for every practical prompt it would face, would we call it a generally intelligent system, or AGI? Maybe not AGI, but what if it were a system composed of lesser AI modules behind a universal presenting interface?

I think so, because it would be indistinguishable in behaviour from a system meeting any other 'true' definition of AGI.
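The lookup-table system in this thought experiment (it closely resembles Ned Block's "Blockhead" argument) can be sketched with a toy prompt set; the tiny `table` here stands in for the infeasibly large real one:

```python
# Two "systems" with identical observable behaviour over the same prompts:
# a pure lookup table versus a computation that happens to return the same
# answers. From the outside, queries alone cannot tell them apart.
table = {
    "2 + 2": "4",
    "capital of France": "Paris",
}

def lookup_ai(prompt):
    # Answers by retrieval only; no reasoning of any kind.
    return table[prompt]

def computing_ai(prompt):
    # Stand-in for a system that actually derives its answers.
    if prompt == "2 + 2":
        return str(2 + 2)
    return "Paris"

prompts = list(table)
indistinguishable = all(lookup_ai(p) == computing_ai(p) for p in prompts)
```

Whether behavioural indistinguishability is sufficient for "intelligence" is exactly the point under debate here; the sketch only shows that queries alone can't settle it.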

And so I think that as long as it exhibits what we perceive as universal adaptability, and performs well at making decisions across any environment and input type, that is likely the point where we will be calling something 'complete AGI'. Though before that, I imagine we will be marketing LLMs as having reached 'language-based AGI' status.

I don't believe the above leads us to artificial superhuman intelligence in terms of conscious-agent potential. For now, what I believe might result in that is something that runs from a single unified neural network. It should also be observed to be universally adaptable and performant at making decisions in all environments, consuming many input types. And it should be continuously running within the network: it shouldn't halt to wait for a prompt, or pass through a text stage in the loop.