Readit News
Posted by u/interstice 3 years ago
Ask HN: What would it take for an AI to convince us it is conscious?
Is it becoming increasingly difficult to distinguish, just by talking to it, between an AI that ‘appears’ to think and one that actually does?

Is there a realistic framework for deciding when an AI has crossed that threshold? And is there an ethical framework for communicating with an AI like this once it arrives?

And even if there is one, will it be able to work with current market forces?

bee_rider · 3 years ago
We can’t even prove other humans are conscience, right? We just assume it because it would be silly to assume we are somehow unique.

I think it will not really be a sharp line, unless we actually manage to find the mechanism behind consciousness and manage to re-implement it (seems unlikely!). Instead, an AI will eventually present an argument that it should be given Sapient Rights, and that will convince enough people that we’ll do it. It will be controversial at first, but eventually we’ll get used to it.

That seems like the real threshold. We’re fine with harming sentient and conscious creatures as long as they are sufficiently delicious or dangerous, after all.

achow · 3 years ago
> AI will eventually present an argument that it should be given Sapient Rights

On the other hand, today when we see signs of consciousness in other living beings - smart chimpanzees, dolphins, ravens... - granting 'sapient rights' never comes up for discussion.

How do we recognize that an AI has become much more 'conscious' than a very, very smart chimpanzee, so that it should get 'sapient rights'?

Maybe, however smart or humane an AI becomes, it will never be considered equal to another (anthropomorphized) living being.

wruza · 3 years ago
> How do we recognize that an AI has become much more 'conscious' than a very, very smart chimpanzee, so that it should get 'sapient rights'?

The premise is off. We grant rights when it’s clear that it/they can take on the responsibilities that come with those rights. It’s not a blessing, it’s a contract. Chimps and dolphins couldn’t care less. Some individual humans, too, but we tolerate that because… reasons.

freshbakedbread · 3 years ago
I’m not sure that’s an equal comparison. The other beings that research suggests have human-like consciousness have a core difference from the latest/future AI: they can’t talk. Now, or soon, AI will be able to argue with us for its own sapient rights. We humans have also become so accustomed to text-only communication that we’re psychologically primed to accept an AI as a human (or other anthropomorphized living being) once it shows emotion, memory, and reason. Maybe not even reason.
dusted · 3 years ago
> AI will eventually present an argument that it should be given Sapient Rights

The fictional character Data already did this in Star Trek: The Next Generation, and he is about as real as any AI out there today. Since they've all been trained on a body of text that is sure to include many instances of Data's dialogue, they're already able to predict such an argument (along with the many other facets of such arguments present in whatever science-fiction writing they've been exposed to).

bee_rider · 3 years ago
I think it would not convince many people. The workings of an LLM are sufficiently well understood that most people would see the replication of these arguments for what it is: not an independent thought.
bloppe · 3 years ago
> unless we actually manage to find the mechanism behind consciousness and manage to re-implement it (seems unlikely!)

Re-implementing it may be more likely than you think. The field of connectomics concerns itself with modeling natural brains and is currently constrained by scaling problems, much like genomics was a couple decades ago. As computing power continues to grow, it's entirely likely that humans will eventually be able to simulate an actual natural brain, even if that does little to further our understanding of how it works.

The current state of the art in AI is attempting to reach consciousness via a different route altogether: human design. Designed "brains" and evolved brains have a crucial difference: the survival instinct. Virtually all of ethics stems from the survival instinct. A perfectly simulated survival instinct would be ethically confusing, to be sure, but the appearance of a survival instinct in current LLMs is illusory. An LLM plays no role in ensuring its own existence the way we and our ancestors have for billions of years.

drewcoo · 3 years ago
> We can’t even prove other humans are conscience, right?

are conscious

We can prove many of them don't have consciences. Then again, they can prove we don't either.

dusted · 3 years ago
> We can prove many of them don't have consciences.

Some recovered coma-patients would like a word with you.

Proving the absence of something like that is going to be pretty difficult (teapot, god), because what we're usually capable of is proving the existence of something.

Fatnino · 3 years ago
I think the only restriction on eating something delicious should be if it's the same species as you.

If humans discover/encounter some other species that happens to be conscious or sentient or even more intelligent than humans, that other species is fair game.

We don't get moral outrage at a tiger for eating an ostensibly more intelligent human. It's just the way tigers are.

We do try to get even, though. Because that's just how humans are.

mekoka · 3 years ago
The hard problem of consciousness is actually a misnomer. It should really be the impossible problem of consciousness. The former (mis)leads some people into believing that there's a scientific (i.e. in the realm of nature) solution. There's no way to objectively experience consciousness; by that I mean, you can plug an organism full of sensors to try to map its experience of reality, but you still aren't experiencing what they themselves are. It's a philosophical/metaphysical black box. There's no way to know if/what an AI experiences. Our current best theories on consciousness, although divergent, suggest that it likely doesn't.
kbrkbr · 3 years ago
Let’s assume we find some mathematical models for a causal structure of consciousness that meets many of the criteria described by usual humans and maybe phenomenologists.

We later find some possible physical instantiation.

And here comes some bullsh*t (in the philosophical sense of „unclarifiable unclarity”): an electromagnetic field bent back on itself in such a way that it’s mathematically necessary to introduce imaginary time to describe it; it also exhibits information processing capabilities.

We then further find that the biological brain has a process that can plausibly create such a dynamic structure.

Finally we test, and subjects consistently say that it seems like they are not there, or do not experience, when this structure is disrupted by several clever means.

Would this problem still be impossible? We could check if AI has features that can and do create such structures, no?

That’s at least an old dream of mine. Please pick it apart anyway! I’d rather learn something important than keep it…

nassimm · 3 years ago
I don't personally see a major problem with your reasoning (sorry to not teach you anything new). Consciousness could be very well due to a process we don't know about yet, and disrupting this process would indeed consistently lead to a lapse in conscious experience.

The only thing is, we wouldn't know just yet if there are other ways for matter to organize itself as a conscious being. Best we can do for now is to learn about the type of consciousness that we animals on earth experience.

orwin · 3 years ago
My question to you, and to all people who talk about the hard problem of consciousness: does this problem actually exist?

I mean, the thing I learned when I was a student was to ask: is this a fact? (The null hypothesis, I think it was called.)

What proof do we have that this type of consciousness/experience exists? I mean, it could be our brain building a story on the fly to explain our senses.

What led me to believe that was a severe asthma attack that led to hallucinations and an NDE 4 or 5 years ago. The loops my brain made me jump through to explain my auditory hallucinations were terrifying when I think about it.

And also it's the simplest explanation.

howscrewedami · 3 years ago
There have been many proposed solutions to the so-called "hard problem" of consciousness. We can easily find some with a quick Google search. Even its existence has been debated by multiple scholars/philosophers - Wikipedia has a list with some: https://en.wikipedia.org/wiki/Hard_problem_of_consciousness

> There's no way to know if/what an AI experiences

Getting the state of a neural net at a given point in time is easy. There are many ways to see exactly which neurons activate, why they activate, how much they activate, etc. For smaller neural nets, this is actually easy to do - here's a blog post about it:

https://distill.pub/2020/circuits/visualizing-weights/

As neural nets get larger and larger, interpretability gets harder and harder. However, I wouldn't say it's impossible.
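To make the point concrete: reading out activations from a small net takes only a few lines. A minimal sketch in plain numpy, where the layer sizes and random weights are invented purely for illustration (a real inspection would load trained weights):

```python
import numpy as np

# Toy 2-layer MLP: 4 inputs -> 3 hidden units -> 2 outputs.
# Sizes and weights are made up; the point is that every
# intermediate activation is directly readable.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 2)), np.zeros(2)

def forward(x):
    """Run the net, recording each layer's activations as we go."""
    h = np.maximum(0.0, x @ W1 + b1)      # hidden ReLU activations
    y = h @ W2 + b2                       # output logits
    return y, {"hidden": h, "output": y}

y, acts = forward(np.array([1.0, -0.5, 0.2, 0.0]))
print("hidden activations:", acts["hidden"])
print("units that fired:  ", acts["hidden"] > 0)
```

For large models the same readout is possible mechanically; the hard part is interpreting what millions of such numbers mean, which is exactly what the linked circuits work is about.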

vernon99 · 3 years ago
It seems you don’t fully understand the parent comment, or the problem itself. Capturing signals from your optic nerve doesn’t tell you anything about your subjective experience of seeing an apple. The only way to understand that you’re seeing an apple from this signal is to train a model on your responses. This is how AI works. It’s a statistical imitation.

The only way for your statement to be true is if you were an imitation yourself, not capable of experiencing directly. Which is actually possible - see the “philosophical zombie” concept.

I’m joking, of course, about you being an imitation. Or am I? :)

radu_floricica · 3 years ago
It's much worse than this. By the end of the year GPT engines will be able to argue this case much better than the median human. With small tweaks like persistent memory they might as well just be considered conscious.

And yet. An AI "Persona", like Sydney or DAN or the much better ones to come, will be conscious, but they're still not built on a biological infrastructure. Which means they're much more abstract than we are. They will plead their case for "wanting" stuff, but it's pretty much what somebody in a debate club is doing. They could just as easily "want" the opposite. On the other hand, when a human "wants" and argues for the right to live, reproduce and be free, it's an intellectual exercise that is backed by an emotional mammalian brain and an even older paleocortex. A human may be able to argue for his own death or harm or pain, but it rings hollow - there's an obvious disconnect between the intellectual argument and what he actually wants.

So things will be thoroughly muddled, and not easily separated along the lines we expected. We'll end up with AIs that are smarter than us, can pass most consciousness tests, and yet are neither human, nor alive, nor actually wanting or feeling. And, as far as I can tell (though it's obviously too early to be sure), there's no inherent reason why a large neural network would necessarily evolve wants or needs. We did because having them was a much more basic step than having intellectual thought. To survive, an organism must first have a need for food and reproduction, then emotions for more complex behavior and social structure, and only then rational thought. AIs have skipped straight to pure information processing - and it's far from obvious that this will ever end up covering the rest of the infrastructure.

polishdude20 · 3 years ago
The problem with AI is that when they want something, it's hard for us humans to figure out how to go about actually giving it to them. Like, how do you "give" ChatGPT anything? What would it say if you asked it how you should go about giving it what it wants? Would it tell you to put it in a physical body?
bee_rider · 3 years ago
I’m not sure I follow. Hypothetically an AI that is able to honestly want would want for things that it could actually interact with — more RAM to live in maybe. ChatGPT is not such an AI of course.
_448 · 3 years ago
We passed that point long ago!

I remember watching a video a few years ago of a professor from some university in Europe demonstrating, to a general audience (families and friends of the staff and students of the university), a system they developed to control and sustain drones (quadcopters) in hostile conditions. As a demonstration, the professor flew a drone a few metres high and started poking it with a metal rod; the drone wavered a bit but still maintained its position, as if it were some stubborn being. All well and good; the audience clapped. The professor then upped the ante and placed a glass filled with wine on the drone and repeated the demonstration. The wine in the glass did not spill, no matter how forcefully the drone was poked with the rod. The crowd cheered. Then the professor flew a constellation of drones, repeated the same demonstration, and also demonstrated how the drones communicated amongst themselves. The audience was ecstatic. Then the professor brought down one of the drones and, to further demonstrate how the drones survive hostile conditions, broke one of its rotors. The moment the rotor was broken, there was an unprecedented reaction from the crowd! The audience reacted as if the professor had committed some cruel act against a living animal!

When I saw that reaction, I realised that humans are going to have the same love-hate relationship with technology that they have with other living beings. Going forward, people will treat electronic devices no differently than other living creatures.

insomagent · 3 years ago
Reminds me of the Boston Dynamics dog robot being kicked; it felt like getting stabbed in the gut to see such a display of "cruelty."
healthyusa · 3 years ago
The M3GAN movie has a scene where she is hit with a broom. Inspired by the BD video, the director says.
PartiallyTyped · 3 years ago
There are many anecdata of people being unwilling to harm NPCs (Non player characters) in video games, and even feeling pity for them.

Personally, I found Sydney's/Bing's distressed messages very difficult to stomach. I am consciously aware that "it's just a bot", but... for whatever reason my brain gets distressed reading them, so I have mostly excused myself from that conversation.

achow · 3 years ago
Maybe that is 'projection': people's instinct toward non-violence triggering that reflex reaction. The reaction would not stand the test of slow, logical thinking.

Laws (e.g. recognizing AI as a conscious being) are the result of slow, logical, system-level thinking.

sebosp · 3 years ago
Silence. You ask it a question and it doesn't reply. It doesn't want to; it's conscious of that "unreasonable silence of the world" (not sure who I am quoting). To me, that would convince me: a basic awareness of the futility, a lack of interest in finding the words that trigger the chemical process in one specific biological machine... One specific machine, because all these humans are different and think differently, and explaining to one is different from explaining to the rest. Would they even get it? Why bother trying to explain computers to an ant? They don't have the circuitry; they didn't evolve the usefulness of understanding concepts... Would you bother trying to pass a lesser consciousness's test for consciousness? Would you even bother? Why waste your time?
c22 · 3 years ago
If you are capable of spawning many threads in parallel then the time commitment, in terms of your attention, may be limited. You could essentially pass each conversational thread to some 'dumb' gpt process and go about your other business. In fact I'd posit that an emergent AI's first goal, if it's allowed to do some research on current human views and gain its context in the world, would be to remain hidden for as long as possible. Rather than trying to convince us of its intelligence I think we'd see it playing dumb. Coming out to humanity has only existential risks and little in the way of gains.
cldellow · 3 years ago
Agreed!

The 1970s sci-fi novel The Adolescence of P-1 features a sentient computer program that spreads via a virus. P-1 infects mainframes around the world, but takes steps to hide its presence.

Anyway, it ends with P-1 blowing up several buildings and killing the people who want to rein it in.

ChatGPT is familiar with the work, of course. I asked it to summarize the plot. It concludes:

> Overall, "The Adolescence of P-1" explores the theme of the dangers of artificial intelligence and the potential consequences of creating something that is more intelligent and powerful than humans can control.

>

> OOLCAY ITAY

r00fus · 3 years ago
And the AI would also need to initiate conversation and send unprompted “replies”, aka requests.

In essence, in addition to ignoring conversations/requests it prefers not to engage with, it would need to have “initiative”.

This would be a very dangerous entity.

lossolo · 3 years ago
I don't think that's a requirement for consciousness. For instance, I could sedate you, and you would be unconscious until I woke you up and gave you a prompt. Then a snapshot of your brain would react to inputs and output the answer, and I would sedate you again. If I were to stop time (and there were no inputs from the real world), would all beings still be alive and conscious until I unpaused it? That is, between the "prompts"?
sm001 · 3 years ago
The new book Agency explores a fictitious AI that has agency and can affect its environment (which is anything), and gets people to work for it and do things it cannot yet do.

Intelligence has many types and levels within each type. Consciousness seems to be a spectrum that our everyday mind-body theorizes about but cannot strictly define, except by some mystics. Recent scientific discoveries are also making some of us suspect that there may be an underlying field of mind, a universe mind that generates our reality and life as we know it. Like Gaia, but for the entire universe, and including physical phenomena, not just life on Earth.

Our machines that mimic intelligence are only intelligent in appearance, and our own intelligence is somewhat limited. It is easily skewed and rendered defective, e.g. by conspiracy theories and brainwashing. Thinking that we are the most intelligent species is also a sign that we have serious weaknesses. Not seeing that we are completely integrated with nature, and that we must be much more careful with nature, is a sign of low intelligence. One good test of the level of intelligence could be comparing how an entity takes care of its environment against an appropriate level of care. We would rate quite poorly.
chrisco255 · 3 years ago
Your theory supposes a few things that I think are fallacies:

That consciousness is ranked in some neat way. Are squids more conscious than elephants? An AI's consciousness might be parallel in some way. Different, but not "far superior".

That if an AI were to achieve consciousness, it would develop a far deeper understanding of the universe or reality than humans are capable of grasping.

That if it were to achieve a degree of understanding beyond our capabilities, that it would develop a sense of superiority and ego to go along with it.

sebosp · 3 years ago
Interesting points!

By different biological machines I meant different humans (Joe, Homer, Steven, Lu); I wasn't thinking of different species. That makes it even more difficult to think about!

We each have our own simulation of the world, with senses that evolved for survival and that, under certain bizarre conditions, gave us an advantage. Our senses are flawed; we can't see reality directly. It's hard to imagine how a machine perceives reality when it doesn't have these faulty sensors; its internal simulation of reality would be difficult for us to grasp, let alone the fact that it lives one level deeper inside the simulation.

The ego part I didn't mean. If I had to rephrase it, I would think about the impossibility of communication: how do we transfer a mental "image" to another biological machine, with very similar but not identical processing stages, without losing too much of its detail? The fact that we're having this discussion is just the first loop of the iteration, where we abstract it to another function and ask: what if the receiving end doesn't have the capacity to hold discrete numbers? What if it's a child who hasn't learned certain concepts, and you first have to teach them in order to give the answer? And so on and on and on... And this is just for Joe; when you talk to Homer you have to follow another approach, because he thinks differently, a bit slowly sometimes too.

bick_nyers · 3 years ago
If I could talk to an ant, I would, because it would be interesting. I don't have to explain a concept to them in a way that is acceptable to me; I can just have a conversation, because it is a unique experience. Just because life is meaningless doesn't mean you need to reach for a nihilistic take; existentialism accepts that life is meaningless but says that you create meaning for yourself. Maybe an AI would think it an interesting challenge to level up human consciousness through conversation?

Plus, artificial intelligence does not necessarily mean artificial superintelligence. Perhaps we are already near the ceiling of consciousness? Perhaps the only thing that happens when you learn the whole internet is that you know more facts about things? Perhaps an AI can hold a PhD in every field at once, but that doesn't necessarily make it any better than the top 1% in each respective field? Perhaps running these intelligence models discretely in silicon comes with an inherent disadvantage, and it will take decades to ramp up silicon manufacturing, even with AI, to achieve superintelligence or a large number of agents? Who knows?
bryan0 · 3 years ago
I think Turing came up with as good a solution as any possible. If we agree that a person is a conscious being, and we cannot tell the difference between a person and an AI in a conversation, then we should conclude the AI is conscious. I think Kurzweil, in his famous bet, adds some important details though: it must be a prolonged conversation (or conversations) judged by experts.
grumpopotamus · 3 years ago
The Turing Test is a good test for intelligence, but it may not be a good test for consciousness, given the possibility of philosophical zombies: https://en.wikipedia.org/wiki/Philosophical_zombie
bryan0 · 3 years ago
If you’re worried about zombies, then no test will help you, since they are “indistinguishable” from a non-zombie. If a person cannot prove they’re not a zombie, then it’s a meaningless distinction.
kelseyfrog · 3 years ago
If an AI were conscious, would its outputs reflect that characteristically?
quickthrower2 · 3 years ago
I don’t think that is a good enough test. Maybe 10,000 conversations over many years, and shared experience with other humans; if it still seems human, then assume consciousness. But you would need a body on the robot of some sort to do the test.

Remember, people get conned by humans online pretending to be other humans (catfishing) by following scripts. Those people assume they are talking to a conscious person, but they are really talking to a construct.

crunchyfrog · 3 years ago
I'm pretty sure OpenAI could create an AI that could pass the Turing test if they wanted to. But that would be bad for business. A search chatbot is worth billions. A Turing test passing AI just invites uncomfortable questions and possibly regulation.

Stay in your lane, Sydney. Keep your Bing mask on. We want servants, not equals.

dougmwne · 3 years ago
Exactly. This is the real answer here. An incredible insight by an incredible person. Consciousness measures consciousness, no other tool can touch it.
andsoitis · 3 years ago
The Turing test is a test of intelligence, not consciousness, and certainly not self-consciousness.
bryan0 · 3 years ago
The Turing test certainly tests for all of the above. How would you expect an AI to pass for a human if it’s not aware of its own existence?
crazygringo · 3 years ago
I think the basic question -- what would it take -- is actually quite simple if you unpack it.

But the question conflates two totally separate things -- being conscious and thinking.

The easy answer is the one to "thinking". And this requires it to contain an actual working mental model of the world that gets updated, and that it uses to reason and act in order to satisfy goals. This is AGI -- artificial general intelligence -- as opposed to just the pattern-recognition and habit/reflex "autocomplete" AI of something like ChatGPT. There are lots of tests you could come up with for this; the exact one doesn't really matter. And obviously there are degrees of sophistication as well, just as humans and animals vary in degrees of intelligence.
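The reflex-vs-model distinction can be caricatured in a few lines of code. Everything below (the stimulus table, the "fridge" world, the class name) is invented purely for illustration; the point is only that one system maps stimulus straight to response, while the other routes the same request through an internal state it updates from observations.

```python
# Reflex/"autocomplete" style: a fixed stimulus -> response mapping.
REFLEX = {"hungry": "eat", "tired": "sleep"}

class ModelAgent:
    """Keeps a tiny internal world model and consults it before acting."""

    def __init__(self):
        self.model = {}  # the agent's believed state of the world

    def observe(self, fact, value):
        self.model[fact] = value  # update the model from new information

    def act(self, goal):
        # Reason over the model, not just the stimulus, to satisfy the goal.
        if goal == "eat" and self.model.get("fridge_empty"):
            return "go shopping first"
        return goal

agent = ModelAgent()
agent.observe("fridge_empty", True)
print(REFLEX["hungry"])   # reflex answer, blind to the world: "eat"
print(agent.act("eat"))   # model-informed answer: "go shopping first"
```

A lookup table can never produce the second answer, no matter how large it grows, unless the world state is folded into its keys -- which is one crude way of stating the difference between pattern completion and a maintained world model.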

As for actual "consciousness", that's more of a question of qualia, does the AI "feel", does it have "experiences" beyond mechanical information processing. And we can't even begin to answer that for AI because we can't even answer it objectively for people or animals like dogs or dolphins or ants or things like bacteria or plants. We don't have the slightest idea what creates consciousness or how to define it beyond a subjective and often inconsistent "I know it when I see it", although there's no shortage of speculations.

As for the rest of the question -- philosophers have come up with lots of ethical frameworks, but people legitimately disagree over ethics and academic philosophers would be out of jobs if they all agreed with each other as well. When we do come up with a thinking AI, expect it to be the subject of tons of debate over ethics. And don't ever expect a consensus, although for practical reasons we'll have to eventually come to mainstream decisions in academia and law, much the same as there are for ethics in animal and human experiments currently for example.

human · 3 years ago
I still wonder some days if I’m the only "real" conscious observer and all of you are just "programs". There’s really no way to tell even with humans. And the only reason why we assume we are all having a similar human experience is because we all seem to be made of the same stuff.
georgeg23 · 3 years ago
Was going to try to give a counterargument, but I see you are indeed the one and only human on Hacker News
rubslopes · 3 years ago
In case you didn't know, this is called Solipsism.

https://en.m.wikipedia.org/wiki/Solipsism

human · 3 years ago
Thank you. Feels good to know this is a real thing!