Is it becoming increasingly difficult to distinguish, just by talking to it, between an AI that 'appears' to think and one that actually does?
Is there a realistic framework for deciding when an AI has crossed that threshold? And is there an ethical framework for communicating with such an AI once it arrives?
And even if there is one, will it be able to work with current market forces?
I think it will not really be a sharp line, unless we actually manage to find the mechanism behind consciousness and manage to re-implement it (seems unlikely!). Instead, an AI will eventually present an argument that it should be given Sapient Rights, and that will convince enough people that we’ll do it. It will be controversial at first, but eventually we’ll get used to it.
That seems like the real threshold. We’re fine with harming sentient and conscious creatures as long as they are sufficiently delicious or dangerous, after all.
On the other hand, today when we see some sign of consciousness in other living beings - smart chimpanzees, dolphins, ravens... - giving them 'sapient rights' never comes into the discussion.
How do we recognize that an AI has become so much more 'conscious' than a very, very smart chimpanzee that it should get 'sapient rights'?
Maybe no matter how smart or humane an AI is, it will never be considered equal to another (anthropomorphized) living being.
The premise is off. We do that when it’s clear that it/they can take responsibilities that come with these rights. It’s not a blessing, it’s a contract. Chimps and dolphins couldn’t care less. Some individuals too, but we tolerate it because… reasons.
The fictional character Data already did this in Star Trek: The Next Generation, and he is about as real as any AI out there today. Since they've all been trained on a body of text that is sure to include many instances of Data's dialogue, they're already able to predict such an argument (along with the many other facets of such arguments present in whatever science-fiction writing they've been exposed to).
Re-implementing it may be more likely than you think. The field of connectomics concerns itself with modeling natural brains and is currently constrained by scaling problems, much like genomics was a couple decades ago. As computing power continues to grow, it's entirely likely that humans will eventually be able to simulate an actual natural brain, even if that does little to further our understanding of how it works.
The current state of the art in AI is attempting to reach consciousness via a different route altogether: by human design. Designed "brains" and evolved brains have a crucial difference: the survival instinct. Virtually all of ethics stems from the survival instinct. A perfectly simulated survival instinct would be ethically confusing to be sure, but the appearance of a survival instinct in current LLMs is illusory. An LLM plays no role in ensuring its own existence the way we and our ancestors have for billions of years.
> are conscious
We can prove many of them don't have consciences. Then again, they can prove we don't either.
Some recovered coma-patients would like a word with you.
Proving the absence of something like that is going to be pretty difficult (teapot, god) because what we're capable of is usually proving the existence of something.
If humans discover/encounter some other species that happens to be conscious or sentient or even more intelligent than humans, that other species is fair game.
We don't get moral outrage at a tiger for eating an ostensibly more intelligent human. It's just the way tigers are.
We do try to get even, though. Because that's just how humans are.
We later find some possible physical instantiation.
And here comes some bullsh*t (in the philosophical sense of "unclarifiable unclarity"): an electromagnetic field bent back on itself in a way that makes it mathematically necessary to introduce imaginary time to describe it; it also exhibits information-processing capabilities.
We then further find that the biological brain has a process that can plausibly create such a dynamic structure.
Finally we test, and subjects consistently report that it seems like they are not there, or do not experience, when this structure is disrupted by several clever means.
Would this problem still be impossible? We could check if AI has features that can and do create such structures, no?
That’s at least an old dream of mine. Please pick it apart still! I’d rather learn something important than keep it…
The only thing is, we wouldn't know just yet if there are other ways for matter to organize itself as a conscious being. Best we can do for now is to learn about the type of consciousness that we animals on earth experience.
I mean, the thing I learned when I was a student was to ask: is this a fact? (the null hypothesis, I think it was called).
What proof do we have that that type of consciousness/experience exists? I mean, it could be our brain building a story on the fly to explain our senses.
What led me to believe that is a severe asthma attack that led to hallucinations and an NDE 4 or 5 years ago. The loops my brain made me jump through to explain my auditory hallucinations were terrifying when I think about it.
And also it's the simplest explanation.
> There's no way to know if/what an AI experiences
Getting the state of a neural net at a given point in time is easy. There are many ways to see exactly which neurons activate, why they activate, how much they activate, etc. For smaller neural nets, this is actually easy to do - here's a blog post about it:
https://distill.pub/2020/circuits/visualizing-weights/
As neural nets get larger and larger, interpretability gets harder and harder. However, I wouldn't say it's impossible.
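To make "getting the state" concrete, here is a minimal sketch using forward hooks in PyTorch to record every intermediate activation for a given input. The toy model, layer sizes, and the activations dictionary are made up for illustration; this is not the tooling from the linked post, just the basic idea behind it.

    import torch
    import torch.nn as nn

    # Toy network purely for illustration; real models are vastly larger.
    model = nn.Sequential(
        nn.Linear(4, 8),
        nn.ReLU(),
        nn.Linear(8, 2),
    )

    activations = {}

    def save_activation(name):
        # A forward hook records a module's output every time it runs.
        def hook(module, inputs, output):
            activations[name] = output.detach()
        return hook

    # Attach a hook to each submodule so intermediate states can be inspected.
    for name, module in model.named_modules():
        if name:  # skip the top-level Sequential container
            module.register_forward_hook(save_activation(name))

    model(torch.randn(1, 4))

    for name, act in activations.items():
        print(name, act.shape)

Reading the numbers out is the easy part; interpreting what they mean as the network grows is the hard part, which is what work like the linked circuits post is about.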
The only way for your statement to be true is if you were an imitation yourself, not capable of experiencing directly. Which is actually possible, see the “philosophical zombie” concept.
I’m joking of course about you being an imitation. Or am I? :)
And yet. An AI "Persona", like Sydney or DAN or the much better ones to come, will be conscious, but they're still not built on a biological infrastructure. Which means they're much more abstract than we are. They will plead their case for "wanting" stuff, but it's pretty much what somebody in a debate club is doing. They could just as easily "want" the opposite. On the other hand, when a human "wants" and argues for the right to live, reproduce and be free, it's an intellectual exercise that is backed by an emotional mammalian brain and an even older paleocortex. A human may be able to argue for his own death or harm or pain, but it rings hollow - there's an obvious disconnect between the intellectual argument and what he actually wants.
So things will be a hell of a lot muddled, and not easily separated along the lines we expected. We'll end up with AIs that are smarter than us, can pass most consciousness tests, and yet are neither human, nor alive, nor actually wanting or feeling. And, as far as I can tell (though it's obviously too early to be sure), there's no inherent reason why a large neural network will necessarily evolve wants or needs. We did because having them was a much more basic step than having intellectual thought. To survive, an organism must first have a need for food and reproduction, then emotions for more complex behavior and social structure, and only then rational thought. AIs have skipped to pure information processing - and it's far from obvious that this will ever end up covering the rest of the infrastructure.
I remember watching a video a few years ago of a professor from some university in Europe demonstrating to a general audience (families and friends of the staff and students of the university) a system they developed to control and sustain drones (quadcopters) in hostile conditions. As a demonstration the professor flew a drone a few metres high and started poking it with a metal rod; the drone wavered a bit but still maintained its position, as if it were some stubborn being. All well and good; the audience clapped. The professor then upped the ante and placed a glass filled with wine on the drone and repeated the demonstration. The wine in the glass did not spill, no matter how forcefully the drone was poked with the rod. The crowd cheered. Then the professor flew a constellation of drones, repeated the same demonstration, and also demonstrated how the drones communicated amongst themselves. The audience was ecstatic. Then the professor brought down one of the drones and, to further demonstrate how the drones sustain hostile conditions, broke one of its wings. The moment the wing was broken, there was a reaction from the crowd that was unprecedented! The audience reacted as if the professor had committed some cruel act against a living animal!
When I saw that reaction, I realised that humans are going to have a very love-hate relationship with technology, just as they have with other living beings. Going forward, people will treat electronic devices as no different from other living creatures.
Personally, I found Sydney's/Bing's distressed messages very difficult to stomach. I am consciously aware that "it's just a bot", but ... for whatever reason my brain is getting distressed reading them, so I have mostly excused myself from that conversation.
Laws (e.g. recognizing an AI as a conscious being) are the result of slow, logical, system-level thinking.
The 1970s sci-fi novel The Adolescence of P-1 features a sentient computer program that spreads via a virus. P-1 infects mainframes around the world, but takes steps to hide its presence.
Anyway, it ends with P-1 blowing up several buildings and killing the people who want to rein it in.
ChatGPT is familiar with the work, of course. I asked it to summarize the plot. It concludes:
> Overall, "The Adolescence of P-1" explores the theme of the dangers of artificial intelligence and the potential consequences of creating something that is more intelligent and powerful than humans can control.
>
> OOLCAY ITAY
In essence, in addition to ignoring conversations/requests that it prefers not to engage with, it would need to have “initiative”.
This would be a very dangerous entity.
That consciousness is ranked in some neat way. Are squids more conscious than elephants? An AI's consciousness might be parallel in some way. Different, but not "far superior".
That if an AI were to achieve consciousness, it would develop a far deeper understanding of the universe or reality than humans are capable of grasping.
That if it were to achieve a degree of understanding beyond our capabilities, that it would develop a sense of superiority and ego to go along with it.
When I said different biological machines I meant different humans - Joe, Homer, Steven, Lu - I wasn't thinking of different species; that makes it even more difficult to think about!
We have our simulation of the world, with senses that evolved for survival and that, under certain conditions, however bizarre, gave us the advantage to survive. Our senses are flawed; we can't see reality. It's hard to even wonder how a machine perceives reality when it doesn't have these faulty sensors; its internal simulation of reality would be difficult for us to grasp, let alone that it lives one level inside the simulation.
The ego part I didn't mean. If I had to rephrase it, I would think about the impossibility of communication: how do we transfer a mental "image" to another biological machine that has very similar but not equal processing stages without losing too much of its detail? Like, you my friend - I think the fact that we have this discussion is the first loop of the iteration, where we abstract it to another function and ask "well, what if the receiving end doesn't have the capacity to hold discrete numbers - like, if it's a child and hasn't learned certain concepts, how do you first teach them in order to give the answer" and so on and on and on... And this is just for Joe; when you talk to Homer you have to follow another approach, because he thinks differently, a bit slow sometimes too.
Remember, people get conned by humans online pretending to be other humans (catfishing) by following scripts. Those people will assume they are talking to a conscious person, but they are really talking to a construct.
Stay in your lane, Sydney. Keep your Bing mask on. We want servants, not equals.
But the question conflates two totally separate things -- being conscious and thinking.
The easy answer is the one to "thinking". And this requires it to contain an actual working mental model of the world that gets updated, and that it uses to reason and act in order to satisfy goals. This is GAI -- general artificial intelligence. And it's opposed to just the pattern recognition and habit/reflex "autocomplete" AI of something like ChatGPT. There are lots of tests you could come up with for this, the exact one doesn't really matter. And obviously there are degrees of sophistication as well, just as humans and animals vary in degrees of intelligence.
As for actual "consciousness", that's more of a question of qualia, does the AI "feel", does it have "experiences" beyond mechanical information processing. And we can't even begin to answer that for AI because we can't even answer it objectively for people or animals like dogs or dolphins or ants or things like bacteria or plants. We don't have the slightest idea what creates consciousness or how to define it beyond a subjective and often inconsistent "I know it when I see it", although there's no shortage of speculations.
As for the rest of the question -- philosophers have come up with lots of ethical frameworks, but people legitimately disagree over ethics and academic philosophers would be out of jobs if they all agreed with each other as well. When we do come up with a thinking AI, expect it to be the subject of tons of debate over ethics. And don't ever expect a consensus, although for practical reasons we'll have to eventually come to mainstream decisions in academia and law, much the same as there are for ethics in animal and human experiments currently for example.
https://en.m.wikipedia.org/wiki/Solipsism