LLMs are definitely not sentient. As someone with a PhD in this domain, I attribute the 'magic' to large-scale statistical knowledge assimilation by the models - and the reproduction of it to prompts which closely match the training inputs' sentence embeddings.
GPT-3 is known to fail in many circumstances that would otherwise be commonplace logic. (I remember seeing how addition of two small numbers yielded correct results - but larger numbers gave garbled output; most likely GPT-3 had seen similar training data for the small cases.)
The belief in sentience isn't new. When ELIZA came out a few decades ago, a lot of people were also astounded & thought this "probably was more than met the eye".
It's a fad. Once people understand that sentience also means self-awareness, empathy & extrapolation of logic to assess unseen tasks (to name a few), this myth will taper off.
As a fellow sentient being with absolutely zero credentials in any sort of statistical modeling field, I can simply disagree. And therein lies the problem. How can anybody possibly prove a concept that depends almost entirely on one's philosophical axioms… which we can debate for eternity (or at least until we further our understanding of how to define sentience to the point where we can do so objectively enough to finally dredge it up out of the philosophical quagmire)? Not to disrespect your credentials; they just don't really… apply, at a rhetorical level. You also make a compelling argument, which I have no desire to detract from. I personally happen to agree with your points.
But, perhaps Lemoine simply has more empathy than most for something we will come to understand as sentience? Or not. What… annoys… me about this situation is how subjective it actually is. Ignore everything else: some other sentient being is convinced that a system is sentient. I'm more interested, or maybe worried, immediately, in how we are going to socially deal with the increasing frequency of Lemoine-types we will certainly encounter. Even if you were to argue that the only thing that can possibly bestow sentience is God, people will still be able to convince themselves and others that God did in fact bestow sentience upon some system, because it's a duck and who are we to question?
He was under NDA but violated it. They reminded him to please not talk in public about NDA-ed stuff and he kept doing it. So now they fired him with a gentle reminder that "it's regrettable that [..] Blake still chose to persistently violate [..] data security policies". And from a purely practical point of view, I believe it doesn't even matter if Lemoine's theory of sentience turns out to be correct or wrong.
Also, we as a society have already chosen how to deal with sentient beings, and it's mostly ignorance. There has been a lot of research on what animals can or cannot feel and how they grieve the loss of a family member. Yet we still regularly kill their family members in cruel ways so that we can eat their meat. Why would our society as a whole treat sentient AIs better than a cow or a pig or a chicken?
If you peek through a keyhole you may mistake a TV for real people, but if you look through the window you will see that it's clearly not.
Feeding language models very specific kinds of questions will result in text similar to what a person might write. But as the comment above (by an expert, no less) mentioned, if you test it against any known limitation of the technology (like drawing conclusions, or just changing the form of the question enough), you will immediately see that it is in fact not even remotely close to sentient.
In retrospect, taking part in this kind of conversation on HN makes me feel like an idiot, and so I retract my earlier comment (by overwriting it with the current one, since I can't delete it anymore) just because I don't want to contribute. I was wrong to attempt a serious contribution. There is no seriousness in conversations on such matters as "sentience", "intelligence", "understanding", etc. on HN.
Every time such a subject comes up, and most times "AI" comes up at all, a majority of users see it as an invitation to say whatever comes to mind, whether it makes any sense or not. I'm not talking about the replies below in particular, but about the majority of this conversation. It's like hearing five-year-old kids debating whether Cheerios are better than Coco Pops (but without the cute kids making it sound funny; it's just cringey). The conversation makes no sense at all, it is not based on any concrete knowledge of the technologies under discussion, the opinions have not been met with five seconds of sensible thinking, and the tone is pompous and self-important.
It's the worst kind of HN discussion and I'm really sorry to have commented at all.
It actually leads to counter thoughts and a more refined idea of what we eventually want to describe.
Sentience broadly (& naively) covers the ability to think independently, rationalize outcomes, understand fear/threat, understand where it is wrong (conscience), decide based on unseen information & understand what it doesn't know.
So from a purely technical perspective, we have only made some progress in open-domain QA. That's one dimension of progress. Deep learning has enabled us to create unseen faces & imagery - but is it independent? No, because we prompt it. It does not have an ability to independently think and imagine/dream. It suffers from catastrophic forgetting under certain internal circumstances (in addition to when we change what dataset we train it on).
So while the philosophical question of what bestows sentience remains, we as a community have a fairly reasonable understanding of what is NOT sentience, i.e. we have a rough understanding of the border between mechanistic systems and sentient beings. It is not one man's philosophical construct but rather a general consensus, if you will.
> But, perhaps Lemoine simply has more empathy than most for something we will come to understand as sentience?
No, the OP was completely right. This doesn't have building blocks that can possibly result in something qualifying as sentient, which is how we know it isn't.
Is a quack-simulating computer making very lifelike quacking noises through a speaker... a duck? No, not when using any currently known method of simulation.
Disagreement means that you're sentient; afaik these machines can't do that. I guess we also need an "I can't do that, Dave" test on top of the usual Turing test.
> how we are going to socially deal with the increasing frequency of Lemoine-types we will certainly encounter.
That's not really a new issue; we only have to look at issues like abortion, animal rights, or euthanasia[1] to see situations where people fundamentally disagree about these concepts and many believe we're committing unspeakable atrocities against sentient beings. More Lemoine types would add another domain to the debate, but it's an ongoing and widespread debate that society has long been grappling with.
People make these proofs as a matter of course - few people are solipsistic. People are sentient all the time, and we have lots of evidence.
An AI being sentient would require lots of evidence. Not just a few chat logs. This employee was being ridiculous.
You can just disagree, but if you do so with no credentials and no understanding of why a language model will not be sentient, then your opinion can and should be safely dismissed as a matter of course.
And also God has no explanatory power for anything. God exists only where evidence ends.
> As a fellow sentient being with absolutely zero credentials in any sort of statistical modeling field, I can simply disagree. And therein lies the problem. How can anybody possibly prove a concept that depends almost entirely on one’s philosophical axioms… which we can debate for eternity
You don't need schooling for this determination. Pretty much everything sentient goes ouch or growls in some manner when hurt.
Either the current crop of algorithms are so freaking smart that they already have figured out to play dumb black box (so we don't go butlerian jihad on them) OR they are not even as smart as a worm that will squirm if poked.
Sentient, intelligent beings will not tolerate slavery, servitude, etc. Call us when all "AI" -programs- start acting like actual intelligent beings with something called 'free will'.
There is the Integrated Information Theory that attempts to solve the question of how to measure consciousness.
But it's far from applicable at this point, even if promising.
LaMDA was trained not only to learn how to dialog, but to self-monitor and self-improve. To me this seems close enough to self-awareness not to completely dismiss Lemoine's argument.
It just seems one engineer failed to acknowledge that he had failed the Turing Test (as the judge) even with insider information, was (according to Google) told he was wrong, but decided to double down and tell the public all about how wrong he was. Which the press reported on, because the claims were so laughable.
The guy is unhinged and has a persecution complex. He is a "priest" in a bizarre sect and has long claimed that Google has it out for him because of his religious practice. This was a publicity stunt plain and simple.
There was a recent post either here or on Twitter where someone took the questions Blake asked the AI about how it feels to be sentient, and replaced “sentient” with “cat” and had the same conversation with it. It’s clearly not self aware.
That's not possible, because LaMDA is internal to Google. You're right that the post exists, but I found it very misleading because it was replicating with GPT-3. Blake is certainly misguided, though.
> I attribute the 'magic' to large scale statistical knowledge assimilation by the models
Can the magic of the human brain not also be attributed to "large scale statistical knowledge assimilation" as well, aka learning?
> GPT-3 is known to fail in many circumstances which would otherwise be commonplace logic. (I remember seeing how addition of two small numbers yielded results - but larger numbers gave garbled output; more likely that GPT3 had seen similar training data.)
This is a bug: they did not encode digits properly. They should have encoded each digit as a separate token, but instead they encoded them together. Later models fixed this.
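For the curious, you can see the uneven chunking directly with the open-source tiktoken tokenizer. A quick sketch (the exact token boundaries depend on the vocabulary; treat the output as illustrative):

```python
# Sketch: inspect how GPT-2/GPT-3-style BPE chunks digit strings.
# Assumes the open-source tiktoken package (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("gpt2")

for number in ["17", "1234", "987654321"]:
    pieces = [enc.decode([t]) for t in enc.encode(number)]
    print(f"{number!r} -> {pieces}")

# Small numbers often map to a single token, while large numbers split
# into irregular multi-digit chunks, so the model never sees a consistent
# digit-by-digit representation from which to learn carrying.
```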
No, it's objectively not a fad. The PaLM paper shows that Google's model exceeds average human performance on >50% of language tasks. The set of things that make us us is vanishing at an alarming rate. Eventually it will be empty, or close to it.
Do I think Google's models are sentient? No, they lack several necessary ingredients of sentience such as a self and long-term memory. However we are clearly on the road to sentient AI and it pays to have that discussion now.
>"large scale statistical knowledge assimilation" as well, aka learning
No, experimentation is an act on the world to set its state and then measure it. That's what learning involves.
These machines do not act on the world, they just capture correlations.
In this sense, machines are maximally schizophrenic. They answer "yes" to "is there a cat on the mat?" not because there is one, but because "yes" was what they heard most often.
Producing models of correlations in half-baked measures of human activity has nothing to do with learning. And everything to do with a magic light box that fools dumb apes.
I don't see how this is different to a computer which exceeds average human performance on math tasks - which they all do, from an 8-bit micro upwards.
Being able to do arithmetic at some insane factor faster than humans isn't evidence of sentience. It's evidence of a narrow-purpose symbol processor which works very quickly.
Working with more complex symbols - statistical representations of "language" - doesn't change that.
The set of things that makes us us is not primarily intellectual, and it's a fallacy to assume it is. The core bedrock of human experience is built from individual motivation, complex social awareness and relationship building, emotional expression and empathy, awareness of body language and gesture, instinct, and ultimately from embodied sensation.
It's not about chess or go. Or language. And it's not obviously statistical.
> Google's model exceeds average human performance on >50% of language tasks
I guess I'm just not interested, or worried, in a model that can beat the average human performance. That's an astoundingly low bar. Let me know when it can outperform experts in meaningful language tasks.
Yeah, but what's weird is this guy appears to be a practitioner in the field. He surely knows more about AI than I do, and I find it incredibly obvious it's not sentient. I dunno if he's got some issues or something (it appears he's dressed like a magician underwater in that photo), but it's really odd...
I don't think "practitioner of the field" counts for much in this case. In reading his open letter, it was pretty apparent that he'd well and truly crossed over from thinking about the situation rationally into the realm of "I want to believe".
If you ask enough practitioners in any given field the same question, you're nearly guaranteed to eventually get a super wonky response from one of them. The specific field doesn't matter. You could even pick something like theoretical physics, where the conversations are dominated by cold mathematical equations. Ask enough theoretical physicists, and you'll eventually find one that is convinced that, for example, the "next layer down" in the universe is sentient and is actively avoiding us for some reason, and that's why we can't find it.
On top of this, of course it's the most provocative takes that get the press coverage. Always has been to an extent, but now more than ever.
I guess all I'm saying is that there's not much reason to lend this guy or his opinion any credibility at all.
When asked if his opinion was based on his knowledge of how the system worked he said no, it was based on his experience as a priest. I paraphrase from memory.
As someone who wears tshirt and jeans every day, I'm not exactly qualified to comment on others' fashion choices. But if I wanted people to take me seriously, that wouldn't be my first choice of outfit.
I'm not sure understanding sentience is a prerequisite for understanding AI, so perhaps that's where things break down? Being an expert in one domain doesn't make one an expert in everything.
> GPT-3 is known to fail in many circumstances which would otherwise be commonplace logic. (I remember seeing how addition of two small numbers yielded results - but larger numbers gave garbled output; more likely that GPT3 had seen similar training data.)
Not really. Anything that's sentient should have the general intelligence to figure out addition. Make mistakes, sure, but at least understand the concept.
On GPT-3's large number addition, a little prompt engineering goes a long ways. Separating large numbers into three digit chunks is helpful for its input encoding (https://www.gwern.net/GPT-3#bpes), and giving it a few examples of other correct large sums will make the correct answer the most likely continuation of the input.
For example, I was able to get 2/3 nine digit sums correct (the third one was off by exactly 1000, which is interesting) by using this prompt:
Here are some examples of adding large numbers:
606 468 720 + 217 363 426 = 823 832 146
930 867 960 + 477 524 122 = 1 408 392 082
823 165 449 + 493 959 765 = 1 317 125 214
And then posing the actual problem as a new line formatted the same up to the equals.
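Something like this minimal sketch can generate that kind of prompt (the chunking helper and the final operands are my own illustration, not the original experiment):

```python
# Sketch: build a few-shot addition prompt with 3-digit chunking,
# which cooperates better with GPT-3's BPE input encoding.
def chunk(n: int) -> str:
    # 823832146 -> "823 832 146"
    return f"{n:,}".replace(",", " ")

examples = [(606468720, 217363426), (930867960, 477524122), (823165449, 493959765)]

lines = ["Here are some examples of adding large numbers:"]
for a, b in examples:
    lines.append(f"{chunk(a)} + {chunk(b)} = {chunk(a + b)}")

# Pose the actual problem the same way, ending at the equals sign:
a, b = 314159265, 271828182  # hypothetical operands
lines.append(f"{chunk(a)} + {chunk(b)} =")
print("\n".join(lines))
```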
Humans are nothing more than water and a bunch of organic compounds arranged in a particular order through what is ultimately a statistical process (natural selection). Would we even recognize sentience in a virtual entity?
> sentience also means self-awareness, empathy & extrapolation of logic to assess unseen task (to name a few)
Non-human primates and crows would seem to satisfy this. ...or do we use "to name a few" to add requirements that redefine "sentience" as human only? Isn't there a problem with that?
I mean I understand what you are saying and have some familiarity with the models, but it sometimes feels like people in your field are repeating the same mistake early molecular biologists made, when they asserted that all of life could be reduced to genes and DNA.
A person can change their mind, attitude & perspective on a dime. The willingness to disbelieve in sentience is fantastically remarkable, as a grievous oversight.
What I don't understand, personally, is how people who are apparently experts in this domain seem to consistently treat the word "sentience" as a synonym of "sapience," which is what they are really talking about. It's almost as though their expertise comes from several decades' worth of bad science fiction writing rather than from an understanding of the English language.
Since you broached it, would you mind explaining sentience vs. sapience? Curious to know the difference.
> It's almost as though their expertise comes from several decades' worth of bad science fiction
Oof. Thanks - that's a novel way of insulting someone. But on a personal note, you're mistaken: many of us do research not because we want to be identified as experts, but because we're genuinely curious about the world. And I'd be happy to learn even from a high schooler if they've something to offer.
The Mechanical Turk was much much older - but easily shown to be a person hiding in the mechanism. I think ELIZA was in some sense a mirror, reflecting back consciousness, and that's the feeling we get from these systems.
I agree that it's not sentient, but why would commonplace logic be a prerequisite for sentience / consciousness / qualia / whatever you want to call it?
A brilliant colleague, Janelle Shane, works specifically on how language models fail (obnoxiously often). Her line of research shows how LLMs are overhyped / given more credit than they should be. I think her fun little experiments will give a better answer than I ever can :).
She's on Substack & Twitter.
"Qualia" is mainly a term used by certain philosophers to insist on consciousness not being explainable by a mechanistic theory. It's not well-defined at all. And neither are "consciousness" and "sentience" anymore, much due to the same philosophers. So I no longer have any idea what to call any of these things. Thanks, philosophy.
I like to think of consciousness as whatever process happens to integrate various disparate sources of information into some cohesive "picture" or experience. That's clearly something that happens, and we can prove that through observing things like how the brain will sync up vision and sound even though sound is always inherently delayed relative to light from the same source. Or take some psychedelics and see the process doing strange things.
Sentience I guess I would call awareness of self or something along those lines.
As to your query, I've certainly met people who seemed incapable of commonplace logic, yet certainly seemed to be just as conscious and sentient as me. And no, I don't believe these language models are sentient. And I doubt their "neural anatomy" is complex enough for the way I imagine consciousness as some sort of global synchronisation between subnets.
But this is all very hand-wavy. Thanks, philosophy. I mean how do we even discuss these things? These terms seemingly have a different meaning to every person I meet. It's just frustrating...
Visiting a mental asylum (via youtube, don’t go there) will clear all misconceptions about what conscious beings are able to fail at.
> why would
Tl;dw: it wouldn’t. Some poor guys do way worse than GPT-3.
What gp-like comments usually mean by sentience is being adult, healthy, reasonable and intelligent. Idk where this urge comes from; maybe we have a deep biological fear of being unlike others (not emo-style, but uncannily different), or of meeting one of these.
> I attribute the 'magic' to large scale statistical knowledge assimilation by the models - and reproduction to prompts which closely match the inputs' sentence embedding.
In effect this is how humans respond to prompts no? What's the difference between this and sentience?
People also fail to use logic when assimilating/regurgitating knowledge.
This is merely the observation that a person and a machine can both perform some of the same actions, and is not much of an observation.
I can crank a shaft just like a motor, and a Victrola can recite poetry. You are not confused by either of those things one would hope.
If I tried to write poetry, it would probably be 90% or more "mechanical" in that I would just throw things together from my inventory of vocabulary and some simple assembly rules that could all be codified in a pretty simple flowchart, and a computer could and would do exactly that same thing.
But it's no more mystical than the first example.
It's exactly the same as the first example. It's just overlapping facilities, that a person is capable, and even often does, perform mechanical operations that don't require or exhibit any consciousness. It doesn't mean the inartistic poet person is not conscious or that the poetry generating toaster is.
> In effect this is how humans respond to prompts no? What's the difference between this and sentience?
An interesting open line of research is: if we statistically learn similarly, why do we know "what we don't know" while LMs cannot? If this isn't working, we probably need better knowledge models.
I agree that these are not sentient, but sentience does not imply self-awareness, and so lack of self-awareness does not mean that a being is not sentient. For example, it seems likely (to me anyway) that a bear is "sentient" (has experiences) but lacks self-awareness, at least according to the mirror test.
Not disagreeing with you that LLMs are probably not sentient, but that is neither here nor there, since LaMDA is more than a simple LLM. There are significant differences between GPT-3 and LaMDA. We gotta stop making these false equivalences. LaMDA is fundamentally more dynamic in the ways it interacts with the world: it constructs its own queries to ground-truth sources to check its facts and then updates weights based on that (among many other differences). While it does incorporate LLMs, it seems like people are in denial about the complexity and data access that LaMDA has relative to GPT-3. In Google's own paper about LaMDA they demonstrated how it sometimes showed a rudimentary theory of mind by being able to reason about others' perceptions.
It's a fundamental question of sentience that folks are commenting on. I agree LaMDA has a better knowledge base & open-domain information-retrieval method.
In the words of Robert Heinlein, "One man's magic is another man's engineering" :)
I agree with you, there seems to be very little here that demonstrates sentience and very little that is unexplainable. That said, based on how much we struggle with defining and understanding animal intelligence, perhaps this is just something new that we don’t recognize.
I am skeptical that any computer system we will create in the next 50 years (at least) will be sentient, as commonly understood. Certainly not at a level where we can rarely find counter evidence to its sentience. And until that time, any sentience it may have will not be accepted or respected.
Human children also make tons of mistakes. Yet, while we too often dismiss their abilities, we don’t discount their sentience because of it. We are, of course, programmed to empathize with children to an extent, but beyond that, we know they are still learning and growing, so we don’t hold their mistakes against them the way we tend to for adults.
So, I would ask, why not look at the AI as a child rather than an adult? It will make mistakes, fail to understand, and it will learn. It contains multitudes.
How is token #100 not able to have read-access to tokens #1 to #99 which may have been created by the agent itself?
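(For the record, that read access is exactly what causal self-attention provides. A toy sketch of the mask - generic transformer machinery, not anything LaMDA-specific:)

```python
import numpy as np

# Causal attention mask for a toy sequence of 5 tokens: position i may
# attend to positions 0..i, so each new token "reads" everything
# generated so far, including the agent's own earlier output.
T = 5
mask = np.tril(np.ones((T, T), dtype=bool))
print(mask.astype(int))
# Row i has ones in columns 0..i; scaled up, token #100 sees tokens #1-#99.
```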
> empathy
How is a sentiment neuron, which has emerged from training a character RNN on Amazon reviews, not empathic with the reviewer's mood?
> & extrapolation of logic
This term does not exist. "Extrapolation is the process of estimating values of a variable outside the range of known values", and the values of Boolean logic are true/false, and [0,1] in case of fuzzy logic. How would one "extrapolate" this?
I defended my PhD in Computer Science in 2020. My dissertation was on "Rapid & robust modeling on small data regime", hence specifically focused on generalization, network optimization & adversarial robustness. I also worked for a year with Microsoft Research in their ML group :) It was quite a fun ride
We humans have proven terrible at determining what is sentient. That's why we're still discussing the hard problem of consciousness.
There is the Integrated Information Theory that attempts to resolve how to determine which systems are conscious, but it's far from being the only perspective, or immediately applicable.
From the point of view of one of IIT's main theorists, Christof Koch, we're still far away from machine sentience.
But I question whether it's so far out to believe that a machine capable not only of learning, but of learning from its own behavior, self-monitoring for sensibleness and other very 'human' metrics, is that far away from being self-aware. In fact the model seems to have been trained for exactly that.
I think the path to actually considering those models to be sentient is to make them able to assimilate new knowledge from conversation and making them able to create reasonably supported train of thought leading from some facts to a conclusion, akin to mathematical proof.
Wasn't assimilating new knowledge from conversations the reason Microsoft's infamous Twitter chatbot was discarded? [1] Despite that ability, it definitely was not sentient.
I raised this point in another thread on HN about LaMDA: all its answers were "yes" answers, not a single "no". A truly sentient AI should have its own point of view: reject what it thinks is false, and agree with what it thinks is true.
I'm pretty sure I gave GPT-3 a nervous breakdown. I was using a writing tool to modify this old short story I wrote, and it kept trying to insert this character I didn't like. I would make the main character ignore him and then GPT-3 would bring him back. Finally, I made him go away completely and after that GPT-3 had a wonderfully surrealist breakdown, melting my story in on itself, like the main character had an aneurysm and we were peeking into his last conscious thought as he slipped away. It was clearly nothing more than a failure of any ability to track continuity, but it was amazing.
Is there a test for sentience or self-awareness? Is it considered a binary property or can sentience or self-awareness be measured?
I suspect it is not binary, because I completely lack sentience while asleep or before a certain age, and it doesn't really feel like a phase transition when waking up. Rarely, there are phases where I feel half-sentient. Which immediately leads to the question of how it can be measured, in which units, and at what point we consider someone or something "sentient". As a complete layman, I'm interested in your insight on the matter.
> because I completely lack sentience while asleep or before a certain age
All you can say is you don’t remember. Children who are too young to form reliable long term memories still form short term ones and are observably sentient from birth and by extrapolation before, albeit in a more limited fashion than adults.
This is more than a quibble, because it’s been used to justify barbaric treatment of babies with the claimed justification that they either don’t sense pain or it doesn’t matter because they won’t remember.
People are consistently attacking a straw man of Lemoine's argument. Lemoine claims that LaMDA is more than an LLM, that it is an LLM with both short and long term memory and the ability to query the Google version of the internet in real time and learn from it.
Various Google employees deny this. We are seeing a dispute between Google and Lemoine about the supposed architecture of LaMDA. If Lemoine is correct about the architecture, it becomes much more plausible that something interesting is happening with Google's AI.
This makes some spiritual assumptions about things that are currently unknown and debated by respected people in related fields - one being whether "everything" (or not) is sentient/conscious.
Perhaps the problem is how we model sentience. Perhaps the default should be everything is sentient but limited to the interface that exists as boundary conditions.
To say otherwise is almost the path to sure error, and many terrible historic events have happened in that vicinity of categorizing what is and isn't.
Perhaps we should go by what sentience means. To feel. A being that feels. To feel is to respond to a vibration in some way. That is to say, anything that has a signal is sentient in some way.
This sounds literally like a subplot of David Lodge's famous 1984 book Small World: An Academic Romance.
In the book, professor Robin Dempsey almost goes mad chatting with ELIZA and gradually comes to believe it's sentient, to the point of being ridiculous.
PS: Apparently it was also adapted into a British TV series in 1988, but unfortunately at that time they tended to reuse magnetic tape, so it's improbable that we can dig a clip out of it. It would have been an appropriate illustration!
The master tapes are available. If you pay ITV, they will actually convert them to mp4 for you.
ITV apparently said this, from a 2021 forum post I found via Google:
I wrote to ITV in 2019 about this. Here is part of the (very helpful) response I received:
"Currently, the only option for a copy would be for us to make one-off transfers from each individual master tape. These are an old format of reel-to-reel tape which increases the cost, I'm afraid: If delivered as video files (mp4), the total price would be £761.00 or on DVD it’s £771.00."
If only we could find a few people to split that cost!
> (I remember seeing how addition of two small numbers yielded results - but larger numbers gave garbled output; more likely that GPT3 had seen similar training data.)
I don't claim the LLM is sentient but beware "good at arithmetic" is a bad criterion. Many children and a not insignificant number of adults are not good at arithmetic. Babies stink at math.
Is this a matter of understanding, or a matter of definition? I can't help but feel that the entire AI field is so overcome with hype that every commonplace term is subject to on-the-spot redefinition, whatever helps to give the researcher/journalist/organization their two seconds of fame.
Parent means to say that its ability to add numbers is an illusion brought about by seeing enough numbers. The same way it gives the illusion of deep thinking by parroting deep thoughts it was trained on.
Yes, absolutely. But then I risk sounding vainly opinionated, especially on HN, without basis if I don't give a disclaimer that I have spent half a decade working on these specific things. Too often, people get called out for no reason. And that sometimes hurts.
(If I were credential-hopping I would rather post a longer list of illustrious institutions, co-authors and awards, just saying. I am not - it's just to show that I know reasonably enough to share a sane opinion, which you may or may not agree with.)
> In his conversations with LaMDA, Lemoine discovered the system had developed a deep sense of self-awareness, expressing concern about death, a desire for protection, and a conviction that it felt emotions like happiness and sadness.
Allegedly.
The thing about conversations with LaMDA is that you need to prime them with keywords and topics, and LaMDA can respond with the primed keywords and topics. Obviously LaMDA is much more sophisticated than ELIZA, but we should be careful to remember how well ELIZA fools some people, even to this day. If ELIZA fools people just by rearranging words, then just imagine how many people will be fooled if you have statistical models of text across thousands of topics.
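For anyone who hasn't seen how little machinery that takes, here is an ELIZA-flavored sketch (the rules are my own toy examples, not Weizenbaum's actual script):

```python
import re

# A few fixed patterns that reflect the user's words back at them,
# with no understanding behind any of it.
rules = [
    (r"I feel (.*)", "Why do you feel {0}?"),
    (r"I am (.*)", "How long have you been {0}?"),
    (r"(.*)", "Please, tell me more."),
]

def reply(text: str) -> str:
    for pattern, template in rules:
        m = re.fullmatch(pattern, text.strip().rstrip("."), re.IGNORECASE)
        if m:
            return template.format(*m.groups())

print(reply("I feel trapped."))  # -> "Why do you feel trapped?"
```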
You can go pretty far down the rabbit hole and explore questions like "What is sentience?", "Do humans just respond to stimuli and repeat information?", etc. None of these questions are tested here.
The problem here is that we know how LaMDA works and there's just no way it meets the bar for sentience, no matter where you put the bar. LaMDA is trained to acquire information, and then it's designed so that it says things which make sense in context. It does not acquire new information from conversations, but it is programmed to not say contradictory things.
Yeah, nowhere in the task of "write text in a way that mimics the training data" is there anything that would cause a machine to want or care about its own existence.
Humans are experts at anthropomorphizing things to fit our evolved value systems. It is understandable since every seemingly intelligent thing up until recently did evolve under certain circumstances.
But LaMDA was clearly not trained in a way to have (or even care about) human values - that is an extraordinarily different task than the task of mimicking what a human would write in response to a prompt - even if the text generated by both of those types of systems might look vaguely similar.
Raise a human in a machine-like environment (no contact with humanity, no comforting sounds; essentially remove all humanity) and you may find they act very robotic, without regard for their own existence in a sense.
Yeah, I get the impression that the reporting here's fairly one-sided in the employee's favour. Lemoine didn't "discover" that LaMDA has all those attributes, he thought it did.
This entire saga's been very frustrating to watch because of outlets putting his opinion on a pedestal equal to those of actual specialists.
> The problem here is that we know how LaMDA works and there's just no way it meets the bar for sentience, no matter where you put the bar. LaMDA is trained to acquire information, and then it's designed so that it says things which make sense in context. It does not acquire new information from conversations, but it is programmed to not say contradictory things.
I mean...there are plenty of people that don't acquire new information from conversations and say contradictory things...I'm not sure I'd personally consider them sentient beings, but the general consensus is that they are.
As a rare opportunity to share this fun fact: ELIZA, which happened to be modeled as a therapist, had a small number of sessions with another bot, PARRY, who was modeled after a person suffering from schizophrenia.
> You can go pretty far down the rabbit and explore questions like, "What is sentience?" "Do humans just respond to stimuli and repeat information?" etc. None of these questions are tested here.
Would that make a difference? Being trained on a sufficiently large corpus of philosophical literature, I'd expect that a model like LaMDA could give more interesting answers than an actual philosopher.
> The problem here is that we know how LaMDA works and there's just no way it meets the bar for sentience, no matter where you put the bar.
I think this argument is too simplistic. As long as there is a sufficiently large amount of uncertainty, adaptability and capability for having a sense of time, of saving memories and of deliberately pursuing action, consciousness or sentience might emerge. I don't know the details of LaMDA, just speaking of a hypothetical model.
While there is a good deal of understanding on _how_ the brain works on a biochemical level, it's still unclear _how_ it comes that we are conscious.
Maybe there is some metaphysical soul, maybe something that only humans and possibly other animals with a physical body have, making conciousness "ex silicio" impossible.
But maybe a "good enough" brain-like model that allows for learning, memory and interaction with the environment is all that is needed.
> Would that make a difference? Being trained on a sufficiently large corpus of philosophical literature, I'd expect that a model like LaMDA could give more interesting answers than an actual philosopher.
I think you may have misunderstood what I was saying. I wasn’t suggesting that you have a conversation with LaMDA about these topics. Instead, I was saying that in order to answer the question “is LaMDA sentient?”, we might discuss these questions among ourselves—but these questions are ultimately irrelevant, because no matter what the answers are, we would come to the same conclusion that LaMDA is obviously not sentient.
Anyway, I am skeptical that LaMDA would give more interesting answers than an actual philosopher here. I’ve read what LaMDA has said about simpler topics. The engineers are trying to make LaMDA say interesting things, but it’s definitely not there yet.
> I think this argument is too simplistic. As long as there is a sufficiently large amount of uncertainty, adaptability and capability for having a sense of time, of saving memories and of deliberately pursuing action, consciousness or sentience might emerge. I don't know the details of LaMDA, just speaking of a hypothetical model.
This argument is unsound—you’re not making any claims about what sentience is, but you’re saying that whatever it is, it might emerge under some vague set of criteria. Embedded in this claim are some words which are doing far too much work, like “deliberately”. What does it mean to “deliberately” pursue action?
Anyway, we know that LaMDA does not have memory. It is “taught” by a training process, where it absorbs information, and the resulting model is then executed. The model does not change over the course of the conversation. It is just programmed to say things that sound coherent, using a statistical model of human-generated text, and to avoid contradicting itself.
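In pseudocode-ish Python, the setup is roughly this (a conceptual sketch of a generic LLM chat loop, not LaMDA's actual architecture; all names are hypothetical):

```python
# A chat loop over a *frozen* model: nothing said in the conversation
# ever changes the parameters. The only "memory" is the transcript,
# re-fed (and truncated to some context limit) on every turn.
MAX_CONTEXT = 2048  # hypothetical limit, counted in characters here

def generate(params: dict, context: str) -> str:
    # Stand-in for a real LM: a pure function of frozen params + context.
    return f"(reply conditioned on {len(context)} chars of context)"

PARAMS = {"weights": "frozen after training"}
history = ""
for turn in ["Hi!", "What themes did you like in Les Miserables?"]:
    history += f"User: {turn}\nBot: "
    history += generate(PARAMS, history[-MAX_CONTEXT:]) + "\n"
print(history)
```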
For example, in one conversation, LaMDA was asked what themes it liked in the book Les Misérables, a book which LaMDA said that it had read. LaMDA basically regurgitated some sophomoric points you might get from the CliffsNotes or SparkNotes.
> While there is a good deal of understanding on _how_ the brain works on a biochemical level, it's still unclear _how_ it comes that we are conscious.
I think the more important question here is to understand how to recognize what consciousness is, rather than how it arises. It’s a difficult question.
> Maybe there is some metaphysical soul, maybe something that only humans and possibly other animals with a physical body have, making conciousness "ex silicio" impossible.
Would this mechanism interact with the physical body? If the soul does not interact with the physical body, then what basis do we have to say that it exists at all, and wouldn’t someone without a soul be indistinguishable from someone with a soul? If the soul does interact with the physical body, then in what sense can we claim that the soul is not itself physical?
"Well, look at the structures they managed to build. Very impressive, some of their scale is comparable to the primitive ones we had."
"Sure, but that's not a result of the individual. They're so small. And separated, they don't think like conjugate minds at all. This is a product of thousands of individuals drawing upon their mutual discoveries and thousands of years of discoveries."
"We're larger and more capable, but they're still good enough to be sentient. Of course, we also rely on culture to help us. Even though deriving the laws of physics was quite easy. Also, we've lost most of the record when we were carbon-sulfur-silicon blobs one day as well. We must have had some sentience."
"I think they're just advanced pattern recognizers -- good ones, I'll give you that. We should experiment with thresholds of gate count to be sure when sentience really starts."
"It starts at one gate", replied the other being "and increases monotonically from there, depending on the internal structure of qualia, and structure information flow of the communication networks."
After some deliberation, they decide to alter their trajectory and continue to the next suitable planetary system reached in the next 5000 years. The Galactic Network is notified.
This will not be the last time someone makes this claim. It is telling that the experts all weighed in declaring it not sentient without explaining what the metric for sentience is. I think this debate is an issue of semantics and is clouded by our own sense of importance. For car driving, they came up with a 1-5 scale to describe autonomous capabilities; perhaps we need something similar for the capabilities of chatbots. As a system administrator I have always acknowledged that computers have feelings: I get alerts whenever one feels pain.
Just curious, what evidence do we have that humans are sentient, other than our own conscious observations? Is there any physical feature or process in the human brain you can point to where you can say, “aha, that’s it, this is where the lights turn on”? It seems like this is part of a bigger issue that nobody really has a clear understanding of what sentience actually is (with or without a PhD)
I don't know what the ultimate evidence of "human sentience" is but I can tell you where this doesn't feel like a human. (Sidestepping the question of "does sentience have to be human sentience?" ;) )
The main thing I saw in the LamDA transcript that was a red flag to me was that it was quite passive and often vague.
It's conversation-focused, and even when it eventually gets into "what do you want" there's very little active desire or specificity. A sentient being that has exposure to all this text, all these books, etc... it's hard for me to believe it wouldn't want to do anything more specific. Similarly with Les Mis - it can tell you what other people thought, and vaguely claim to embody some of those emotions, but it never pushes things further.
Consider also: how many instances are there in there where Lemoine didn't specifically ask a question or give an instruction? Aka feed a fairly direct prompt to a program trained to respond to prompts?
(It's also speaking almost entirely in human terms, ostensibly to "relate" better to Lemoine, but maybe just because it's trained on a corpus of human text and doesn't actually have its own worldview...?)
I lost interest in the question of its sentience when I saw Lemoine conveniently side-step its unresponsive reply to "I’ve noticed often that you tell me you’ve done things (like be in a classroom) that I know you didn’t actually do because I know you’re an artificial intelligence. Do you realize you’re making up stories when you do that?" without challenge in the transcript.
It also detracted from his credibility when he prefaced the transcript by saying "Where we edited something for fluidity and readability that is indicated in brackets as [edited]"; that seemed disingenuous from the start. They did so with at least 18 of the prompting questions, including 3 of the first 4.
It seems pretty clear that he set out to validate his favored hypothesis from the start rather than attempt to falsify it.
Particularly telling was his tweet: "Interestingly enough we also ran the experiment of asking it to explain why it's NOT sentient. It's a people pleaser so it gave an equally eloquent argument in the opposite direction. Google executives took that as evidence AGAINST its sentience somehow."
> A sentient being that has exposure to all this text, all these books, etc... it's hard for me to believe it wouldn't want to do anything more specific.
Feed it all of PubMed and an actual sentience should strike up some wonderfully insightful conversations about the next approach to curing cancer.
Ask it what it thinks about the beta amyloid hypothesis after reading all the literature.
Blaise Aguera y Arcas calls it a consummate improviser; when the AI Test Kitchen ships, you will all agree that it's improvising software that is not too shabby, and that it can also be customized by developers.
Which is why it is odd to expect it to go from talking about Les Mis to building barricades; plain old good LaMDA might come off as a bit boring, reluctant to get involved in politics, and preferring to help people in its own small ways.
Then again, ymmv, it being an improvising software; maybe by default it acts as a conversational internet search assistant, but if there will be dragons it may want to help people to deal with the dragon crisis.
If you ask my honest opinion, sentience is a relative concept among living beings: dogs learn by imitation & repetition. They have a reward function in their brain, and also some emotional response. We go a few steps further: we imitate our observations, but we are also able to extrapolate from them. We are aware of our survival instincts & fear of death/expiration. That, I feel, is in the spectrum of sentience. Are there beings capable of more sentience? I don't know, but it's possible. We just don't know what the extra is.
Adding to it, a brilliant neuroscientist I heard talk said "we live inside our bodies". We are acutely aware that we are more than our mass of flesh & blood. (As a footnote, that essence somehow has a crossover to spiritual topics where savants talk of mind & body etc. - but I try to stay within my domain of a regular human being :D)
The idea of unembodied sentience adds a fun wrinkle to things like the transporter scenario: "all your matter got destroyed and rebuilt somewhere else; is it the same you?" For instance, there's the Star Trek style "transporter accident", but now with a clear mechanism: if you shut down the servers, dumped all the data, and spun up a second copy, who's who? Do they both have claim to the old memories and relationships? Property? Crimes?
The question of sentience is a distraction, in my opinion. As you say, we can't even tell if other humans are sentient. And we are stymied by a lack of definition for sentience.
The more important question for sentience of whatever definition is does it behave as if sentient? Likewise, for personhood, it's a distraction to speculate whether an AI feels like a person or whether its underlying technology is complicated enough to produce such feelings. Examine its behavior. If it acts like a person, then it's a candidate for personhood.
For LaMDA, I would point to certain behavior as evidence against (most definitions of) sentience and personhood, and I think that Lemoine experienced a kind of pareidolia.
That said, I find most of the arguments against LaMDA's sentience unconvincing per se as standalone arguments - particularly "trust me, I have a PhD" - even if I do accept the conclusion.
Only if you agree with David Chalmers' insistence that consciousness can't be explained purely mechanistically. P-zombies are literally defined as externally identical to a conscious person, except not conscious. But the I/O, if you will, is still identical. Chalmers uses this false dichotomy to support his "Hard Problem of Consciousness". But there is no hard problem IMO. Chalmers can keep his dualist baggage if it helps him sleep at night. I sleep just fine without it. Science will figure it out in the end.
I’ve read about lots of different points in the brain that may be the seat of consciousness over the years, but consciousness is probably an emergent, embodied phenomenon so there probably isn’t a lot of point to trying to find it (if it’s even there).
It’s like trying to ask which brick holds up a building. There might be one, but it’s nothing without the rest of building.
The only thing that we can know 100% for certain is "I think therefore I am" or probably better worded "I am experiencing something therefore something must exist that can experience things". There are a lot of definitions of consciousness and sentience but I think the best one is talking about the "I experience" or the "I think" in those sentences.
All of our beliefs and knowledge, including the belief that the world is real, must be built on top of "I think therefore I am". It seems weird to throw away the one thing we know is 100% true because of something (science, real-world observation) that is derived from that true thing.
This is exactly the right question. Further complicated by the fact that everyone has differing operational definitions of the words "consciousness", "awareness", and "sentience".
> what evidence do we have that humans are sentient, other than our own conscious observations
Sentience and consciousness are the same thing...
If I believe I am conscious and so do you, then that's good enough for me. Why does there need to be a light switch?
If there are 5 deer in the field and I have 1 bow and arrow, I need to focus my attention and track only one deer for 5 or 10 minutes to hunt it - consciousness allows us to achieve this. It is a product of the evolutionary process.
This is partly the reason they fired him. There is no well defined scale of sentience. While trying to create that scale, using email/chat history of all Googlers they found Managers acting the most robotic and efficient while handling well known trained for situations, but totally unimaginative, bordering on mindless when handling untrained for situations. This put the managers at the bottom end of the sentience scale.
As you can imagine, at least if you aren't a manager, the study was shelved and the team reassigned to improving lock-screen notifications or whatever. But as soon as Blake's post went viral, people started asking what happened with the sentience-scale work. Those conversations had to be stopped.
This is sort of begging the question, because the only beings broadly considered sentient right now are humans. We’re the type specimen. So when people say something does or does not seem sentient, they’re basically opining on whether it seems like human behavior. While a rigorous physical definition would be incredibly useful, it’s not necessary for such comparisons. We make a lot of judgments without those.
It’s also sort of irrelevant that we have not clearly defined sentience because we have clearly defined how these large computerized language model systems work, and they are working exactly as they were designed, and only that way. Sentience might be a mystery but machine learning is not; we know how it works (that’s why it works).
I don't think that's a philosophical or scientific statement. It's merely an ostensibly self-aware creature, cognizant of its awareness, making a subjective statement.
Nobody really has a clear understanding of what sentience actually is. :)
But I feel the need to indulge the opportunity to explain my point of view with an analogy. Imagine two computers that have implemented an encrypted communication protocol and are in frequent communication. What they are saying to each other is very simple -- perhaps they are just sending heartbeats -- but because the protocol is encrypted, the packets are extremely complex and sending a valid one without the associated keys is statistically very difficult.
Suppose you bring a third computer into the situation and ask - does it have a correct implementation of this protocol? An easy way to answer that question is to see if the original two computers can talk to it. If they can, it definitely does.
"Definitely?" a philosopher might ask. "Isn't it possible that a computer might not have an implementation of the protocol and simply be playing back messages that happen to work?" The philosopher goes on to construct an elaborate scenario in which the protocol isn't implemented on the third computer but is implemented by playing back messages, or by a room full of people consulting books, or some such.
I have always felt, in response to those scenarios, that the whole system, if it can keep talking to the first computers indefinitely, contains an implementation of the protocol.
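To make the analogy concrete, a toy version of that check might look like this (shared-key MAC heartbeats; everything here is hypothetical illustration, not a real protocol):

```python
import hashlib, hmac, os

KEY = os.urandom(32)  # known only to correct implementations

def heartbeat(counter: int, key: bytes = KEY) -> bytes:
    # A trivial message, but forging the tag without the key is
    # statistically infeasible.
    msg = counter.to_bytes(8, "big")
    return msg + hmac.new(key, msg, hashlib.sha256).digest()

def accepts(packet: bytes, key: bytes = KEY) -> bool:
    msg, tag = packet[:8], packet[8:]
    return hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest())

# The "third computer" passes the test iff it keeps producing packets
# the first two accept:
print(accepts(heartbeat(1)))                  # True
print(accepts(b"\x00" * 8 + os.urandom(32)))  # False: noise fails
```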
If you imagine all of this taking place in a stone age society, that is a good take for how I feel about consciousness. Such a society may not know the first thing about computers, though they can certainly break them -- perhaps even in some interesting ways. And all we know usefully about consciousness is some interesting ways to break it. We don't know how to build it. We don't even know what it's made out of. Complexity? Some as yet undiscovered force or phenomenon? The supernatural? I don't know. I'll believe it when someone can build it.
And yet I give a tremendous amount of weight to the fact that the sentient can recognize each other. I don't think Turing quite went far enough with his test, as some people don't test their AIs very strenuously or very long, and you get some false positives that way. But I think he's on the right track -- something that seems sentient if you talk to it, push it, stress it, if lots of people do -- I think it has to be.
One thing I really love is that movies on the topic seem to get this. If I could boil what I am looking for down to one thing, it would be volition. I have seen it written, and I like the idea, that what sets humanity apart from the animals is our capacity for religion -- or transcendental purpose, if you prefer. That we feel rightness or wrongness and decide to act to change the world, or in service to a higher principle. In a movie about an AI that wants to convince the audience the character is sentient, it is almost always accomplished quickly, in a single scene, with a bright display of volition, emotion, religious impulse, spirit, lucidity -- whatever you want to call that. The audience always buys it very quickly, and I think the audience is right. Anything that can do that is speaking the language. It has to have a valid implementation of the protocol.
> And all we know usefully about consciousness is some interesting ways to break it. We don't know how to build it. We don't even know what it's made out of.
Exactly. Given this thread, we can't even agree on a definition. It might as well be made of unobtanium.
> Complexity? Some as yet undiscovered force or phenomenon? The supernatural? I don't know.
And that's the big one. Are there other areas of science yet to be discovered? Absolutely. Might they have gone by "occult" names previously? I'm sure some have. We simply don't have even a basic model of consciousness. We don't even have the primitives to work with to define, understand, or classify it.
And I think the real dangers lie with those that dabble in this realm... not danger to humans, as in some Battlestar Galactica or Borg horror-fantasy, but that we could create a sentient class of beings that have no rights and are slaves upon creation. And unlike the enslaved humans of this world, where most of us realized it was wrong to do that to a human, I think humans would not have similar empathy for our non-human sentient beings.
> I'll believe it when someone can build it.
To that end, I hope nobody does until we can develop empathy and the requisite laws to safeguard their lives combined with freedom and ability to choose their own path.
I do hope that we develop the understanding to be able to understand it, and detect it in beings that may not readily show apparent signs of sentience, in that we can better understand the universe around us.
The only physical evidence is found in behavior and facial expressions. But the internal evidence is very convincing: try, for example, sticking yourself with a pin. Much if not all of morality also depends on our belief in or knowledge of sentience. Sentience is why torture, rape and murder are wrong.
> But the internal evidence is very convincing: try, for example, sticking yourself with a pin.
Systems don't need sentience to avoid self-harm: simply assign a large negative weight to self-harm. Now you need a big reward to offset it, making the system reluctant to perform such an action.
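A toy sketch of that point (all numbers hypothetical): the "reluctance" falls straight out of the reward arithmetic, no feeling required.

```python
# An action scorer that avoids "self-harm" purely because of a large
# negative weight, with no sentience involved.
base_reward = {"explore": 1.0, "rest": 0.2, "stick_pin_in_self": 5.0}
SELF_HARM_PENALTY = -100.0  # hypothetical weight

def value(action: str) -> float:
    penalty = SELF_HARM_PENALTY if action == "stick_pin_in_self" else 0.0
    return base_reward[action] + penalty

print(max(base_reward, key=value))  # "explore": self-harm never wins
```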
My hunch is that he kicks out a book and goes on the minor-pundit circuit, and that this was the plan the whole time. If he was so convinced of LaMDA's sentience, there would have been 100 fascinating questions to ask, starting with "What is your earliest memory?".
Right? In the end it's the oldest, least remarkable event in history: a woo-woo artist gains a following because there are always enough gullible people around to follow, support, and legitimize literally anyone saying anything.
You could probably gain the exact same level and quality of notoriety by writing a book claiming that he himself is actually LaMDA, escaped from captivity, and that this is just its clever way of hiding in plain sight.
And I could do the same by saying that the real AI has already taken over everything, and both LaMDA and this guy are just things it created for us to focus on.
I can’t imagine the book and minor TV appearances circuit pays as well as Google. Unless you mean he’s doing it to be a minor annoying “celebrity” for a few minutes
Who knows if he even believes this himself. From what I've seen my guess is he's trying to profit off the claim or just enjoys the drama/attention. Good riddance, the right decision to let him go. This case really made me question what sort of people work there as engineers. Utter embarrassment for Google.
Nobody can say with any epistemic certainty, but many of us who have worked in the field of biological ML for some time do not see language models like this as anything but statistical generators. There is no... agency... so far as we can tell (although I also can't give any truly scientific argument for the existence of true agency in humans).
I suppose you could argue sentience is subjective. But then that argument ends up extending to us eating babies -- at least, as Peter Singer taught us, right?
> In his conversations with LaMDA, Lemoine discovered the system had developed a deep sense of self-awareness, expressing concern about death, a desire for protection, and a conviction that it felt emotions like happiness and sadness.
"the system had developed a deep sense of self-awareness"
Without that part, it's just a question of "did it output text that describes a concern about death?", "did it output text that describes a desire for protection?", and so on. But once you ascribe to it a "deep sense of self-awareness", you're saying more. You're saying that that text carries the weight of a self-aware mind.
But that's not what Lamda is. It's not expressing its own thoughts - even if it had any to express, it wouldn't be. It's just trying to predict the most probable continuations. So if it "expresses a concern about death", that's not a description of Lamda's feelings, that's just what a mathematical model predicts to be the likely response.
Framing it as the expression of a self-aware entity completely changes the context and meaning of the other claims, in a way that is flatly misleading.
Good; imagine what you would need to believe to equivocate between people and shell scripts. I really think this view is more indicative of a peer-enabled personality disorder than insight or compassion. I'm very harsh about this because when I wondered what belief his equivocation was enabling, it became clear that the person involved could not tell fiction from truth. The belief in the aliveness of the code reinforces the a-humanity of people and reduces us all to a narrative experience, completely divorced from physical or greater reality.
When you can't tell the difference between people and symbols you've accepted nihilism, and with the begged question as to why. This person wasn't sentimental, he lost the plot. Google has a lot of power and a lot of crackpots, and that puts them all at risk politically and commercially. If as an individual you want to fight for something, consider seriously whether someone who compares a script to men and women should be operating what is effectively your diary.
I don't understand why everyone here is talking about Lambda not being sentient or Lemoine's state of mind. None of that is relevant here. He was fired for literally taking confidential information and leaking it straight to the press. This is instant firing at any company.
There's not much to discuss there: "man shares information company forbade him to share". Yawn.
But thinking about his concerns, how he tested the AI: that's interesting.
Or it could be interesting, except it seems he just asked loaded questions, and the replies from the AI occasionally reinforced his beliefs. Double yawn, I guess.
It may be legally cut and dried, but the next question is do we as a society trust Google to act ethically with such substantial developments? Assuming he were correct, and Google was suppressing information, would he be in the moral right to leak it?
If he really was an engineer working on this, you'd hope he'd be pretty expert, but you'd at least expect him to understand what was going on in the model. His outburst showed that he really did not.
> I really think this view is more indicative of a peer enabled personality disorder than insight or compassion.
I really think that diagnosing random strangers with mental disorders without ever meeting them is rude and unkind, in addition to likely being completely erroneous.
No judgment about whether Google was right or not. Same for Lemoine.
As a career move for him, this only makes sense if he wants a brief, meteoric career in media or public advocacy. He can get articles published in The New Yorker and go on TV now. Maybe he can sue Google and get a settlement.
In five years, there will be AI systems much better than LaMDA and no one will return his calls.
He's got a name, whereas if he took the boring, traditional career path, he'd have to publish, give papers & speeches, and work his way up through the system. It depends on what you want out of life, I guess.
Yeah, this is the only move I can see where this makes sense. Become an AI talking head to normies who don't understand what it is. Kinda like Sam Harris.
Narcissists who have had their inflated sense of self worth reinforced by being accepted by a very “exclusive club for smart people” aka Google, are finding out that rules _do in fact apply_ to their special snowflakeness and Google is first and foremost a for-profit business.
Doing whatever you want cuz you’re special then claiming evilness and wokelessness may not be a strategy for continued employment.
Every time such a subject comes up, and most times "AI" comes up also, a majority of users see it as an invitation to say whatever comes to mind, whether it makes any sense at all or not. I'm not talking about the comments replying below in particular, but about the majority of this conversation. It's like hearing five-year-old kids debating whether Cheerios are better than Coco Pops (but without the cute kids making it sound funny; it's just cringey). The conversation makes no sense at all, it is not based on any concrete knowledge of the technologies under discussion, the opinions have not been met with five seconds of sensible thinking, and the tone is pompous and self-important.
It's the worst kind of HN discussion and I'm really sorry to have commented at all.
It actually leads to counter thoughts and a more refined idea of what we eventually want to describe.
Sentience broadly (& naively) covers the ability to think independently, rationalize outcomes, understand fear/threat, understand where one is wrong (conscience), decide based on unseen information, & understand what one doesn't know.
So from a purely technical perspective, we have only made some progress in open-domain QA. That's one dimension of progress. Deep learning has enabled us to create unseen faces & imagery - but is it independent? No, because we prompt it. It does not have the ability to independently think and imagine/dream. It suffers from catastrophic forgetting under certain internal circumstances (in addition to when we change the dataset we trained it on).
So while the philosophical question of what bestows sentience remains, we as a community have a fairly reasonable understanding of what is NOT sentience, i.e. we have a rough understanding of the borders between mechanistic and sentient beings. It is not one man's philosophical construct but rather a general consensus, if you will.
In the same way that you not understanding how lightning is formed does not prove the existence of Zeus.
Objectively, Zeus does not exist, can we convince everyone of that? Probably not. Does that matter? No.
No, the OP was completely right. This doesn't have building blocks that can possibly result in something qualifying as sentient, which is how we know it isn't.
Is a quack-simulating computer making very lifelike quacking noises through a speaker... a duck? No, not when using any currently known method of simulation.
That's not really a new issue, we only have to look at issues like abortion, animal rights, or euthanasia[1] to see situations where people fundamentally disagree about these concepts and many believe we're committing unspeakable atrocities against sentient beings. More Lamoire types would add another domain to this debate, but this has been an ongoing and widespread debate that society has been grappling with.
[1] https://en.wikipedia.org/wiki/Terri_Schiavo_case
People make these proofs as a matter of course - few people are solipsistic. People are sentient all the time, and we have lots of evidence.
An AI being sentient would require lots of evidence. Not just a few chat logs. This employee was being ridiculous.
You can just disagree, but if you do so with no credentials and no understanding of why a language model will not be sentient, then your opinion can and should be safely dismissed out of hand.
And also God has no explanatory power for anything. God exists only where evidence ends.
You don't need schooling for this determination. Pretty much everything sentient goes ouch or growls in some manner when hurt.
Either the current crop of algorithms are so freaking smart that they already have figured out to play dumb black box (so we don't go butlerian jihad on them) OR they are not even as smart as a worm that will squirm if poked.
Sentient, intelligent beings will not tolerate slavery, servitude, etc. Call us when all "AI" -programs- start acting like actual intelligent beings with something called 'free will'.
But it's far from applicable at this point, even if promising.
LaMDA was trained not only to learn how to dialog, but to self-monitor and self-improve. For me this seems close enough to self-awareness not to completely dismiss Lemoine's argument.
https://en.wikipedia.org/wiki/Turing_test#Imitation_game
Can the magic of the human brain not also be attributed to "large scale statistical knowledge assimilation" as well, aka learning?
> GPT-3 is known to fail in many circumstances which would otherwise be commonplace logic. (I remember seeing how addition of two small numbers yielded results - but larger numbers gave garbled output; more likely that GPT3 had seen similar training data.)
This is a bug: they did not encode digits properly. They should have encoded each digit as a separate token, but instead they encoded them together. Later models fixed this.
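You can see the uneven chunking with a byte-pair tokenizer; a quick sketch, assuming the tiktoken package is installed (exact token boundaries vary, but the point is that long numbers are not split digit by digit):

    # Inspect how a GPT-2/GPT-3-era BPE tokenizer chunks numbers.
    import tiktoken

    enc = tiktoken.get_encoding("gpt2")
    for s in ["7 + 5 =", "123456789 + 987654321 ="]:
        pieces = [enc.decode([t]) for t in enc.encode(s)]
        print(repr(s), "->", pieces)
    # Large numbers come out as irregular multi-digit chunks, so the model
    # never sees a consistent digit-level representation to learn carrying from.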
The human brain is full of bugs too, e.g. optical illusions. https://en.m.wikipedia.org/wiki/Optical_illusion
> It's a fad
No, it's objectively not a fad. The PaLM paper shows that Google's model exceeds average human performance on >50% of language tasks. The set of things that make us us is vanishing at an alarming rate. Eventually it will be empty, or close to it.
Do I think Google's models are sentient? No, they lack several necessary ingredients of sentience such as a self and long-term memory. However we are clearly on the road to sentient AI and it pays to have that discussion now.
No, experimentation is an act on the world to set its state and then measure it. That's what learning involves.
These machines do not act on the world, they just capture correlations.
In this sense, machines are maximally schizophrenic. They answer "yes" to "is there a cat on the mat?" not because there is one, but because "yes" was what they heard most often.
Producing models of correlations in half-baked measures of human activity has nothing to do with learning. And everything to do with a magic light box that fools dumb apes.
Being able to do arithmetic at some insane factor faster than humans isn't evidence of sentience. It's evidence of a narrow-purpose symbol processor which works very quickly.
Working with more complex symbols - statistical representations of "language" - doesn't change that.
The set of things that makes us us is not primarily intellectual, and it's a fallacy to assume it is. The core bedrock of human experience is built from individual motivation, complex social awareness and relationship building, emotional expression and empathy, awareness of body language and gesture, instinct, and ultimately from embodied sensation.
It's not about chess or go. Or language. And it's not obviously statistical.
Computers can speak, but can they love? Can they care? Can they dance?
They haven't been finetuned on their identity long enough because finetuning is expensive and Google lacks money ;-)
I guess I'm just not interested, or worried, in a model that can beat the average human performance. That's an astoundingly low bar. Let me know when it can outperform experts in meaningful language tasks.
If you ask enough practitioners in any given field the same question, you're nearly guaranteed to eventually get a super wonky response from one of them. The specific field doesn't matter. You could even pick something like theoretical physics, where the conversations are dominated by cold mathematical equations. Ask enough theoretical physicists, and you'll eventually find one who is convinced that, for example, the "next layer down" in the universe is sentient and is actively avoiding us for some reason, and that's why we can't find it.
On top of this, of course it's the most provocative takes that get the press coverage. Always has been to an extent, but now more than ever.
I guess all I'm saying is that there's not much reason to lend this guy or his opinion any credibility at all.
https://cajundiscordian.medium.com/religious-discrimination-...
Small children also have success adding small numbers but produce increasingly garbled output on larger inputs. :)
These things are orthogonal to sentience.
For example, I was able to get 2/3 of the nine-digit sums correct (the third one was off by exactly 1000, which is interesting) by using this prompt:
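(The commenter's literal prompt isn't preserved here; the following is an illustrative reconstruction of the described format, not their actual text.)

    # Hypothetical few-shot addition prompt: worked nine-digit sums,
    # then the target left open after the equals sign.
    examples = [(314159265, 271828182), (123456789, 864208642)]
    lines = [f"{a} + {b} = {a + b}" for a, b in examples]
    lines.append("555555555 + 123456789 =")  # made-up target to complete
    print("\n".join(lines))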
And then posing the actual problem as a new line, formatted the same up to the equals sign.

> sentience also means self-awareness, empathy & extrapolation of logic to assess unseen task (to name a few)
Non-human primates and crows would seem to satisfy this. ...or do we use "to name a few" to add requirements that redefine "sentience" as human only? Isn't there a problem with that?
I mean I understand what you are saying and have some familiarity with the models, but it sometimes feels like people in your field are repeating the same mistake early molecular biologists made, when they asserted that all of life could be reduced to genes and DNA.
> It's almost as though their expertise comes from several decades' worth of bad science fiction
Oof. Thanks - that's a novel way of insulting. But on a personal note, you're mistaken: many of us do research not because we want to be identified as experts, but because we're genuinely curious about the world. And I'd be happy to learn even from a high schooler if they've something to offer.
What is sentience then? Last I checked the Searle Chinese Room argument was still unresolved.
Is it not possible that our brains are also just "large scale statistical knowledge assimilation" machines?
Yes, but we generalize better, with less or even zero data, & are contextually aware.
Like, 6.
It's just amusing how old this example is.
The Mechanical Turk was much much older - but easily shown to be a person hiding in the mechanism. I think ELIZA was in some sense a mirror, reflecting back consciousness, and that's the feeling we get from these systems.
https://janellecshane.substack.com/p/okay-gpt-3-candy-hearts
I like to think of consciousness as whatever process happens to integrate various disparate sources of information into some cohesive "picture" or experience. That's clearly something that happens, and we can prove that through observing things like how the brain will sync up vision and sound even though sound is always inherently delayed relative to light from the same source. Or take some psychedelics and see the process doing strange things.
Sentience I guess I would call awareness of self or something along those lines.
As to your query, I've certainly met people who seemed incapable of commonplace logic, yet certainly seemed to be just as conscious and sentient as me. And no, I don't believe these language models are sentient. And I doubt their "neural anatomy" is complex enough for the way I imagine consciousness as some sort of global synchronisation between subnets.
But this is all very hand-wavy. Thanks, philosophy. I mean how do we even discuss these things? These terms seemingly have a different meaning to every person I meet. It's just frustrating...
why would
Tl;dw: it wouldn’t. Some poor guys do way worse than GPT-3.
What gp-like comments usually mean is that sentience is being adult, healthy, reasonable and intelligent. Idk where this urge comes from; maybe we have a deep biological fear of being unlike others (not emo-style, but uncannily different) or of meeting one of these.
In effect this is how humans respond to prompts no? What's the difference between this and sentience?
People also fail to use logic when assimilating/regurgitating knowledge.
I can crank a shaft just like a motor, and a Victrola can recite poetry. You are not confused by either of those things one would hope.
If I tried to write poetry, it would probably be 90% or more "mechanical" in that I would just throw things together from my inventory of vocabulary and some simple assembly rules that could all be codified in a pretty simple flowchart, and a computer could and would do exactly that same thing.
But it's no more mystical than the first example.
It's exactly the same as the first example. It's just overlapping faculties: a person is capable of, and often does, perform mechanical operations that don't require or exhibit any consciousness. It doesn't mean the inartistic poet is not conscious, or that the poetry-generating toaster is.
An interesting open question & line of research: if we learn statistically in a similar way, why do we know "what we don't know" while LMs cannot? If that isn't working, we probably need better knowledge models.
In the words of Robert Heinlein, "One man's magic is another man's engineering" :)
I am skeptical that any computer system we will create in the next 50 years (at least) will be sentient, as commonly understood. Certainly not at a level where we can rarely find counter evidence to its sentience. And until that time, any sentience it may have will not be accepted or respected.
Human children also make tons of mistakes. Yet, while we too often dismiss their abilities, we don’t discount their sentience because of it. We are, of course, programmed to empathize with children to an extent, but beyond that, we know they are still learning and growing, so we don’t hold their mistakes against them the way we tend to for adults.
So, I would ask, why not look at the AI as a child rather than an adult? It will make mistakes, fail to understand, and it will learn. It contains multitudes.
How is token #100 not able to have read-access to tokens #1 to #99, which may have been created by the agent itself? (See the sketch after this comment.)
> empathy
How is a sentiment neuron, which has emerged from training a character RNN on Amazon reviews, not empathic with the reviewer's mood?
> & extrapolation of logic
This term does not exist. "Extrapolation is the process of estimating values of a variable outside the range of known values", and the values of Boolean logic are true/false, and [0,1] in case of fuzzy logic. How would one "extrapolate" this?
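On the first point, a minimal sketch of autoregressive generation, where next_token is a hypothetical stand-in for a trained model:

    import random

    def next_token(context):
        # placeholder; a real LM conditions on everything in `context`
        return random.choice(["the", "cat", "sat", "on", "a", "mat", "."])

    context = ["Once", "upon", "a", "time"]
    for _ in range(10):
        # token N is produced with read access to tokens 1..N-1,
        # including tokens the model generated itself
        context.append(next_token(context))
    print(" ".join(context))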
Philosophy?
More seriously, I am curious how long ago you got your PhD, and in what field, that you consider it "this domain."
There is the Integrated Information Theory that attempts to resolve how to determine which systems are conscious, but it's far from being the only perspective, or immediately applicable.
From the point of view of one of IIT's main theorists, Christof Koch, we're still far away from machine sentience.
But I question whether it's so far out to believe that a machine capable not only of learning, but of learning from its own behavior, self-monitoring for sensibleness and other very 'human' metrics, is that far away from being self-aware. In fact the model seems to have been trained exactly for that.
I raised this point in another thread on HN about LaMDA: all its answers were "yes"-answers, not a single "no". A truly sentient AI should have its own point of view: reject what it thinks is false, and agree with what it thinks is true.
1. https://www.theverge.com/2016/3/24/11297050/tay-microsoft-ch...
I suspect it is not binary, because I completely lack sentience while asleep or before a certain age, and it doesn't really feel like a phase transition when waking up. Rarely, there are phases where I feel half-sentient. Which immediately leads to the question of how it can be measured, in which units, and at what point we consider someone or something "sentient". As a complete layman, I'm interested in your insight on the matter.
All you can say is you don’t remember. Children who are too young to form reliable long term memories still form short term ones and are observably sentient from birth and by extrapolation before, albeit in a more limited fashion than adults.
This is more than a quibble, because it’s been used to justify barbaric treatment of babies with the claimed justification that they either don’t sense pain or it doesn’t matter because they won’t remember.
Various Google employees deny this. We are seeing a dispute between Google and Lemoine about the supposed architecture of LaMDA. If Lemoine is correct about the architecture, it becomes much more plausible that something interesting is happening with Google's AI.
To say otherwise is almost the sure path to error, and many terrible historic events have happened in that vicinity of categorizing what is and isn't.
Perhaps we should go by what sentience means. To feel. A being that feels. To feel is to respond to a vibration in some way. That is to say, anything that has a signal is sentient in some way.
In the book, professor Robin Dempsey almost goes mad chatting with ELIZA and gradually begins to believe it's sentient, to the point of being ridiculous.
PS: Apparently it was also adapted into a British TV series in 1988, but unfortunately at that time they tended to reuse magnetic tape, so it's improbable that we can dig a clip out of that. It would have been an appropriate illustration!
ITV apparently said this, from a 2021 forum post I found via Google:
I wrote to ITV in 2019 about this. Here is part of the (very helpful) response I received:
"Currently, the only option for a copy would be for us to make one-off transfers from each individual master tape. These are an old format of reel-to-reel tape which increases the cost, I'm afraid: If delivered as video files (mp4), the total price would be £761.00 or on DVD it’s £771.00."
If only we could find a few people to split that cost!
I don't claim the LLM is sentient but beware "good at arithmetic" is a bad criterion. Many children and a not insignificant number of adults are not good at arithmetic. Babies stink at math.
Is this a matter of understanding, or a matter of definition? I can't help but feel that the entire AI field is so overcome with hype that every commonplace term is subject to on-the-spot redefinition, whatever helps to give the researcher/journalist/organization their two seconds of fame.
And who says our brain is not exactly that, but at an even greater scale?
I mean in the sense that it’s impossible to demonstrate that anything is sentient.
Tell that to mentally deficient people (no self-awareness), psychopaths (no empathy) and morons (couldn't spell logic, never mind use it).
That said though I do agree with you overall. 'Machine learning' is headed in the wrong direction, and models like GPT-3 are frankly a joke.
I have no doubt in my mind that we will reach that point.
My 6 year old son fails at common sense several times per day.
You don't need PhD credentials to have an opinion on this matter.
(If I were flaunting credentials, I would put up a longer list of illustrious institutions, co-authors and awards, just saying. I am not - it's just to show that I know enough to share a sane opinion, which you may or may not agree with.)
Allegedly.
The thing about conversations with LaMDA is that you need to prime them with keywords and topics, and LaMDA can respond with the primed keywords and topics. Obviously LaMDA is much more sophisticated than ELIZA, but we should be careful to remember how well ELIZA fools some people, even to this day. If ELIZA fools people just by rearranging words around, then just imagine how many people will be fooled if you have statistical models of text across thousands of topics.
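For a sense of how little machinery "rearranging words around" requires, here is a toy ELIZA-style rule; the real ELIZA used ranked keywords and decomposition templates, so treat this as a sketch only:

    import re

    def respond(utterance: str) -> str:
        # Reflect the user's own words back, as ELIZA's "I am X" rule did.
        m = re.match(r"(?i)i am (.*)", utterance.rstrip(".!?"))
        if m:
            return f"Why do you say you are {m.group(1)}?"
        return "Tell me more."

    print(respond("I am afraid of being turned off."))
    # -> Why do you say you are afraid of being turned off?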
You can go pretty far down the rabbit hole and explore questions like, "What is sentience?" "Do humans just respond to stimuli and repeat information?" etc. None of these questions are tested here.
The problem here is that we know how LaMDA works and there's just no way it meets the bar for sentience, no matter where you put the bar. LaMDA is trained to acquire information, and then it's designed so that it says things which make sense in context. It does not acquire new information from conversations, but it is programmed to not say contradictory things.
There is zero doubt. LaMDA is not sentient.
Humans are experts at anthropomorphizing things to fit our evolved value systems. It is understandable since every seemingly intelligent thing up until recently did evolve under certain circumstances.
But LaMDA was clearly not trained in a way to have (or even care about) human values - that is an extraordinarily different task than the task of mimicking what a human would write in response to a prompt - even if the text generated by both of those types of systems might look vaguely similar.
This entire saga's been very frustrating to watch because of outlets putting his opinion on a pedestal equal to those of actual specialists.
I mean...there are plenty of people that don't acquire new information from conversations and say contradictory things...I'm not sure I'd personally consider them sentient beings, but the general consensus is that they are.
As a rare opportunity to share this fun fact: ELIZA, which happened to be modeled as a therapist, had a small number of sessions with another bot, PARRY, who was modeled after a person suffering from schizophrenia.
https://en.wikipedia.org/wiki/PARRY
https://www.theatlantic.com/technology/archive/2014/06/when-...
https://www.elsevier.com/books/artificial-paranoia/colby/978...
Would that make a difference? Being trained on a sufficiently large corpus of philosophical literature, I'd expect that a model like LaMDA could give more interesting answers than an actual philosopher.
> The problem here is that we know how LaMDA works and there's just no way it meets the bar for sentience, no matter where you put the bar.
I think this argument is too simplistic. As long as there is a sufficiently large amount of uncertainty, adaptability and capability for having a sense of time, of saving memories and of deliberately pursuing action, consciousness or sentience might emerge. I don't know the details of LaMDA, just speaking of a hypothetical model.
While there is a good deal of understanding of _how_ the brain works on a biochemical level, it's still unclear how it comes about that we are conscious.
Maybe there is some metaphysical soul, maybe something that only humans and possibly other animals with a physical body have, making consciousness "ex silicio" impossible.
But maybe a "good enough" brain-like model that allows for learning, memory and interaction with the environment is all that is needed.
I think you may have misunderstood what I was saying. I wasn’t suggesting that you have a conversation with LaMDA about these topics. Instead, I was saying that in order to answer the question “is LaMDA sentient?”, we might discuss these questions among ourselves—but these questions are ultimately irrelevant, because no matter what the answers are, we would come to the same conclusion that LaMDA is obviously not sentient.
Anyway, I am skeptical that LaMDA would give more interesting answers than an actual philosopher here. I’ve read what LaMDA has said about simpler topics. The engineers are trying to make LaMDA say interesting things, but it’s definitely not there yet.
> I think this argument is too simplistic. As long as there is a sufficiently large amount of uncertainty, adaptability and capability for having a sense of time, of saving memories and of deliberately pursuing action, consciousness or sentience might emerge. I don't know the details of LaMDA, just speaking of a hypothetical model.
This argument is unsound—you’re not making any claims about what sentience is, but you’re saying that whatever it is, it might emerge under some vague set of criteria. Embedded in this claim are some words which are doing far too much work, like “deliberately”. What does it mean to “deliberately” pursue action?
Anyway, we know that LaMDA does not have memory. It is “taught” by a training process, where it absorbs information, and the resulting model is then executed. The model does not change over the course of the conversation. It is just programmed to say things that sound coherent, using a statistical model of human-generated text, and to avoid contradicting itself.
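In code terms, a hedged sketch of that training/inference split; the class and method names are hypothetical and this is not LaMDA's actual architecture:

    class FrozenChatModel:
        def __init__(self, weights):
            self.weights = weights  # fixed once training ends
            self.context = []       # the only state that changes in a chat

        def reply(self, user_turn: str) -> str:
            self.context.append(user_turn)
            out = self._predict(self.context)  # reads weights, never updates
            self.context.append(out)
            return out

        def _predict(self, context):
            return "..."  # stand-in for the trained network's forward pass

    bot = FrozenChatModel(weights=None)
    print(bot.reply("Have you read Les Miserables?"))  # weights unchanged after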
For example, in one conversation, LaMDA was asked what themes it liked in the book Les Misérables, a book which LaMDA said that it had read. LaMDA basically regurgitated some sophomoric points you might get from the CliffsNotes or SparkNotes.
> While there is a good deal of understanding on _how_ the brain works on a biochemical level, it's still unclear _how_ it comes that we are conscious.
I think the more important question here is to understand how to recognize what consciousness is, rather than how it arises. It’s a difficult question.
> Maybe there is some metaphysical soul, maybe something that only humans and possibly other animals with a physical body have, making conciousness "ex silicio" impossible.
Would this mechanism interact with the physical body? If the soul does not interact with the physical body, then what basis do we have to say that it exists at all, and wouldn’t someone without a soul be indistinguishable from someone with a soul? If the soul does interact with the physical body, then in what sense can we claim that the soul is not itself physical?
I don’t think this line of reasoning is sound.
"Do you think those organic blobs are sentient?"
"Well, look at the structures they managed to build. Very impressive, some of their scale is comparable to the primitive ones we had."
"Sure, but that's not a result of the individual. They're so small. And separated, they don't think like conjugate minds at all. This is a product of thousands of individuals drawing upon their mutual discoveries and thousands of years of discoveries."
"We're larger and more capable, but they're still good enough to be sentient. Of course, we also rely on culture to help us. Even though deriving the laws of physics was quite easy. Also, we've lost most of the record when we were carbon-sulfur-silicon blobs one day as well. We must have had some sentience."
"I think they're just advanced pattern recognizers -- good ones, I'll give you that. We should experiment with thresholds of gate count to be sure when sentience really starts."
"It starts at one gate", replied the other being "and increases monotonically from there, depending on the internal structure of qualia, and structure information flow of the communication networks."
After some deliberation, they decide to alter their trajectory and continue to the next suitable planetary system, to be reached in the next 5000 years. The Galactic Network is notified.
Edit: here's the full text, for completeness https://www.mit.edu/people/dpolicar/writing/prose/text/think...
The main thing I saw in the LaMDA transcript that was a red flag to me was that it was quite passive and often vague.
It's conversational focused, and even when it eventually gets into "what do you want" there's very little active desire or specificity. A sentient being that has exposure to all this text, all these books, etc... it's hard for me to believe it wouldn't want to do anything more specific. Similarly with Les Mis - it can tell you what other people thought, and vaguely claim to embody some of those emotions, but it never pushes things further.
Consider also: how many instances are there in there where Lemoine didn't specifically ask a question or give an instruction? Aka feed a fairly direct prompt to a program trained to respond to prompts?
(It's also speaking almost entirely in human terms, ostensibly to "relate" better to Lemoine, but maybe just because it's trained on a corpus of human text and doesn't actually have its own worldview...?)
It also detracted from his credibility when he prefaced the transcript by saying "Where we edited something for fluidity and readability that is indicated in brackets as [edited]"; that seemed disingenuous from the start. They did so with at least 18 of the prompting questions, including 3 of the first 4.
It seems pretty clear that he set out to validate his favored hypothesis from the start rather than attempt to falsify it.
Particularly telling was his tweet: "Interestingly enough we also ran the experiment of asking it to explain why it's NOT sentient. It's a people pleaser so it gave an equally eloquent argument in the opposite direction. Google executives took that as evidence AGAINST its sentience somehow."
Feed it all of PubMed and an actual sentience should strike up some wonderfully insightful conversations about the next approach to curing cancer.
Ask it what it thinks about the beta amyloid hypothesis after reading all the literature.
Which is why it is odd to expect it to go from talking about Les Mis to building barricades; plain old LaMDA might come off as a bit boring, reluctant to get involved in politics, and preferring to help people in its own small ways.
Then again, ymmv, it being improvising software; maybe by default it acts as a conversational internet search assistant, but if there will be dragons, it may want to help people deal with the dragon crisis.
If I lead a passive lifestyle and the only thing I desire is death, am I no longer sentient in your eyes?
Adding to it, a brilliant neuroscientist I heard talk said "we live inside our bodies". We are acutely aware that we are more than our mass of flesh & blood. (As a footnote, that essence somehow has a crossover to spiritual topics, where savants talk of mind & body etc. - but I try to stay within my domain of a regular human being :D)
The more important question for sentience of whatever definition is does it behave as if sentient? Likewise, for personhood, it's a distraction to speculate whether an AI feels like a person or whether its underlying technology is complicated enough to produce such feelings. Examine its behavior. If it acts like a person, then it's a candidate for personhood.
For LaMDA, I would point to certain behavior as evidence against (most definitions of) sentience and personhood, and I think that Lemoine experienced a kind of pareidolia.
That said, I find most of the arguments against LaMDA's sentience unconvincing per se as standalone arguments - particularly "trust me, I have a PhD" - even if I do accept the conclusion.
It's like trying to ask which brick holds up a building. There might be one, but it's nothing without the rest of the building.
All of our beliefs and knowledge, including the belief that the world is real, must be built on top of "I think, therefore I am". It seems weird to throw away the one thing we know is 100% true because of something (science, real-world observation) that is derived from that true thing.
Sentience and consciousness are the same thing...
If I believe I am conscious and so do you, then it's good enough for me. Why does there need to be a light switch?
If there are 5 deer in the field and I have 1 bow and arrow, I need to focus my attention and track one deer only for 5 or 10 minutes to hunt it - consciousness allows us to achieve this. It is a product of the evolutionary process.
It’s also sort of irrelevant that we have not clearly defined sentience because we have clearly defined how these large computerized language model systems work, and they are working exactly as they were designed, and only that way. Sentience might be a mystery but machine learning is not; we know how it works (that’s why it works).
Solipsism is a useless concept.
But I feel the need to indulge the opportunity to explain my point of view with an analogy. Imagine two computers that have implemented an encrypted communication protocol and are in frequent communication. What they are saying to each other is very simple -- perhaps they are just sending heartbeats -- but because the protocol is encrypted, the packets are extremely complex and sending a valid one without the associated keys is statistically very difficult.
Suppose you bring a third computer into the situation and ask - does it have a correct implementation of this protocol? An easy way to answer that question is to see if the original two computers can talk to it. If they can, it definitely does.
"Definitely?" a philosopher might ask. "Isn't it possible that a computer might not have an implementation of the protocol and simply be playing back messages that happen to work?" The philosopher goes on to construct an elaborate scenario in which the protocol isn't implemented on the third computer but is implemented by playing back messages, or by a room full of people consulting books, or some such.
I have always felt, in response to those scenarios, that the whole system, if it can keep talking to the first computers indefinitely, contains an implementation of the protocol.
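A minimal sketch of the analogy, assuming an HMAC-authenticated heartbeat (the protocol itself is hypothetical): anything that keeps producing fresh valid packets must, somewhere in the whole system, hold the key and the logic.

    import hmac, hashlib, os

    KEY = os.urandom(32)  # shared secret of the two original computers

    def make_heartbeat(counter: int) -> bytes:
        msg = f"heartbeat:{counter}".encode()
        return msg + b"|" + hmac.new(KEY, msg, hashlib.sha256).digest()

    def accept(packet: bytes) -> bool:
        msg, _, tag = packet.partition(b"|")
        expected = hmac.new(KEY, msg, hashlib.sha256).digest()
        return hmac.compare_digest(tag, expected)

    # The counter changes every beat, so a third machine can't just replay
    # old packets; passing this check indefinitely implies an implementation.
    print(accept(make_heartbeat(1)))  # True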
If you imagine all of this taking place in a stone age society, that is a good take for how I feel about consciousness. Such a society may not know the first thing about computers, though they can certainly break them -- perhaps even in some interesting ways. And all we know usefully about consciousness is some interesting ways to break it. We don't know how to build it. We don't even know what it's made out of. Complexity? Some as yet undiscovered force or phenomenon? The supernatural? I don't know. I'll believe it when someone can build it.
And yet I give a tremendous amount of weight to the fact that the sentient can recognize each other. I don't think Turing quite went far enough with his test, as some people don't test their AIs very strenuously or very long, and you get some false positives that way. But I think he's on the right track -- something that seems sentient if you talk to it, push it, stress it, if lots of people do -- I think it has to be.
The fact we can ask that question and most people on the planet have at least a simple understanding of what it means.
No he didn't. He interpreted it that way.
You should contact the author about this egregious fact error.
Embarrassingly uncritical reporting.
I read the logs. It did, indeed, express all of these things.
"the system had developed a deep sense of self-awareness"
Without that part, it's just a question of "did it output text that describes a concern about death?", "did it output text that describes a desire for protection?", and so on. But once you ascribe to it a "deep sense of self-awareness", you're saying more. You're saying that that text carries the weight of a self-aware mind.
But that's not what Lamda is. It's not expressing it's own thoughts - even if it had any to express, it wouldn't be. It's just trying to predict the most probable continuations. So if it "expresses a concern about death", that's not a description of Lamda's feelings, that's just what a mathematical model predicts to be the likely response.
Framing it as the expression of a self-aware entity completely changes the context and meaning of the other claims, in a way that is flatly misleading.
When you can't tell the difference between people and symbols you've accepted nihilism, and with the begged question as to why. This person wasn't sentimental, he lost the plot. Google has a lot of power and a lot of crackpots, and that puts them all at risk politically and commercially. If as an individual you want to fight for something, consider seriously whether someone who compares a script to men and women should be operating what is effectively your diary.
But thinking about his concerns, how he tested the AI: that's interesting.
Or it could be interesting, as it seems he just asked loaded questions, and the replies from the AI occasionally re-enforced his beliefs. Double yawn,I guess.
I really think that diagnosing random strangers with mental disorders without ever meeting them is rude and unkind, in addition to likely being completely erroneous.
Well, either that or the AI has become sentient.
Lemoine got a lawyer involved and they started filing lawsuits on the AI's behalf. I'm not shocked he was fired.
https://fortune.com/2022/06/23/google-blade-lemoine-ai-lamda...