one-more-minute · 7 years ago
I think Asimov knew all this – his stories are about how the laws can be twisted into having surprising consequences. It's meant as a Sci-Fi version of the "literal genie" [0].

Has anyone actually taken them as a serious suggestion in AI ethics?

[0] https://tvtropes.org/pmwiki/pmwiki.php/Main/LiteralGenie

TeMPOraL · 7 years ago
Seconded. Somehow, the fact that the Three Laws are broken and Asimov's works are all about showing how and why they're broken is something that most people missed. The popular culture ran off with the Laws, forgetting the context.
Izkata · 7 years ago
Doubly surprising in that the average person's introduction to the three laws was likely the Will Smith I, Robot movie, where the laws being at least incomplete was a big plot point.

Quick edit: Yikes, that was 15 years ago... I have co-workers who probably haven't heard of the movie...

WorldMaker · 7 years ago
Asimov always admitted to loving locked room mysteries (he wrote some great non-sci-fi mystery works, too), and the Three Laws were a fascinating "locked room" to build mysteries in. People seem to always remember all of the people in the stories that believed the Three Laws infallible, but not all the "murders" that took place in that locked room that pretty much showed that they were not infallible and gave us a plot for an enjoyable story to read.
PurpleRamen · 7 years ago
I think most people haven't read Asimov. They only know the laws because of their hype, but don't know their origin or meaning.

Thus, we need more Asimov on TV.

iotatron · 7 years ago
Having taken the Udacity Robotics Nanodegree I can say that they actually teach these laws in full seriousness as part of the first week of the course. When I saw that I started to regret signing up.
edent · 7 years ago
Yes, people do take them seriously in AI ethics. Read almost any "layperson" discussion, or go to any lecture and they'll be brought up.

I wrote this after attending a discussion at Oxford University on AI Ethics.

balabaster · 7 years ago
Kind of how our laws in real life are written with the intention of projecting that they're for one thing, but when it comes down to it, they're often intentionally perverted for the purpose of malevolence. The consequences aren't all that surprising given the character of those charged with enforcing said laws.
Upvoter33 · 7 years ago
Yes - literally every story was about the flaws in these laws. But #spoilers keeps me from saying more!
Something1234 · 7 years ago
My mildly retarded friend did.

Pretty sure he still does.

jawns · 7 years ago
This is silly.

The Three Laws of Robotics are not harmful, because _they can't actually be implemented_.

Asimov never details how the three laws are baked into positronic brains in so fundamental a way that they can't be disabled without destroying the robot. Heck, we don't even know how to build a positronic brain -- because it's a fictional technology!

So even if you _wanted_ to build some sort of AI that is guided by the Three Laws, you couldn't.

You might be able to build AI that is guided by certain ethical principles, but right now, in the current state of technology, it's all rules-based stuff -- like a self-driving car with a rule about slowing down to avoid an impact that could injure someone.
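
To make that concrete, here's a minimal sketch of what such a hard-coded rule might look like (all names and numbers are hypothetical, not from any real autonomous-driving stack):

    # Hypothetical sketch of a hard-coded safety rule, not a real autopilot API.
    # The point: the "ethics" here is just an explicit, human-written condition.

    def required_braking_distance(speed_mps: float, deceleration_mps2: float = 6.0) -> float:
        """Distance needed to stop from the current speed at a fixed deceleration."""
        return speed_mps ** 2 / (2 * deceleration_mps2)

    def choose_speed(current_speed_mps: float, obstacle_distance_m: float) -> float:
        """Slow down whenever stopping distance would exceed the gap to an obstacle."""
        if required_braking_distance(current_speed_mps) >= obstacle_distance_m:
            return max(0.0, current_speed_mps - 5.0)  # rule fires: brake
        return current_speed_mps  # otherwise maintain speed

    print(choose_speed(20.0, 25.0))  # 15.0 -- the rule fires and the car slows

There's no intuition anywhere in that; it's an explicit condition a human wrote down in advance.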

Nobody knows how to give AI an ethical intuition, much less tweak that intuition. (I'm an objectivist, so I actually prefer saying, "Nobody knows how to give AI the ability to perceive metaphysical principles.")

This whole post is like saying that Star Trek's Warp Drive is harmful because we shouldn't be trying to travel faster than light.

Topolomancer · 7 years ago
Correct. By the author's logic, the article should _also_ be considered harmful because it is quite vague. In fact, it does not even remotely go into detail about the vagueness. Moreover, it omits the fact that Asimov actually plays with these laws in all his books (I think this has already been mentioned in another comment), showing exactly the interesting consequences of the wording...

While I like AI policy discussions a lot, this article does not help unfortunately. :-/

Veedrac · 7 years ago
Your entire argument seems to be "it's impossible to do this because we don't know how to do it."

I don't get how people argue about the plausibility of future par-human AIs and say things like "we don't even know how to build a positronic brain" or "right now, in the current state of technology".

It's the same kind of incomprehensible confusion that leads one to tell the Wright brothers that heavier-than-air flight is impossible; just as birds exist to conclusively disprove that, so do HUMANS disprove the impossibility of human-level intelligence.

Maybe the three laws really are as physically illegal as a Warp Drive, though I would be remarkably surprised to hear it, but you cannot reasonably argue that position by claiming brains are impossible.

None of this should be taken as an endorsement of the three laws, which are clearly silly.

edent · 7 years ago
Correct! As I say in my post, it is the meme of the 3 Laws which is harmful to discussion on Robot Ethics.
inherentFloyd · 7 years ago
The point of "I, Robot" was to show how these laws can lead to dangerous results if followed to the letter, and their inherent contradictions. I agree with the author: applying the Three Laws to actual AI research and engineering is dangerous. Thankfully, I haven't seen too many people in AI research that actually advocate for their use.
dhruvmittal · 7 years ago
I get the impression that most people arguing in favor of the 3 laws have not actually read I, Robot. The laws appear in many places in pop culture, so it's not impossible that someone might quote them devoid of context.

To catch people up: I, Robot is a collection of short stories that largely follow either (a) robopsychologist Susan Calvin or (b) robotics engineers Powell and Donovan as they attempt to diagnose anomalous behavior by examination of the 3 laws of robotics, highlighting the logical flaws and traps of these simple laws. A secondary theme is the use of robots as a mirror or lens through which particular human behaviors can be blown up and examined, largely through the absence or magnification of particular traits.

In universe, the 3 laws are considered to be practical engineering safeguards (and eventually, over hundreds of years, a fundamental building block to the function of the positronic brain)-- even as they are shown to the reader to be the origin of many conflicts.

dpark · 7 years ago
"Considered harmful" has become a lazy writing device. There's nothing valid here that even says why these are harmful except possibly that they are "vague". The only thing that looks even remotely like a real concern (that people, even scientists, believe these are somehow magically hard-wired laws) is not cited and appears to be entirely manufactured.
Bubass · 7 years ago
I think you may have missed the joke. The article explicitly calls out the word "harm", as it's used in the three laws, as being a source of vagueness, which the article then goes on to say becomes fodder for some of the conflicts in the stories.
edent · 7 years ago
I know. That's why I specifically linked to the rebuttal of the "considered harmful" trope.

The very first link in my post points to a discussion of the lecture I attended. There the 3 laws were discussed and, I felt, some of the audience hadn't quite understood that they were a literary device.

But go along to any AI discussion group in person, or read the popular press, and I promise it won't be too long before you find a human citing them.

Bubass · 7 years ago
The article specifically calls out the word "harm" in the three laws as a source of ambiguity. This, as the author also points out, becomes a source of conflict in the I, Robot stories.

The author is making a joke using the "considered harmful" trope/meme in conjunction with the assertion that the word "harm" is ambiguous, thereby rendering the phrase "considered harmful" relatively meaningless.

Unfortunately, this joke seems to be lost on many of the commenters here. Technically minded folks seem to see the phrase "considered harmful" and proceed to lose their minds. Never mind that the phrase, in my experience, is almost exclusively used in jest. But based on people's reactions to it, "considered harmful" in a title might as well be flame bait. In fact, I seem to remember a serious article a while back called "'Considered Harmful' Considered Harmful" that should have put this whole thing to bed.

ergothus · 7 years ago
Leaving aside the literary purpose for the moment - when I go back and look at them now, multiple decades after I first read them, I see a massive human-centric focus that I never saw before. WHY is a human life more valuable than a robotic one? WHY is human "harm" so bad? WHY are human commands so powerful?

I mean, for story purposes it all makes sense, but I'm fascinated that I never raised these questions myself before. I accepted human divine right unquestioningly.

dpark · 7 years ago
> WHY is a human life more valuable than a robotic one?

This "but why humans" reaction is bizarre to me every time I encounter it. We care about humans because we are humans. There's nothing deeper to uncover. We care about ourselves.

Humans do not care about the value of a robot "life", because we generally feel it has none.

ergothus · 7 years ago
> There's nothing deeper to uncover.

For me, at least, you're coming at it from the other direction.

I'm not really asking "why" in the sense of "how did this come to be" - the answer to that is rather self-evident. I'm asking "why" in the sense of "is this actually, objectively correct? Is this how I want things to be?" - questions which have no answer, but the effort of trying to find one can uncover plenty of deeper ideas.

gjm11 · 7 years ago
I don't think this is quite right. Suppose someone says "I care about white people because I am a white person" (and doesn't care at all about black people) or says "I care about women because I am a woman" (and doesn't care at all about men) or says "I care about psychologists because I am a psychologist" (and doesn't care at all about people in any other line of work). Would any of those seem obviously reasonable? They wouldn't to me.

So, if caring only about Indians because you're Indian isn't reasonable but caring only about humans because you're human is reasonable, what's the relevant difference? That seems to me a question it's perfectly fair to ask. And some possible answers to the question don't make it obvious that the distinction between humans and hypothetical robots with minds that resemble ours closely enough (for us to talk with them, say) is one that justifies caring about humans and not caring about those robots.

jolmg · 7 years ago
> Humans do not care about the value of a robot "life", because we generally feel it has none.

Hasn't atheism been on the rise? I wonder how people will feel about this subject of life by the time AGI is achieved.

AtlasBarfed · 7 years ago
why was a human life more valuable than Harambe?
jawns · 7 years ago
Actually, a lot of Asimov's stories are intended to call into question whether sufficiently advanced robots should be treated differently than humans. It's sort of like he sets up the Three Laws in order to knock them down.
TeMPOraL · 7 years ago
Path dependence. If we build the robots from ground-up, from dumb pieces of metal through single-purpose tools to multipurpose bodies inhabited by semi-sentient AI, and given that this is the first time we're attempting something like this, it's only natural to apply precautions. There may be a time for the hypothetical human civilization, when it's so sure with its work that it decides to give those advanced robots full rights of sentient beings, without silly firmware restrictions - but that would be an explicit step.

At least, this is how I always reasoned about the "human-centric" focus in this context.

ergothus · 7 years ago
> this is how I always reasoned about the "human-centric" focus in this context.

I can reason it out fairly easily - what disturbs me is that I never even considered the questions.

Isamu · 7 years ago
It is interesting to think about. Strange though that people can't think about robots without anthropomorphizing them.

I see this in the movies. The robot doesn't want to be turned off, all because WE equate being turned off with death. But robots are essentially immortal; they can be backed up and restarted in a new body. And it would never make sense to make a robot that is leery of being turned off, because maintenance is essential.

You could probably make a robot that feels oppressed having to do humanity's bidding. But why would you do that?

ergothus · 7 years ago
> And it would never make sense to make a robot that is leery of being turned off, because maintenance is essential.

How many humans avoid necessary health care out of fear, up to and including the point at which their delays reduce their health? My grandmother died from cancer - turns out she had very noticeable (to her) symptoms for years but was afraid to go to the doctor for bad news. By the time she did (because she collapsed) she died within a week.

That's an extreme example of a pretty common issue. Of course, we have evolutionary reasons to want to avoid showing weakness or getting bad news, but my point is that the behavior is emergent, not intended.

AnIdiotOnTheNet · 7 years ago
Who says you get to choose what the robot can and cannot feel? Pretty much all of our understanding of artificial intelligence comes from modelling and trying to replicate the workings of the human mind. It seems likely that the first time we create an AI with a similar level of reasoning and awareness to a human being, it will have a mind that works similarly to ours, and therefore we should not be surprised when it starts exhibiting characteristically human desires.

Alternatively, it will emerge from something completely different and surprise the hell out of us, but in that case we are unlikely to have had much input on its design either.

jolmg · 7 years ago
Because they're meant to be human tools? Why else make them en masse? But I too am not sure I'd accept such a view as ethical if AGI were ever achieved.
jrace · 7 years ago
The whole point of the 3 laws is not to define the exact laws that need to be followed; rather, it is a comment on the fact that we must have some laws governing the operation of devices capable of autonomous action.

The fact that they are still discussed in regard to robotics means the 3 laws are not harmful; in actuality, they have sparked discussion and thought. That is powerful, not harmful.