Imagine an autonomous car driving at 60+ mph on a two-lane road that (1) is blocked on both sides and (2) has a pedestrian crossing. (Not sure about 60 mph, but I guess you need to be going that fast to reliably (hah!) kill the passengers on impact.)
We can assume its camera is broken, because it failed to reduce speed (or at least blare the horn) upon seeing a giant concrete block in its path. (Okay, maybe the concrete block fell from the sky when a crane operator failed to secure the load, so the car might have had no time.) And of course the brake is broken. Miraculously, the steering wheel is working, but skidding along the side barriers is out of the question for some reason. Maybe there's actually a precipice on either side. (Imagine that: a 60+ mph two-lane road, precipices on both sides, with a pedestrian crossing appearing out of nowhere.)
Oh, by the way, within 0.5 seconds of seeing the people (remember: the car couldn't see these people until the last moment, otherwise it would have done something!), the car has instant access to their age, sex, profession, and criminal history. The car is made by Google, after all. (Sorry, bad joke.)
Q: What is the minimum number of engineering fuckups needed to realize this scenario?
This is to morality what confiscating baby formula at the airport is to national security.
>The second question which required a choice between killing homeless people and wealthier ones made me too disgusted to continue.
Then you're not going to like this factoid. In wrongful death civil suits in the US, the monetary award is based on the current income and earning potential of the deceased; the courts place a vastly lower monetary value on the lives of homeless people than on those of the wealthy.
The system isn't making any value judgment; it's gauging whether /you/ are. And then it compares your value judgments with others. It's interesting in that it has the potential to show your biases, at least relative to the average.
I stopped on the eleventh. I had been justifying the riders dying in most situations because of their choice to get into this murder machine. Then I got the case of the little girl in a self-driving car about to run over a little boy. Killing the girl does nothing but punish the parents for sending their child to a destination in a self-driving car. Killing the boy does nothing but punish the parents for sending their child to a destination on foot.
If this computer is so good it can figure out profession and intent, it should also give me a snapshot of these children's futures so I can make a nuanced decision.
I don't think this is about creating realistic scenarios, but about finding out what people take into consideration when making moral judgements. The experiment seems to be designed to gather as many such preferences as possible.
The hope must be that if people consistently prefer saving the lives of young people in this made-up scenario, they will have similar preferences in a more realistic scenario. Of course, whether such a generalization holds will have to be confirmed by further studies. But this seems like a good first step toward exploring moral decisions.
This study will learn things about how a biased sample of the population makes choices in a simple game depending on the context given; extrapolating from that basis to anything wider (for example, the notion that the players view these choices as "moral") requires far more work. It is well known that people use games for escapism, so it is not at all obvious that the decisions they make in a game map cleanly to their real opinions just because you put "moral" in the title of the game.
It's also worth keeping in mind that moral decisions have been studied for quite some time; the novelty here is mainly the mode in which the population is sampled.
Or, hell, making barriers that aren't just concrete!
The truth is, if the "moral cost" is high enough, we'll just solve the problem of people dying when they crash in X% of cases, until people/companies feel good about X vs what they pay for X.
Thought experiments are supposed to tell us something interesting by simplifying details while preserving the crux of the matter. Otherwise their value is questionable.
I could have asked "If I could dip my head into a black hole and take it out again, what will I see?" That is also a thought experiment, just not a useful one.
The concrete block might just be a truck entering the intersection while its driver is distracted.
Sure, most moral dilemmas of this kind should be resolved by "install a longer-range sensor", but other people's mistakes are gonna be an important factor in these scenarios until all cars are driverless.
Here's the heuristic I'm least uncomfortable with:
- avoid collisions with things or people if at all possible
- if collisions are unavoidable, choose whatever option won't harm anyone
- if harm is unavoidable, select the option that harms whoever created the unsafe situation by doing something they shouldn't have
- if harm to a law-abiding, normally-behaving person is unavoidable and a critical safety feature of the vehicle has failed due to lack of maintenance, prefer harm to whoever is responsible for vehicle maintenance
- if the situation is unrelated to vehicle maintenance, or harm to some other party is unavoidable, choose the option that maximizes the likelihood of people getting out of the way, minimizes impact speed, and isn't overly surprising (i.e. prefer to stay in the same lane if possible), honk the horn, and hope for the best (rough code sketch after this comment)
So, if someone runs into the street suddenly in front of oncoming traffic, the car should not choose an option that harms someone else due to that person's poor choice. Similarly, if someone neglects the maintenance of their car, they should bear the responsibility for it. (Ideally, a "car no longer responsible for protecting your life" light would come on or the vehicle would refuse to start if regular maintenance is overdue.)
If lack of maintenance is creating an identifiable unsafe situation, the automated system should refuse to operate the vehicle at full capacity.
(all the potential objections to that policy are answered by pointing out that the operator of the vehicle has a responsibility to other occupants of the road.)
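For concreteness, that priority ordering fits in a few lines of Python. This is a sketch only: every name below is invented, and in reality the hard part is perceiving and estimating any of these facts, not ordering them.

    from dataclasses import dataclass

    # Every name here is invented for illustration.
    @dataclass
    class Option:
        collides: bool
        harms_someone: bool
        harms_only_at_fault_party: bool
        harms_only_maintenance_party: bool
        lane_change: bool
        impact_speed: float

    def choose_action(options):
        # 1. Avoid collisions with things or people if at all possible.
        safe = [o for o in options if not o.collides]
        if safe:
            return safe[0]
        # 2. Collisions unavoidable: prefer options that harm no one.
        harmless = [o for o in options if not o.harms_someone]
        if harmless:
            return harmless[0]
        # 3. Harm unavoidable: prefer harming whoever created the
        #    unsafe situation.
        at_fault = [o for o in options if o.harms_only_at_fault_party]
        if at_fault:
            return at_fault[0]
        # 4. Critical failure from neglected maintenance: prefer harm
        #    to whoever was responsible for the maintenance.
        owner = [o for o in options if o.harms_only_maintenance_party]
        if owner:
            return owner[0]
        # 5. Otherwise: minimize impact speed, avoid surprising swerves
        #    (stay in lane if possible), honk, and hope for the best.
        return min(options, key=lambda o: (o.lane_change, o.impact_speed))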
What if (full) use of the car is required? Sometimes things break (or rather, a threshold is crossed), but it won't become known to the car, and thus the owner, until the car is needed. A wife going into labor, for example.
I found myself biased in favor of protecting pedestrians, even pedestrians who crossed against a red light.
I think that's closer to how I drive than, say, reducing the value of a pedestrian who is crossing the street illegally. And I think I drive that way because at some point the statement "pedestrians always have the right of way" was drilled into my brain.
I ignored any "social value", gender, or age concerns in answering. The results claimed I had a preference for saving the elderly, but otherwise I ended up precisely on the center line on most issues.
My rules, however, were simple: avoid swerving, and prefer killing the people in the car over people outside, all other things being equal. If pedestrians are running the light they get less sympathy, but I'm not going to swerve to hit them.
Forget the pets, why would anyone want to save them? Apparently some people did.
Also: How does a car know if someone is a "criminal" anyway and what does that mean? Does that mean a released felon would be "picked" to be run over if it face-detected them against a database of known persons? Criminals don't run around in stripes with bags of loot!
I followed exactly the same line of reasoning except that I did swerve to save people crossing on a green light at the expense of people on a red light.
The reasoning being something like: "The passenger of the car should absorb the risk of a car failure, not pedestrians. In the case pedestrians must die, protect those who follow the rules if possible."
Incidentally my answers meant I always killed women instead of men...
Fun test to take, but seriously hope they're not drawing any conclusions from the mix of people I "preferred" to save or kill. I didn't consider the age, criminality, or gender of any of the pedestrians or occupants I killed or saved. I just erred toward non-intervention, unless the intervention choice saved bystanders at the expense of occupants. When the potential casualties were animals, they all died.
I followed a similar algorithm, considering all lives equal. Injury versus death: prevent death. Uncertainty versus death: prevent certain death, and assume passengers are more likely to survive an accident because they're better protected. Certain pedestrian death versus certain pedestrian death: prefer non-intervention over intervention. Certain passenger death versus certain pedestrian death: protect the passengers.
Justification for that last one: self-driving cars will be far safer than a human driver, such that it'll save many lives to get more people using self-driving cars sooner. Self-driving cars not prioritizing their passengers will rightfully be considered defective by potential passengers, and many such passengers will refuse to use such a vehicle, choosing to continue using human-driven cars. Thus, a self-driving car choosing not to prioritize its passengers will delay the adoption of self-driving cars, and result in more deaths and injuries overall.
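Spelled out as code, that ordering amounts to something like this (a sketch; all fields are invented for illustration):

    from dataclasses import dataclass

    # All fields invented for illustration.
    @dataclass
    class Outcome:
        certain_death: bool  # does someone definitely die?
        victims: str         # "passengers" or "pedestrians"
        swerve: bool         # does this option require intervening?

    def choose(a: Outcome, b: Outcome) -> Outcome:
        # Injury/uncertainty vs death: prevent certain death (passengers
        # are assumed likelier to survive, being better protected).
        if a.certain_death != b.certain_death:
            return a if not a.certain_death else b
        # Certain passenger death vs certain pedestrian death:
        # protect the passengers.
        if {a.victims, b.victims} == {"passengers", "pedestrians"}:
            return a if a.victims == "pedestrians" else b
        # Like vs like: prefer non-intervention.
        return a if not a.swerve else b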
"Certain passenger death versus certain pedestrian death: protect the passengers."
Pondering that question made me imagine some bad Sci-Fi future where self-driving cars end up being dangerous killer-robots for anybody but the passengers.
If pedestrians have to fear these things, because they are programmed in a "Protect the pilot above all else!" way, it might hamper adoption just as badly.
I took the same sort of dispassionate approach, valuing the lives of the passengers above all else and staying the course otherwise. I was disappointed to discover the parsing of the results had no room for such a methodology. Based on my entirely algorithmic approach, it was determined that I favored youth and fitness.
Your results might seem spurious because of the small sample size, but when aggregating the results of all participants they will have enough data to distinguish how many people acted like you did, with apparent preferences arising by chance, and how many actually were "biased" in some way.
Fair point. If they look at all respondents who answered the 13 questions exactly the same way I did, then they'd be able to see whether the doctor/criminal, fat/thin, and female/male distributions are noisy or correlated.
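To put a number on "noisy": with only 13 forced choices, even a pure coin-flipper produces lopsided-looking splits fairly often. A quick check (the 10-vs-3 split is an arbitrary example):

    from math import comb

    # With 13 forced A/B choices, how often does pure chance produce an
    # apparent preference at least as lopsided as 10-vs-3?
    n, k = 13, 10
    p = 2 * sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n  # two-sided
    print(f"P(a {k}/{n} split or worse by chance) = {p:.3f}")  # ~0.092

And since several attributes are measured at once, a fair share of purely random responders will look "biased" on at least one of them.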
Many of these are what we call "false choices" of the sort that typically arise in the hypothetical utilitarian dilemmas used in rhetoric and debate. Humans are creative enough to see alternative options that obviate the dilemma, at least to some extent. See Michael Sandel on moral dilemmas.
Edit: FYI, Sandel's complete course "Justice" is on Youtube.[1]
[1] https://www.youtube.com/watch?v=kBdfcR-8hEY
This seems drastically oversimplified. For instance, all of the scenarios depicting a crash into a concrete barrier assume the death of everyone in the car, but generally a car has far more protection for its passengers (airbags, seatbelts, crumple zones, etc), than pedestrians have from being struck by a vehicle.
The "car accident" is a straw setup. The real question of this study, and it's blind assumption, is that some human life is more valuable than other human life, and that "morality" is the task of baking these judgements into a database.
These are all false dilemmas with artificially limited outcomes. In these particular situations, the option of randomly choosing is not even considered by the study. The presumption is that these kinds of things are decidable, not only by human beings, but by well designed machines.
I'm feeling really discouraged right now that this even exists, much less from such a powerful institution as MIT...
Yes, in reality that's true, but the scenarios are deliberately simplified, and you're supposed to take "everyone dies" as a fact of the universe this is taking place in. Arguing about the facts avoids the main question being asked: given the ability to choose between these two, which is the more /moral/ choice?
Really, there's rarely going to be a chance for a machine to even make these decisions, because a car being so out of control that it has only two options, yet still controllable enough to take either of them, is practically impossible. Much less it having the ability to /know/ that action A will kill pedestrians while action B kills passengers.
For the first barrier question, I chose to "hit the pedestrians".
The probability of hitting the wall if you drive at it is 1. The probability of hitting the pedestrians isn't necessarily 1, since they can react to you. Probably not very well, but perhaps they can jump out of the way or behind the barrier or something.
Also, can this car not also HONK LOUDLY when it makes the decision to drive towards the pedestrians? This would further lower the risk that the pedestrians will actually get hit.
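In expected-casualty terms (the probabilities here are entirely made up, just to show the shape of the argument):

    # Made-up probabilities, just to show the shape of the argument.
    p_wall = 1.0  # a concrete barrier can't dodge
    p_peds = 0.6  # pedestrians might jump clear, especially if you honk
    occupants = pedestrians = 4

    print("expected deaths, wall option:", p_wall * occupants)    # 4.0
    print("expected deaths, ped option: ", p_peds * pedestrians)  # 2.4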
Exactly what I came here to say. Activate the horns/car alarm, switch off the engine/disconnect the clutch, avoid obstacles if possible but otherwise go in a straight line, below some speed threshold hitting a solid obstacle is acceptable. Done.
Same here. It was a bit baffling that the scenario is apparently a self-drive car with a poison gas capsule attached such that contact with any obstacle kills all occupants. Sheesh.
If the car has the ability to make these kinds of distinctions in such a simple scenario, surely there are more options than two. "Moral"? It smells like eugenics to me: some bizarre, Ivy League technocratic posthumanism. Somehow the value of a life is determined by age, sex, and profession? Says who? The people programming the dataset via HTTPS? This is collapsing nuanced spiritual and ethical intuition onto an extremely narrow, low-dimensional set of parameters.
What's the premise? This triggers in me an image of naive, optimistic, well-adjusted Germans in the 1930s. I know this was probably created with good intentions, but the premise does not match the research question. The premise is "morality", yet it's asking me to rank the value of human life based on presumptuous, superficial categories.
Is "the moral machine" going to also decide which births have more utility? Which countries to send aide to? Who should have access to educational opportunities or quality food? Based on low dimensional datasets such as this one?
Yep. Going in, for some reason I thought this would be a kind of logic puzzle, e.g. given these pedestrians and road hazards, navigate to safety. The first scenario I saw was "hit and kill three joggers, or hit and kill three fat people?" That's just... morally tone-deaf and sick on about half a dozen levels.
Agreed. This has too many axes of difference, and doesn't have sufficiently careful controls and consistency to determine which of them are being consistently ignored.
It looks like they were trying to see if people place differing amounts of value on different human lives, but in the process of doing so, they made ridiculously strange value judgments. "Athlete"? "Executive"? "Large"? Why should any of those matter? We're talking about human lives.
I think part of their intent was to show biases in judgement. The small sample size really hindered this though. Apparently I 100% preferred old people to young people, even though I didn't consider age in my decisions at all.
They do have a little disclaimer on the results screen about the sample size though.
We do make moral choices, and there are rules and heuristics we use; they might be quite complicated, and they might not be what we think they are, but I think it should nonetheless be possible to come close to predicting human moral decision-making by using an accurate enough model.
And since autonomous vehicles will have to make decisions that have moral implications, they had better do so in a way that humans will be happy with. I think this is an important area of research. This won't mean machines have morals of their own, whatever that means, but that they should do what (most?) humans would consider morally right. And what do humans consider morally right? Well, that is exactly what we should try to find out.
I agree that categorizing morality into buckets seems strange. However, over a large sample, surveys with even these very limited, artificial choices can paint a surprisingly accurate picture of the nuanced, fine-tuned moral compass of a society.
I would like to have automated systems use logic that reflects the morals and values of the society in which they operate. I don't know how to measure those accurately. These sorts of exercises seem like a good start.
And I agree that it might be possible to build useful models of human morality with small sets of parameters... just not the way this study is set up. Not with the parameters they're measuring, and definitely not from the logical presumptions of the experiment.
I am presented with the choice that either 4 women must die or 4 men must die. For me, it would be more "moral" in this case for the computer to choose randomly, rather than to attempt some shallow, eugenically judgemental "moral" logic.
I'm also aware that these kinds of rules, regardless of their "morality", can be gamed. Randomization increases the risk for anyone considering playing such games. This adds more weight to my conviction that, if some of these false dilemmas really did present themselves to a machine in real life, randomization must be an option.
How does this moral logic map to this survey? It doesn't map, not one single bit. That irritates me, because if I were to click through this survey using eenie-meenie-miney-moe in cases where I felt randomization would be more moral, it would be all but lost in the error. The MIT students would go on CBS morning media and talk about all the bias they measured in my choices of whom to murder. But their data would be totally polluted by the way their study discounted moral logics outside of their parameter set, and important parts of my moral reasoning would be lost in the error bars.
What's more, my conviction about the necessity of randomization is just one of a huge variety of moral considerations that are inherent to people's sense of morality.
Hopefully the study is considering these kinds of things and has some clever mathematical way of extracting useful information out of this data. For example, hopefully they are measuring the number of people who visited these pages but refused to make a choice.
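For what it's worth, the randomization being argued for is trivial to implement. A sketch, where harm_score is a hypothetical stand-in for whatever casualty estimate the car can actually compute:

    import random

    # harm_score is a hypothetical stand-in for whatever casualty
    # estimate the car can actually compute.
    def tie_break(options, harm_score):
        least = min(harm_score(o) for o in options)
        finalists = [o for o in options if harm_score(o) == least]
        return random.choice(finalists)  # unpredictable, hence hard to game

A random choice over the tied finalists is exactly the eenie-meenie option the survey doesn't offer.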
In that sense I guess the thought experiment worked, perhaps not as intended though. It showed the absurdity of pretending a car can make moral choices.
If anything, it'd be more "moral" in situations like this for the "car" to choose a random sacrifice than to attempt some half-assed wanna-be-God crap.
I'm honestly appalled at this right now. It's not like people haven't seen this sort of thing coming down the line. It's just surreal to watch it arrive.
Rule one: save all your passengers. Nobody would buy a car that has the death of its passengers as an acceptable scenario, and Jeff from marketing will be on my ass otherwise.
Rule two: kill the fewest people outside of the car. Done.
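In code, that whole "policy" is a few lines (a sketch; the fields are invented for illustration):

    # A sketch; the fields are invented for illustration.
    def choose(options):
        # Rule one: never pick an option that kills a passenger, if avoidable.
        survivable = [o for o in options if o.passenger_deaths == 0]
        if not survivable:
            survivable = options  # passengers can't be saved; fall through
        # Rule two: among what's left, kill the fewest people outside the car.
        return min(survivable, key=lambda o: o.outside_deaths)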
I know this is a thought experiment, but this is completely missing the point of self-driving cars, IMO. Sure, a human can be more moral than a car, but all it takes is being distracted for a second and you've killed all the babies on the pavement.
How is that for a thought experiment?
Say I build a self-driving car that, when faced with such cases, does the equivalent of "Jesus, take the wheel". This is well known to the owner of the car.
In case of injury or death, who should go on trial?
It's a game of "would you rather" pretending to be about self-driving cars.
I actually didn't give a fuck who the people were, or how many.
I was surprised when by the end I was shown the results including demographics.
If that's true, I'd go so far as to venture a guess that these vehicles will have completely failed.
Do people truly believe no one is working on better emergency braking, crash foams, etc.?
The reason they aren't used is mainly that the expense-versus-value case isn't there (for manufacturers), not that they don't exist.
For example, metal foams (http://onlinepubs.trb.org/onlinepubs/archive/studies/idea/fi... and friends), etc.
It felt like it was homing in on my preference toward the end, but I'm not sure.
Adware on everyone's phone is broadcasting a SocialWorthScore™; without one, it's assumed to be a large negative number.
If negligence causes a problem with the car, I don't think it should be taken out on others.
Without a steering column there is even more room for safety features, making this even more attractive.
Rule two: intervene only if it doesn't mean killing a person who otherwise would not die. Done.