I mean, I understand Tesla has to make a statement here, and I understand they want to assure everyone that it's not really their fault, but to title a post "A Tragic Loss" and then spend the majority of the post discussing all of your car's safety features and how it wasn't your fault just seems tone-deaf and distasteful to me.
Maybe they had to do it for legal reasons, I don't know (I'm certainly not a lawyer), and I'd love to own a Tesla, but couldn't they have worded this a little more sympathetic and a little less lawyer?
I feel 'A Tragic Loss' sets the tone for the highly technical overview that follows; instead of the technical breakdown seeming tone-deaf, the title carries that tone through to the concluding paragraph.
Unfortunately (or fortunately), Tesla has to educate people and be very clear, so as to leave no room for panic and unwarranted fear-mongering.
I would say it is as to the point as it can be, and that it is heartfelt.
In addition, it had important information on how other Tesla drivers can use the auto-pilot feature more safely. The article combines condolences and helpful safety information to prevent this kind of thing from happening again. Also, you don't see GM making a blog post every time someone passes away in a car they make.
All that matters is: did the technology fail? I don't care about crash rates in other cars, or about what would have happened if the impact had been different; the only fact that matters in this case is whether the tech failed. If so, then is it safe to leave on, or should it be disabled across the board until it cannot fail in this scenario again?
One failure and they will take a minor publicity and money hit; two, and it's going to be devastating.
They have to defend autopilot not only to protect the brand but to protect the public's perception of autonomous vehicles in general.
Self driving tech is poised to save many many lives. So from a utilitarian perspective, it's probably justified to take extraordinary measures to make sure reactionary media and public whim don't kill it off, however uncomfortable that might seem in the short term.
Whilst this case is incredibly sad (and I don't want to downplay that in any way), if you're trying to minimise the overall number of fatal crashes, exonerating the tech is the priority (if it is truly not at fault).
They could defend the public's perception of autonomous vehicles in general by not rushing to market a beta that is less technically capable than other autonomous systems being worked on by their competitors.
> They have to defend autopilot not only to protect the brand but to protect the public's perception of autonomous vehicles in general.
Why? Why not just build the cars people want? (including cars that people want but don't realize they want yet)
There shouldn't be a political agenda associated with engineering. Build what is needed. Build what people want. Build what people will need. But never "defend my reputation and the reputation of this device that I'm making"
EDIT: I mean, I get why Public Relations are important and so forth. So Tesla is certainly free to do what they want here. But let's not pretend that this carefully crafted "condolence" piece that has come out roughly one and a half months late is anything but damage control for this company.
> They have to defend autopilot not only to protect the brand but to protect the public's perception of autonomous vehicles in general.
They could have done that in two separate posts. This was tone deaf at best.
> Self driving tech is poised to save many many lives.
The computer would have to be 99.99999% reliable to do that.
The accident rate is around 74 per 100 million miles (and the fatality rate is 1.13).
It's unclear exactly how to turn that into a percentage, but no matter how you do it it's quite high.
Say an accident takes 5 minutes, and people drive 30 miles/hour. Then that works out to 99.999% for humans. If you use the numbers for fatalities then it's 99.99999%.
I.e. 99.99999% of the time, as a whole across all [US] humans, people drive in a way that does not cause a fatality.
That's the bar computers have to cross in order to save any lives at all.
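For anyone who wants to check the arithmetic, here is a quick back-of-the-envelope version of it (assuming, as above, 74 accidents and 1.13 fatalities per 100 million miles, an average speed of 30 mph, and 5 minutes "consumed" per incident):

    # Back-of-the-envelope version of the parent comment's numbers.
    MILES = 100e6
    AVG_SPEED_MPH = 30.0
    INCIDENT_MINUTES = 5.0

    total_driving_minutes = MILES / AVG_SPEED_MPH * 60.0   # ~200 million minutes

    for label, incidents in [("accidents", 74.0), ("fatalities", 1.13)]:
        bad_minutes = incidents * INCIDENT_MINUTES
        reliability = 1.0 - bad_minutes / total_driving_minutes
        print(f"{label}: driving is incident-free {reliability:.7%} of the time")

    # accidents:  ~99.99982%  (about "five nines", as the comment says)
    # fatalities: ~99.9999972% (a bit better than "seven nines")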
I agree. I'm starting to get really weirded out by Tesla's immediate "not our fault!" blog posts every time a Tesla is involved in any kind of accident. I could see the necessity the first few times back when it was new technology to a lot of people (and Top Gear had it in for them), but with it happening over and over it seems whiny and a bit callous. How long do they plan to keep doing this?
It is because, if Tesla allows it to happen, the media will attack them incessantly over these things because they smell a story (and because a lot of vested interests are willing to pay for PR). If you don't defend yourself in that kind of situation, you die.
Yeah the thing that rubbed me the wrong way was how they responded to the NYT review about cold weather performance. I've been highly skeptical about Elon and Tesla ever since.
As long as the media continues to blame their new cars in clickbait articles, and until they are treated the same as any other car manufacturer that's not blamed for every incident involving their vehicles.
I half expected the final paragraph offering condolences to end with "but remember, this really wasn't Tesla's fault!" The third paragraph reads like Tesla is trying to wash their hands as much as possible. "Remember, he clicked all these buttons. He knew what he was getting into!"
It really makes you consider why you would even want such an autopilot. From the description in this eulogy, it sounds like you must work even harder with autopilot on than if you were driving manually. Not only do you need to have the same alertness as manual driving, you also have to be ready to take over at any moment and try to detect if the autopilot has made any mistakes that might guillotine you, or worse.
Yes. Last time, their excuse was "but the driver should have known that in firmware revision 6.2 and later, tapping the brake disables automatic braking".
This time, there's an interesting question. Did Tesla remotely access the crash data after the crash? Did they alter any data? Is that verifiable? The NTSB will probably explore that issue. The crash data record in an airbag controller becomes read-only when the airbag fires.
Press releases like this are for investors and media outlets, not for general consumers. Of course they are going to focus on damage control, because if they didn't focus on the safety features of autopilot and the hands-on requirement some media outlet would contort the situation and say "What safety features does this car have to prevent an accident like this in the future? Is this technology actually ready to be used on our roads?".
I think they did a fine job of handling this, and with the high visibility of this incident due to the use of autopilot they really had no other option.
I would not call a sentence in a manual a "requirement"
A requirement would be if the autopilot didn't work unless one has at least one hand on the steering wheel. (Just like other manufacturers do.)
Tesla knows people are not going to use autopilot hands-on (especially after the novelty wears off)
Reminds me of their complaint blog post about why the German government initiative is capped at 60k, where it seems like half the point was to advertise a new lease deal.
https://www.teslamotors.com/de_DE/blog/teslas-antwort-auf-da...
>couldn't they have worded this a little more sympathetic and a little less lawyer?
Sure, if all the lawyers will promise not to take some statement out of context and sue them over that. As long as such lawyers exist, and that's the way the legal system works, this is what can be expected of statements from companies.
Yeah, I had the same issue with the writing. I think it would have gone much better if the meat of the concluding paragraph came first, possibly with a matching paragraph at the bottom.
Personally, I would have used a more neutral title, led with sympathies for the family, and then gone into the technical detail.
I was even left with the sense that the tragedy actually felt by the writer was not so much the death of "a friend," as the fact of a blemish on the near-perfect safety record that Tesla has made part of their brand's cachet.
Yes this is in really poor taste. I'm sure it's very comforting to the deceased's family to know that safety features could have saved their life, had the angle of impact been different.
IMO, they should've had their explanatory intro, then the paragraph they closed with, then a gap, and then the technical explanation. As it was, it read like hand-washing to me. Especially when you openly call it AutoPilot but require hands on the wheel at all times.
If their detectors don't see a white car against a bright background, that's obviously a serious problem.
They could have titled it "Tesla Safety" -- which is what the article is about, and opened with an acknowledgement of the tragedy, instead of the tacky bait-and-switch headline.
This is why driving AI is 'all or nothing' for me.
Assisted systems will lead to drivers paying less attention as the systems get better.
The figures quoted by Tesla seem impressive, but you have to assume the majority of drivers are still paying attention all the time. As auto-pilots get better you'll see them paying attention less, and then the accident rate will go up, not down, for a while at least, until the bugs are ironed out.
Note that this could have happened to a non-electric car just as easily; it's a human-computer hybrid issue related to having to pay attention to some instrument for a long time without anything interesting happening. The longer the interval in which you don't need to act, the bigger the chance that when you do need to act you will not be in time.
This is what I've now said 3 or so times in various autopilot threads. It has to be an all or nothing thing. Part of responsible engineering is engineering out the many and varied ways that humans can fuck it all up. Look at how UX works in software. Good engineering eliminates users being able to do the wrong thing as much as possible.
You don't design a feature that invites misuse and then use instructions to try to prevent that misuse. That's irresponsible, bad engineering.
The hierarchy of hazard control [1] in fact puts administrative controls at the 2nd-to-bottom, just above personal protective equipment. Elimination, substitution and engineering controls all fall above it.
Guards on the trucks to stop cars going under are an engineering control and also perhaps a substitution - you go from decapitation to driving into a wall instead. It's better than no guards and just expecting drivers to be alert - that's administration - but it's worse than elimination, which is what you need if you provide a system where the driver is encouraged to be inattentive.
User alertness is a very fucking difficult problem to solve and an extremely unreliable hazard control. Never rely on it, ever. That's what they're doing here and it was only a matter of time that this happened. It's irresponsible engineering.
edit: My source for the above: I work in rail. We battle with driver inattention constantly because like autopilot, you don't steer but you do have to be in control. I could write novels on the battles we've gone through just to keep drivers paying attention.
[1]: https://en.wikipedia.org/wiki/Hierarchy_of_hazard_control
> I could write novels on the battles we've gone through just to keep drivers paying attention.
Please do, and link them here. I'd be very interested in reading about your battles and I figure many others would too. This is where the cutting edge is today and likely will be for years to come so your experience is extremely valuable and has wide applicability.
I understand your point that it has to be all-or-nothing, but if you were asked to redesign the UX to make autopilot (as it currently stands) safer, how would you change it?
Philip Greenspun's impressions after trying out the Model X for a weekend:
"You need to keep your hands on the steering wheel at all times during autosteering, yet not crank the wheel hard enough to generate what the car thinks is an actual steering input (thereby disconnecting autosteer). I found this to be about the same amount of effort as simply driving."
It's pretty well established (e.g. [1]) that humans have a lot of trouble paying attention when they don't need to be actively engaged most of the time, and also have trouble taking back control. In a consumer driving context, I have zero doubt that, as systems like these develop, people will start watching videos and reading, absent draconian monitoring systems to ensure they keep their eyes on the road. I'm not sure how we get past that "uncanny valley."
[1] http://hal.pratt.duke.edu/sites/hal.pratt.duke.edu/files/u7/...
It's worth pointing out that Prof. Missy Cummings, who authored the paper, is a former F/A-18 pilot who specializes in human-machine interaction research.
One option is that the Tesla autopilot should give an indication when it approaches "low confidence" areas, without disengaging, so the driver is not startled if they have to take back manual control.
In the aviation community, there are major concerns over pilots becoming over-reliant on cockpit automation instead of flying the jet.
Asiana 214 [0] is a classic example of crashing a perfectly good airliner into a seawall on landing.
In the Boeing 777, one example of the (auto)pilot interface showing safety-critical information is the stall speed indication on the cockpit display [1], warning the pilot if they are approaching that stall speed.
Hopefully Tesla will optimize the autopilot interface to minimize driver inattention, without becoming annoying.
[0] https://en.wikipedia.org/wiki/Asiana_Airlines_Flight_214
[1] http://imgur.com/bGsFTCG
In aviation, autopilots became successful because the human-machine handoff latency required is relatively large --- despite how fast planes fly, the separation between them and other objects is large and there is usually still time (seconds) to react when the autopilot decides it can't do the job ( https://www.youtube.com/watch?v=8XxEFFX586k )
On the road, where relative separation is much less (and there's even been talk of how self-driving cars can reduce following distances significantly, which just scares me more), the driver might not have even a second to react when he/she needs to take over from the autopilot.
My understanding of the Asiana crash was that the autopilot would have landed the plane fine, and that it was the humans turning it off that caused the problem.
Your point is still valid, but perhaps we approach a time when over-reliance is better than all but the best human pilots (Sully, perhaps).
Wow, that is some very damning criticism: "The distinction is that a Level 3 [Tesla] autonomous system still relinquishes the controls back to the driver in the event of extreme conditions the computer can no longer manage, which Victor (and Volvo) finds extremely dangerous."
http://jalopnik.com/volvo-engineer-calls-out-tesla-for-dange...
As someone very excited about this space, I unfortunately have to agree that Tesla is playing fast and loose with autonomous safety (and more importantly, public opinion!) to be first to market. You can't be half in and half out, which is what these "assist" features are.
They're adding new features to inadequate/improperly configured hardware for what they're asking the car to do, and waving away all liability for stupid people's actions with disclaimers (always be ready to take over).
Whether that's right or wrong is really subjective, especially when you take natural selection into account.
Tesla's (only!) radar sensor is located at the bottom of the bumper, if I'm not mistaken. Compare this with Google's, which is located in the arguably correct position, the roof. Also compare other manufacturers' solutions that are utilizing 2-3 radar sensors, as well as sonar.
https://forums.teslamotors.com/forum/forums/model-s-will-be-...
Precisely. This half-way 'self driving but if something goes wrong it's the human's fault' is a terrible idea, because the more reliable it gets (while still being a 'driver assist' rather than committing to actually being a fully autonomous control system) the more likely the human is to not be paying attention when the shit hits the fan.
Apart from the software part of it, I wonder how they handle issues like sensor malfunction.
If your eyes aren't at their best, you know well to go to a doctor and not be driving in the meanwhile. Will the car with autopilot refuse to start or go on autopilot if the camera/sensor/radar has an issue?
So yes, you are right, it's either full AI or nothing.
Toyota is taking the opposite approach to Tesla: they are introducing automated features as a backstop against human error, rather than a substitute for human attention. Your Toyota (or Lexus) won't drive itself, but it might slam on the brake or swerve to avoid an obstacle you couldn't see.
http://www.autonews.com/article/20160321/OEM11/160329991/toy...
When the "overhead sign" comes down below overhead clearance of the vehicle the signal should not be masked. There should have been some braking action in this case. If there was not then the tesla autopilot is unsafe. This is the same blind spot discussed a few months ago that caused a tesla to run into a parked trailer using summon mode:
https://twitter.com/jn_cn/status/748626793180069888
"our new Prius applied the brakes while driving under an overpass. I'm glad no one was following too close"
The Prius uses cameras not radar, and I've driven under hundreds of overpasses without incident.
More likely someone confusing the brake warning with the auto-braking shutdown warning. Because it uses optical cameras, driving into bright sunlight or into deep shadows can cause the system to shut down momentarily, and it beeps to let you know.
Even best case scenario that tweet has nothing to do with this incident because the technology and systems are different. Worst case it never applied the brakes at all, and the driver got confused.
The radar might not be able to clearly distinguish between objects at the height of the cab and objects at signage heights. Just about everything on the road is mostly on the road.
If the system can bias its sampling towards the ground, it would make a lot of sense to do so. Lump "above the ground" and "very above the ground" into the same category and use other sensors (for example, the camera) to detect the edge cases where it matters.
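As a rough illustration of that idea (to be clear, this is not Tesla's actual logic; the height threshold, the RadarReturn fields and the camera_says_sign check are all invented for the example):

    # Illustration only: "bucket by height, cross-check with the camera".
    from dataclasses import dataclass

    VEHICLE_CLEARANCE_M = 1.5   # assumed height of the car, roughly

    @dataclass
    class RadarReturn:
        range_m: float
        estimated_height_m: float   # height of the reflector above the road

    def camera_says_sign(ret: RadarReturn) -> bool:
        """Placeholder for a vision check: 'does this look like an overhead sign?'"""
        return False  # stub

    def is_brake_relevant(ret: RadarReturn) -> bool:
        # Anything at or below the car's own height is always brake-relevant.
        if ret.estimated_height_m <= VEHICLE_CLEARANCE_M:
            return True
        # Higher returns are only ignored if a second sensor agrees it is
        # genuinely overhead (a sign, a bridge), not the underside of a trailer.
        return not camera_says_sign(ret)

    print(is_brake_relevant(RadarReturn(range_m=40.0, estimated_height_m=1.2)))  # True
    print(is_brake_relevant(RadarReturn(range_m=40.0, estimated_height_m=5.0)))  # True: camera stub does not confirm "sign"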
I agree it probably can't distinguish between those heights, which is a danger either way: not recognizing going under a trailer, or accidentally braking at overpasses. Neither is very safe or passes "human level".
Of course we should have undercarriage bars regardless.
It also seems strange that the rest of the tractor trailer was not picked up by the radar when it was entering the road in the first place - the front part that is pulling the trailer is typically lower and should not have been mistaken for a road sign or otherwise.
It may have. AI isn't quite to the level of understanding "object permanence" yet. So, if you would let a 2-year-old drive, you should be fine with autopilot. The more I think about this the more I think it will set self-driving back 10 years.
Or that radar cannot detect an object moving perpendicularly in front of you regardless of overhead clearance. I assumed that auto pilot was more advanced than what is now being described. They have a lot of work to do here.
Seems odd to me that the autopilot doesn't use them together, as it would seem that the autopilot could easily see with the camera whether it was a sign, since at least in the United States signs are a fairly uniform green.
I think the likely answer is Tesla "autopilot" is a glorified cruise control + lane assist + data collection for future intelligent controls.
Google probably has close to this capability, but they are at the leading edge of image recognition.
In order to depend on it, the AI would have to be able to distinguish objects correctly 99.9+% of the time and would have to be able to tell the difference between a green truck and a road sign. I honestly think self driving cars will require intelligence indistinguishable from artificial general intelligence.
This was state of the art in 2014 (improved in 2015, but this paper is on arxiv and gives the idea):
http://arxiv.org/pdf/1409.0575
Even top-scoring Google had classification errors over 6% of the time in 2014. Object localization was much worse. Even if it is at 99%, that means 1 of every 100 objects will be misidentified. You probably identify that many random (non-car, non-sign, etc.) objects in a week of driving, maybe in a day. That kind of error is unacceptable in self-driving cars, and that is the state of the art. That is one of the reasons they are still testing and not selling self-driving cars at Google.
Better than human may not require quite 99.9+% object classification, but I do wonder how great the self-driving car records would be if someone wasn't always there to take the controls. It certainly wouldn't be "better than human" at this point.
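A toy expected-value calculation puts those error rates in context (the objects-per-hour figure below is a guess, not a measurement):

    # How often a classifier gets something wrong per hour of driving, at
    # different accuracy levels. 2,000 objects/hour is an assumed figure.
    OBJECTS_PER_HOUR = 2000

    for accuracy in (0.94, 0.99, 0.999, 0.99999):
        errors_per_hour = OBJECTS_PER_HOUR * (1 - accuracy)
        print(f"{accuracy:.5f} accurate -> ~{errors_per_hour:.2f} misclassified objects per hour")

    # Even at 99.9% that is still a couple of mistakes every hour of driving;
    # the open question is how many of them land on safety-critical objects.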
The top comment in another comment thread, which has been "duped" [0], pointed out how marking a feature in cars as "beta" is irresponsible.
What's beyond the pale IMO is that when auto-pilot was first demonstrated (at the unveil event) - "hands on the wheel" was not part of the story. Journalists and (what appeared to be) Tesla employees were using the feature without hands on the wheel. It looked like Tesla cashing-in on the positive PR without correctly framing the limitations of the tech.
Furthermore, Tesla includes sensors to map the entire surroundings of their cars, but why can't they include sensors to ensure customers have hands on the wheel? (update: comment says they do, but the check frequency is low. why can't it be high?!) It's not just the driver's life at stake, it's everyone else on the road--Tesla should disable this feature on cars [unless it ensures] drivers' hands are on the wheel. Engineers/execs at other companies taking a more responsible approach must be furious at the recklessness on display. One death is too many.
> they do, but the check frequency is low. why can't it be high?!
Because that makes it less useful. I am a Tesla owner. I am an adult capable of monitoring the car and taking control when autopilot gets confused. My hand being on the wheel at all times is neither a necessary nor a sufficient condition for verifying that I am paying attention and am ready to take over control.
> One death is too many.
I am sick and tired of absolutist statements about risk. Why do you allow cars on the road at all? Why allow cars to have cupholders? Why are drive-through restaurants legal? You make utility-risk trade-offs all the fucking time.
> My hand being on the wheel at all times is neither a necessary nor a sufficient condition for verifying that I am paying attention and am ready to take over control.
Yes it is. Set your ego aside for a second, forget about how good or bad of a driver you are, and consider how long it would take anyone to move their hand back to the wheel and take control.
Let's be generous and assume half a second. 60 mph * 0.5 seconds = 44 feet, or roughly 3 car lengths before you've even begun to re-assume control of the vehicle, let alone take the appropriate action to handle whatever is going wrong.
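The same arithmetic for a few speeds and takeover delays (plain unit conversion, nothing Tesla-specific; a car length is assumed to be about 15 feet):

    # Distance covered before the driver has even re-gripped the wheel.
    FT_PER_S_PER_MPH = 5280 / 3600.0    # 1 mph = ~1.47 ft/s
    CAR_LENGTH_FT = 15.0                # assumed

    for mph in (30, 60, 75):
        for delay_s in (0.5, 1.0, 2.0):
            feet = mph * FT_PER_S_PER_MPH * delay_s
            print(f"{mph} mph, {delay_s} s hand-back delay: {feet:5.0f} ft (~{feet / CAR_LENGTH_FT:.1f} car lengths)")

    # 60 mph with a 0.5 s delay gives the 44 ft / ~3 car lengths quoted above.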
> My hand being on the wheel at all times is neither a necessary nor a sufficient condition for verifying that I am paying attention and am ready to take over control.
Then you are not using the feature the way Tesla says you should, the only way Tesla says it's safe to be used.
> I am sick and tired of absolutist statements about risk.
Recklessly rolling out tech is screwing the industry-at-large, given the regulatory hurdles that must be overcome.
>> One death is too many.
You took this out of context. When Tesla makes no genuine, up-front attempt to educate users on how to use Auto-Pilot--yes, one death is too many.
Correctly-deployed autonomous driving stands to save thousands of lives annually; what's at risk is some overeager company @#!$ing the regulatory efforts by being irresponsible at scale.
That's the real scandal here if there is one. Tesla has been capitalizing on their autonomous driving feature, and promoting it in an unsafe way to do so (after all, it'd look a lot less impressive if all the demo videos had alert drivers with their hands on the wheel).
The truth is that autonomous driving is something every major car manufacturer could develop and demo tomorrow, but large companies like Ford/GM/Toyota are way too risk-averse (or some would say, responsible), to promote it in such a way.
Here's hoping that Tesla doesn't poison the well for autonomous driving.
This is on the road to being off topic, but still relevant given some of the commentary in this thread:
It makes me a bit sad that the political zeitgeist in the tech community is leaning towards "acceptable losses" when it comes to accidents in automated cars, to the point of pre-emptively expressing disdain at ordinary people reacting negatively to such news. I sense it's going to become harder and harder for us to talk about our worries and skepticism regarding automated driving, since the louder voices claim it will all be worth it in the end. Surely — surely — you're on the side of less death?
But personally, I find the utilitarian perspective distasteful. We're perfectly happy to let technology (literally) throw anonymous individuals under the bus as long as less people die overall, but what if it's you that gets hit by an auto? What if it's someone you care about, not Anonymous Driver On TV? The point is that humanity is not a herd to be taken as a whole; every life has rights, including the right not to be trampled by algorithmic decisions or software bugs for the betterment of all. (Sure, you could argue just as well that we have the right not to be run over by drunk and otherwise negligent drivers, but at least this kind of death is not methodical and has some legal recourse.)
I feel this perspective needs a strong voice in the tech community too, to counter the blind push forward at the expense of human lives.
Now, this isn't necessarily what happened in this case, but I find Tesla's behavior in these kinds of situations to be creepy and self-serving, at best. Is every death going to come with a blog post describing how much safer automated features are compared to human drivers? Every auto-related casualty is, and should be, a massive event, not a minus-one-point on some ledger in Elon Musk's office.
When I was in college, the I-35W bridge collapsed and many people died. In my differential equations class the next day, my professor was handing back our tests and told us–a class of engineers–that every red mark on our tests was another dead body.
You might scoff and say that's hyperbole, or scaremongering, and you might be right, but I took it as a warning. It is our responsibility as ethical engineers to ensure that people aren't harmed by the technology we create. It's true that self-driving cars will likely be safer than human-driven cars, but that doesn't absolve us of responsibility. In fact, it makes it all the more poignant.
Heard something similar in a class: ~"Your code could lose a company a couple of million dollars or get someone killed. If you're not going to be a professional, then please choose your jobs carefully or leave the profession".
I-35W was seriously bad news, plus the cell network went down.
>It makes me a bit sad that the political zeitgeist in the tech community is leaning towards "acceptable losses" when it comes to accidents in automated cars
What. "Acceptable losses" is, and has always been the rule when it comes to accidents in all cars, ever. We've made absolutely massive safety improvements in the last few decades, cutting the death rate to around 1/3 of its historical maximum, yet even so tens of thousands of people will die in America alone this year (probably in the neighborhood of ~30k). And next year. And the year after. According to the World Health Organization, there were 1.25 million road traffic deaths in 2013, and if anything I'd expect that to have continued to rise as more and more people worldwide gain access to vehicles.
Yet we will absolutely continue to support car use, because flexible mechanized point-to-point personal transportation is insanely valuable to us (and in fact American society at least simply could not function without it at all at this point). You can dress it up however you like, but the objective fact of the matter is that literally millions of deaths are considered "acceptable losses" here.
Full automation represents our absolutely best shot to reduce the horrendous, annual loss of human life. So personally, I find your appeal to emotion and thinly veiled "what about the children" shtick disgusting and immoral. You've invented imaginary "rights" that absolutely do not and should not exist. Your so called
>"blind push forward at the expense of human lives"
is literally the opposite of what automation aims to accomplish. And I will freely assert will accomplish, because humans are awful drivers. Really, automation is merely filling in the missing piece of personalized transport that we should have had from the start, except that our mechanical technological development was far ahead of our information gathering and processing technological development. Having a human handling that has always been a hack, and getting rid of it has such vast positives that it absolutely justifies a very strong push forward as soon as possible. Every year without driverless vehicles when we could have had them is literally hundreds of thousands more people dead.
This tech does not exist in a vacuum, and context absolutely matters.
> So personally, I find your appeal to emotion and thinly veiled "what about the children" shtick disgusting and immoral.
How do you counter a utilitarian argument without humanizing the situation? If my friend or loved one were killed by an ill-considered and poorly-implemented automated driving algorithm, no amount of statistics would convince me that the tradeoff was worth it. Who bears responsibility in that case? No one? I suspect that ordinary people won't accept that for a long time to come. At least with human drivers, the courts can decide culpability. At least with a human in the mix, we're all still operating within the same social and legal order.
>> And I will freely assert will accomplish, because humans are awful drivers.
If humans are such awful drivers and replacing them with automation is the moral thing to do, why is it not the moral thing to do to stop them from driving altogether, right now and for the time being, until we have automation that is better than "awful"?
If we're such rubbish drivers that we'll always kill others if we're driving, why do we allow ourselves to drive?
Acceptable losses is the only possibility, if we are to have cars at all.
Everyone is on the side of less death, but since there are some inherent dangers in zooming along at 120km/h, the losses can only approach zero and will never reach it. The fact that cars are used at all shows that we make something like a utilitarian decision already. They are very, very useful, with a small possibility of personal catastrophe. Same with air travel, drinking beer, or leaving the house ever, all to different degrees.
If the autocars are less likely to kill me than human drivers, I'm for the autocars.
> If the autocars are less likely to kill me than human drivers, I'm for the autocars.
I think this situation is a bit different than the arrival of air travel or even the automobile, because with all those technologies, humans were still making, or validating, all the life-and-death decisions (barring technical failure). I also think the kind of death we're talking about matters a whole lot. What if autos were incredibly safe, but far more prone than ordinary drivers to running over small children and pets? Would society accept that?
Many people are not comfortable with impenetrable algorithms managing their safety on a mass scale, or indeed, with making any conscious decisions at all when it comes to "trolley problem" issues. And that discomfort is well within their rights. Technological progress is not a divine edict, but something that society has to negotiate and agree upon. The debate should be respected.
> It makes me a bit sad that the political zeitgeist in the tech community is leaning towards "acceptable losses" when it comes to accidents in automated cars
Yes, this is another thing that makes me so angry about this situation.
Back in the day, I worked on a project to give autonomy to power wheelchair systems. In order to get it to market, we had to go through FDA approval, which involved testing the equipment in every conceivable scenario. We had to show how it performed in rain, snow, dust storms, sunlight, etc. and it had to be incredibly rigorous, defining all limitations, and how the user would be impacted if the systems failed.
If we came to the FDA and said "Our system works in many scenarios, but we've found a pathological case where on certain sunny days it will crash into certain brightly colored objects, resulting in almost certain death for the user" we would have been summarily rejected. Non-starter.
And yet we're willing to put a system with the above disclaimer (which should have been known to Tesla engineers. This is sensors 101.) on the market with absolutely no oversight? Seriously?
From the description of this accident it sounds as if the Tesla was traveling normally down a lane, and happened to cross UNDER a vehicle which was bisecting that lane (the precise conditions were not clear from my reading; it could be that the trailer was at a crossroads of some kind, or that it was veering across said lane).
In the case of any normal crossing or intersection the driver should have taken over manual control. In the case of a sudden deviation of other vehicles' planned paths, I have doubts the driver could have reacted in a timely and safe manner, even without the automation. Though an even more complete sensor system may have allowed the automation to detect and react faster than a human could.
The human behind the wheel had been explicitly instructed to keep hands on the wheel and feet on the pedals -- and that it was his responsibility to override the machine's judgement if circumstances warranted.
>> The point is that humanity is not a herd to be taken as a whole;
That's a great point but the truth is that we already treat human lives as a resource to be spent: for instance, we allow planes to fly and automobiles to be driven even though their safety is not perfect. We allow that, out of say a million flights, one plane will fatally crash killing usually everybody on board (I don't know what the real number is- I made the 1/1mil up).
So basically we accept that in order for each of us to fly to our destination, 300 ish people must die every so often. Like a kind of sacrifice.
I don't see how any of this is going to stop any time soon, especially when people are always quick to tell you how safe flying is, or how, although road accidents are common and people get killed in them, "what can you do, stop driving?". It seems to me that at some level we all accept that in order for us to have our fast transport, some people must die.
We actually don't allow planes to crash. Crashes are a result of human error or catastrophic failure. When a crash occurs, the cause is investigated so that another crash never happens for the same reason. If a defect is ever found, all affected planes are grounded until they have been fixed.
I expect Autopilot to be disabled soon.
Edit:
>So basically we accept that in order for each of us to fly to our destination, 300 ish people must die every so often. Like a kind of sacrifice.
Your position is...crazy.
No plane will take off if there is a calculated chance that it could crash.
Every life is special. Every life is sacred. Every life has rights and hopes and dreams and its own special contribution to humanity.
Yet, losses have to be weighed against the alternatives. Would you think it ethical to not put out seat belts, because they don't save lives in all cases? Or may even be harmful in some circumstances?
"Acceptable losses" is certainly a distasteful and dehumanizing perspective. Yet how else should we reason about seat belts? Or anything similar that is going to help save lives in the vast majority of cases, but may fail in some?
We don't always have the option of perfection. And is it really ethical to stand by as special, sacred lives end because we can't save every life, every time?
And let's recall that this isn't a choice between "human drivers, no lives lost" vs. "AI drivers, some lives lost".
It's a choice between "human drivers, MANY MANY lives lost" and "AI drivers, fewer lives lost".
What would archagon have us do? Not have AI drivers until a perfect solution is possible? The "crash-unavoidable" scenario is an unavoidable reality as long as stopping distances/acceleration/reaction times are non-zero.
It sounds as if archagon would have us spend all our time looking at trees -- and none of it looking at the forest -- all while thousands of people die every year under the status quo.
I think perhaps the reason archagon is so aghast though is a combination of two things:
1) The illusion of [human] control and judgement. Implicit in archagon's reasoning is the idea that the outcome MIGHT have been different had a human been making a decision. Yet a human WAS in control here, and DIDN'T make a different decision. That happens. A lot. Archagon seems to be so distracted by the novelty of this situation that he/she has not noticed that.
2) The normalization of automotive deaths and safety engineering in the current industry. 90 people die per day in the U.S. due to errors made by human drivers. Every day human drivers are on the road, 90 more lives are snuffed out. If we follow archagon's hand-wringing, then how could we justify having cars AT ALL? Of course, the reality that many people would die because of the lack of cars makes even the abolition of cars a less than perfect solution with (according to archagon's apparent line of reasoning) a body count attached and all the moral implications thereof.
Would we let a doctor murder a random healthy person to harvest organs to save ten lives? When we talk about weighing alternatives, it's like you say: It won't save lives in all cases, but it will decrease the loss of innocent lives overall.
The whole utilitarian "well fewer people died" ignores the distinction between me driving recklessly and killing myself, and me driving recklessly and killing someone else. Blame matters. If we can decrease the negative impact on innocent bystanders, making not-at-fault fatalities decrease, I'm not sure if I can argue against it.
"Suppose that a judge or magistrate is faced with rioters demanding that a culprit be found for a certain crime and threatening otherwise to take their own bloody revenge on a particular section of the community. The real culprit being unknown, the judge sees himself as able to prevent the bloodshed only by framing some innocent person and having him executed."
By killing one innocent, you can save many. Likewise for self-driving cars. By killing many innocents, you can save even more. If google said, "We're going to round up a bunch of people at random and murder them, but our cars will never crash," we wouldn't say "Innocent deaths have gone down and so it's acceptable." But if the same people died via malfunctions and chance, our response might change. The trolley problem isn't very palatable, but we'll need to collectively answer it.
I still don't find your perspective persuasive because I don't see why I'm not any more of an individual at risk when another human hits me, and society won't hold that human liable due to absence of mens rea. Many people get hurt or killed in a situation with no criminal element. I don't see how it's any more or less personal if I'm maimed or killed by a machine versus a bounded human mind. I also don't care to blame either, because I don't see why blaming the one person who hits or kills somebody I care about is going to protect the next person who gets hit or killed. Moral blame of the stressed cognitive system seems so useless and self-indulgent.
Is it less creepy if a caring mom or dad of 3 boys maims or kills me? Is it interesting to the victim that hypothetical Joe the Dad expresses remorse over the media and then business goes on for society as usual?
What matters to me is that less people get hurt or killed from preventable accidents (the simple-minded utilitarian perspective that does not weigh rich vs poor, young vs old, but just wants less people dying), because I understand that the next anonymous individual getting killed by either Joe the Driver or a Google / Tesla Car could also be me or someone I care about. So I really do care about the numbers.
The utilitarian perspective actually establishes a negotiable definition as its goal and it discusses the relationship between means and goals -- a position to talk about, disagree with, and measure.
Discussing how over 30,000 people die a year through the lens of Driver Joe, Dad of 3 Girls, is the same as discussing economic disparity through the lens of Plumber Joe, and it's a game that media and narrative elites play better than you, and can swing in any direction they want -- they can say, here's Plumber Joe, now support my economic narrative, whatever it is, because it helps Plumber Joe (does it? who cares about creepy data-backed perspectives?).
I feel that citizen decision-making improves with basic descriptive statistics over Plumber Joe, but they really prefer discussing things in terms of Plumber Joe, or in this case, Driver Jane, Mom of 3 Boys.
I agree that many people would prefer living in a more utilitarian society where machines have a hand in making these kinds of decisions. "Abstracting out" the damage caused by driving strictly into the domain of algorithms is surely a tempting prospect for the tech-minded, especially given that human behavior is so unpredictable and uncorrectable. But this shift into the world of semi-intelligent machines would affect all people, not just the drivers, and so it should be decided on collectively. Instead, it's at great risk of being fast-tracked onto the rest of society by companies like Tesla — irresponsibly, at that.
And this is all before even questioning the base premise — that self-driving cars will inherently be better than human drivers. Despite their many miles on the road, their combined experience pales compared to human-driven miles, especially in edge cases and harsh weather conditions. There are huge risks in introducing this technology prematurely.
Every event is truly massive. It's made worse by the fact that it's an entirely preventable problem. Surely the problem is that at the moment there are so many that car companies don't even count the minus-one points. It's minus 33,000 points per year in the US alone. [1]
What worries me is that we're dreaming up massive fear scenarios. I see the aim of the automation is to prevent the scenarios from ever happening. Sensible speeds with sensible distances.
We can never make it to having zero casualties unless we can first get to a stage where we can treat accidents as an abnormality. Something that needs to be investigated in the same manner as an aircraft crash.
I too hate terms like "acceptable losses", as they bring up the truly horrible "cost-benefit" analysis that Ford did [2], where it was cheaper to pay out to the relatives of the dead rather than fix the problem. I too worry that if we accept lower standards for software now, it will set a precedent of 'acceptable' deaths, and the bar will only get lowered.
If Tesla can apply the level of analysis that approaches airplane crash investigation as well as fixes to prevent each death - then I believe we're on the best path.
It's also worth noting that Tesla used the word 'crash' rather than 'accident' in this report, something which has been highlighted [3] as another thing we can do to improve investigations into road deaths.
> We're perfectly happy to let technology (literally) throw anonymous individuals under the bus as long as less people die overall, but what if it's you that gets hit by an auto?
If it's me that gets hit by a car, what do I care whether it was driven by squishy grey matter or solid silicon? It's not like AI-driven vehicles hurt more. I'm dead either way.
So assuming it IS going to be me that gets hit, I'd prefer we use whichever driving mechanism is less likely to be checking Facebook or texting its friends while in control of a ton of metal moving at 100km/h.
This is the first known fatality in just over 130 million miles where Autopilot was activated. Among all vehicles in the US, there is a fatality every 94 million miles.
Given that Autopilot is generally activated under safer, more routine driving scenarios -- decent weather, regular suburban traffic and so on; for which we can naturally expect significantly lower fatality rates -- it doesn't sound like it's off to a particularly good "batting average" so far. Especially since we've been promised until now that self-driving cars will ultimately be not just incrementally safer, but categorically safer than human-piloted vehicles.
Also of note: Tesla often claims that their cars perform very well in an accident. It may be that a crash that would result in a fatality in the median vehicle on the road does not produce a fatality in a Tesla. So the salient number is not "fatalities per vehicle mile among all vehicles in the US," it is "fatalities per vehicle mile among non-autopilot-driven Teslas."
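It is also worth noting how little statistical weight a single event carries. A rough exact Poisson 95% interval for "1 fatality in 130 million miles" (standard textbook formula, no claim about what the true Autopilot rate is) spans from far below to far above the human baseline:

    import math

    def poisson_upper(k, alpha=0.025, hi=50.0):
        """Smallest mean lam such that P(X <= k | lam) <= alpha, by bisection."""
        def cdf(lam):
            return sum(math.exp(-lam) * lam ** i / math.factorial(i) for i in range(k + 1))
        lo = 0.0
        for _ in range(200):
            mid = (lo + hi) / 2.0
            if cdf(mid) > alpha:
                lo = mid
            else:
                hi = mid
        return hi

    events, miles = 1, 130e6
    lower = -math.log(1 - 0.025)        # exact lower bound when exactly 1 event is observed
    upper = poisson_upper(events)       # ~5.57 expected events
    scale = 100e6 / miles               # convert to "per 100 million miles"
    print(f"95% interval: {lower * scale:.3f} to {upper * scale:.2f} fatalities per 100M miles")
    # -> roughly 0.02 to 4.3, i.e. consistent with being much better OR much worse
    #    than the ~1.13 human figure. One event is not enough data to call it either way.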
Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky,
I'm intrigued that the color is relevant in the car's case - wouldn't it be using some sort of radar to detect and map objects rather than vision? I appreciate I am probably missing something.
Tesla's Autopilot uses a radar unit, a front-facing camera, and twelve ultrasonic sensors on the front and back bumper. The ultrasonic sensors are short range and are for detecting adjacent cars and obstacles while parking, the camera is used for detecting lanes, and the camera and radar work together to detect cars.
The radar is low to the ground and probably doesn't pick up a trailer that's high off the ground. The camera could, but not if contrast is too low. (And I'm not sure if the software is able to recognize the side of a trailer anyway.)
Yeah, the height seems to be the problem. There was a "similar" accident a few months ago where a Tesla was summoned out of the garage and it hit a trailer because it was too high for the radar to get it. http://s3.amazonaws.com/digitaltrends-uploads-prod/2016/05/T...
Do you think they will/should change this? I understand the radar can't see an obstruction above x feet, but that doesn't do any good when the car is x+y feet tall.
From what I read about humans, you need +/- 30 degrees up/down vision to get a license (varies by jurisdiction). And we kill a little over a million people each year on roads. Not sure what this should imply about a car's radar though.
Autopilot uses a forward-facing camera & radar. It sounds like the truck came in at a perpendicular angle. It may have been in the cone of sight for the camera but not the radar, depending on how those are set up.
The 360 degree sensors on the Tesla are for parking only (< 20 feet range).
Personally when I worked on a self driving car we were going with 360 degree camera and doing path planning based on that, but Tesla has opted not to do that.
"Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied."
Now that's blaming the user for a design flaw. There is no excuse for failing to detect the side of a semitrailer. Tesla has a radar. Did it fail, was its data misprocessed, or is the field of view badly chosen? The single radar sensor is mounted low on the front of the vehicle, and if it lacks sufficient vertical range, might be looking under a semitrailer. That's no excuse; semitrailers are not exactly an uncommon sight on roads.
I used an Eaton VORAD radar in the 2005 Grand Challenge, and it would have seen this. That's a radar from the 1990s.
I want to see the NTSB report. The NTSB hasn't posted anything yet.
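For what it's worth, a quick geometry sketch of the "looking under the trailer" concern. The mounting height, vertical beam half-angle and trailer clearance below are guesses, not Tesla's specs:

    # Toy geometry: at what range does the top of a narrow, bumper-mounted radar
    # fan drop below the underside of a high trailer? All numbers are assumed
    # (mount height 0.5 m, +5 degrees vertical half-angle, 1.2 m trailer clearance).
    import math

    MOUNT_HEIGHT_M = 0.5
    BEAM_UP_DEG = 5.0
    TRAILER_UNDERSIDE_M = 1.2

    # Beam top height at range r: h(r) = MOUNT_HEIGHT_M + r * tan(BEAM_UP_DEG)
    crossover_m = (TRAILER_UNDERSIDE_M - MOUNT_HEIGHT_M) / math.tan(math.radians(BEAM_UP_DEG))
    speed_ms = 65 * 0.44704   # 65 mph in m/s

    print(f"beam top passes below the trailer's underside inside ~{crossover_m:.1f} m")
    print(f"at 65 mph that is the last ~{crossover_m / speed_ms:.2f} s before reaching it")
    # -> roughly the last 8 m / ~0.3 s before the trailer, exactly the window
    #    where braking matters most (illustrative numbers only).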
I personally can't wait for fully self-driving cars, but it's going to be a big PR battle in the US
Based on what?
So long as every Tesla accident makes the homepage of CNBC, unlike the millions that occur in other makers' vehicles.
Tesla puts out an immediate PR, and the breaking news that went up on all the financial sites included quotes from it nearly as fast.
http://www.cnbc.com/2016/06/30/us-regulators-investigating-t...
Already down $6/share in after hours trading.
This time, there's an interesting question. Did Tesla remotely access the crash data after the crash? Did they alter any data? Is that verifiable? The NTSB will probably explore that issue. The crash data record in an airbag controller becomes read-only when the airbag fires.
I think they did a fine job of handling this, and with the high visibility of this incident due to the use of autopilot they really had no other option.
https://www.teslamotors.com/de_DE/blog/teslas-antwort-auf-da...
Sure, if all the lawyers will promise not to take some statement out of context and sue them over that.. As long as such lawyers exist and that's the way the legal system works this is what can be expected out of statements from companies..
Personally, I would have used a more neutral title, led with sympathies for the family, and then gone into the technical detail.
Deleted Comment
If their detectors don't see a white car against a bright background, that's obviously a serious problem.
Edit: Given the down votes, I guess people really want the mental picture. Sad.
Assisted systems will lead to drivers paying less attention as the systems get better.
The figures quoted by Tesla seem impressive but you have to assume the majority of the drivers is still paying attention all the time. As auto-pilots get better you'll see them paying attention less and then the accident rate will go up, not down for a while at least until the bugs are ironed out.
Note that this could have happened to a non-electric car just as easily, it's a human-computer hybrid issue related to having to pay attention to some instrument for a long time without anything interesting happening. The longer the interval that you don't need to act the bigger the chance that when you do need to act you will not be in time.
You don't design a feature that invites misuse and then use instructions to try to prevent that misuse. That's irresponsible, bad engineering.
The heirachy of hazard control [1] in fact puts administrative controls at the 2nd-to-bottom, just above personal protective equipment. Elimination, substitution and engineering controls all fall above it.
Guards on the trucks to stop cars going under are an engineering control and also perhaps a substituion - you go from decapitation to driving into a wall instead. It's better than no guards and just expecting drivers to be alert - that's administration - but it's worse than elimination which is what you need if you provide a system where the driver is encouraged to be inattentive.
User alertness is a very fucking difficult problem to solve and an extremely unreliable hazard control. Never rely on it, ever. That's what they're doing here and it was only a matter of time that this happened. It's irresponsible engineering.
edit: My source for the above: I work in rail. We battle with driver inattention constantly because like autopilot, you don't steer but you do have to be in control. I could write novels on the battles we've gone through just to keep drivers paying attention.
[1]: https://en.wikipedia.org/wiki/Hierarchy_of_hazard_control
Please do, and link them here. I'd be very interested in reading about your battles and I figure many others would too. This is where the cutting edge is today and likely will be for years to come so your experience is extremely valuable and has wide applicability.
"You need to keep your hands on the steering wheel at all times during autosteering, yet not crank the wheel hard enough to generate what the car thinks is an actual steering input (thereby disconnecting autosteer). I found this to be about the same amount of effort as simply driving."
I thought that was an interesting observation.
http://blogs.harvard.edu/philg/2016/06/27/smug-rich-bastard-...
[1] http://hal.pratt.duke.edu/sites/hal.pratt.duke.edu/files/u7/...
One option is the Tesla autopilot should have an indication when it approaches "low confidence" areas without disengaging, so the driver is not startled if they have to take back manual control.
Asiana 214 [0] is a classic example of crashing a perfectly good airliner into a seawall on landing.
In the Boeing 777, one example of the (auto)pilot interface showing safety critical information is the stall speed indication on the cockpit display [1], warning the pilot if they are are approaching that stall speed.
Hopefully Tesla will optimize the autopilot interface to minimize driver inattention, without becoming annoying.
[0] https://en.wikipedia.org/wiki/Asiana_Airlines_Flight_214
[1] http://imgur.com/bGsFTCG
On the road, where relative separation is much less (and there's even been talk of how self-driving cars can reduce following distances significantly, which just scares me more), the driver might not have even a second to react when he/she needs to take over from the autopilot.
Your point is still valid, but perhaps we approach a time when over-reliance is better than all but the best human pilots (Sully, perhaps).
http://jalopnik.com/volvo-engineer-calls-out-tesla-for-dange...
They're adding new features to inadequate/improperly configured hardware for what they're asking the car to do, and waiving away all liability for stupid peoples' actions with disclaimers (always be ready to take over).
Whether that's right or wrong is really subjective, especially when you take natural selection into account.
Tesla's (only!) radar sensor is located at the bottom of the bumper, if I'm not mistaken. Compare this with Google's, which is located in the arguably correct position, the roof. Also compare other manufacturers' solutions that are utilizing 2-3 radar sensors, as well as sonar.
https://forums.teslamotors.com/forum/forums/model-s-will-be-...
If your eyes aren't at their best, you know well to go to a doctor and not be driving in the meanwhile. Will the car with autopilot refuse to start or go on autopilot if the camera/sensor/radar has an issue?
So yes, you are right: it's either full AI or nothing.
http://www.autonews.com/article/20160321/OEM11/160329991/toy...
"Radar tunes out what looks like an overhead road sign to avoid false braking events"
https://twitter.com/elonmusk/status/748620607105839104
EDIT:
When the "overhead sign" comes down below overhead clearance of the vehicle the signal should not be masked. There should have been some braking action in this case. If there was not then the tesla autopilot is unsafe. This is the same blind spot discussed a few months ago that caused a tesla to run into a parked trailer using summon mode:
https://news.ycombinator.com/item?id=11677760
This seems like a serious flaw in autopilot functionality. Trailers are not that rare.
I would be interested in whether the "autobrake"/"autofollow" functions of other car companies have similar problems; a rough sketch of the masking failure mode is below.
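To illustrate the failure mode being described (purely a guess at the logic, not Tesla's actual code), here is what a naive "mask anything overhead" filter looks like and why a high-riding trailer body can fall on the wrong side of it:

    VEHICLE_CLEARANCE_M = 1.5   # rough roof height of the car (assumed)

    def is_braking_relevant(return_height_m):
        # Naive rule: anything above the car's own height is "an overhead sign".
        return return_height_m <= VEHICLE_CLEARANCE_M

    # A low-mounted radar looking at the flat side of a semitrailer may get its
    # strongest returns from well above the car's roof line, so the obstruction
    # is masked even though it extends down into the windshield-height path.
    print(is_braking_relevant(1.2))   # True  -> would brake
    print(is_braking_relevant(1.9))   # False -> masked as overhead clutter

Whether Tesla's actual filter works on return height, elevation angle, or something else entirely, a hard cutoff like this must not be the only thing standing between the car and a trailer.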
This is why trailers require safety guards in the EU. Cars simply cannot get underneath them.
https://twitter.com/jn_cn/status/748626793180069888
"our new Prius applied the brakes while driving under an overpass. I'm glad no one was following too close"
More likely someone confusing the brake warning with the auto-braking shutdown warning. Because it uses optical cameras, driving into bright sunlight or into deep shadows can cause the system to shut down momentarily, and it beeps to let you know.
Even best case scenario that tweet has nothing to do with this incident because the technology and systems are different. Worst case it never applied the brakes at all, and the driver got confused.
If the system can bias its sampling towards the ground, it would make a lot of sense to do so. Lump "above the ground" and "very above the ground" into the same category and use other sensors (for example, the camera) to detect the edge cases where it matters; a rough sketch of this follows below.
Of course we should have undercarriage bars regardless.
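As a sketch of that fusion idea (class names and the decision rule are hypothetical, not any shipping system's API): only discard a high radar return if a second sensor agrees it really is overhead clutter.

    def should_brake(radar_height_m, camera_label, clearance_m=1.5):
        if radar_height_m <= clearance_m:
            return True                      # plainly in our path
        # High return: only ignore it if the camera also thinks it's overhead.
        return camera_label not in {"overhead_sign", "bridge", "gantry"}

    print(should_brake(1.8, "overhead_sign"))  # False: genuinely overhead
    print(should_brake(1.8, "trailer_side"))   # True: high return, still an obstruction

The cost is more false braking events whenever the camera is unsure, which is presumably exactly the trade-off Tesla's radar tuning was trying to avoid.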
Google probably has close to this capability, but they are at the leading edge of image recognition.
In order to depend on it, the AI would have to classify objects with 99.9+% accuracy and would have to be able to tell the difference between a green truck and a road sign. I honestly think self-driving cars will require intelligence indistinguishable from artificial general intelligence.
This was state of the art in 2014 (improved in 2015, but this paper is on arxiv and gives the idea):
http://arxiv.org/pdf/1409.0575
Even top-scoring Google had classification errors over 6% of the time in 2014, and object localization was much worse. Even at 99%, 1 in every 100 objects will be misidentified, and you probably encounter that many random (non-car, non-sign, etc.) objects in a week of driving, maybe in a day. That kind of error rate is unacceptable in a self-driving car, and it is the state of the art. That is one of the reasons Google is still testing, not selling, self-driving cars.
Better than human may not require quite 99.9+% object classification, but I do wonder how great the self-driving car records would be if someone wasn't always there to take the controls. It certainly wouldn't be "better than human" at this point.
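To put a rough number on how per-object accuracy compounds (the encounter counts here are guesses, not measurements):

    per_object_accuracy = 0.99

    for objects_seen, label in [(100, "one day of driving"), (700, "one week")]:
        p_miss = 1 - per_object_accuracy ** objects_seen
        print(f"{label}: P(at least one misclassification) = {p_miss:.2f}")

    # one day of driving: P(at least one misclassification) = 0.63
    # one week: P(at least one misclassification) = 1.00

At 99% per object you should expect roughly one misidentification per hundred objects encountered, which is why "pretty good" classification is nowhere near good enough on its own.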
What's beyond the pale, IMO, is that when Autopilot was first demonstrated (at the unveil event), "hands on the wheel" was not part of the story. Journalists and (what appeared to be) Tesla employees were using the feature without hands on the wheel. It looked like Tesla cashing in on the positive PR without correctly framing the limitations of the tech.
Furthermore, Tesla includes sensors to map the entire surroundings of their cars, but why can't they include sensors to ensure customers have hands on the wheel? (update: comment says they do, but the check frequency is low. why can't it be high?!) It's not just the driver's life at stake, it's everyone else on the road--Tesla should disable this feature on cars [unless it ensures] drivers' hands are on the wheel. Engineers/execs at other companies taking a more responsible approach must be furious at the recklessness on display. One death is too many.
Tesla Auto-pilot fail videos: https://www.youtube.com/results?search_query=tesla+autopilot...
It's incredibly unfair to other drivers on the road to let someone else use beta software that could cause a head-on-collision.
[0] - https://news.ycombinator.com/item?id=12011635
Because that makes it less useful. I am a Tesla owner. I am an adult capable of monitoring the car and taking control when autopilot gets confused. My hand being on the wheel at all times is neither a necessary nor a sufficient condition for verifying that I am paying attention and am ready to take over control.
> One death is too many.
I am sick and tired of absolutist statements about risk. Why do you allow cars on the road at all? Why allow cars to have cupholders? Why are drive-through restaurants legal? You make utility-risk trade-offs all the fucking time.
Yes it is. Set your ego aside for a second, forget about how good or bad a driver you are, and consider how long it would take anyone to move their hand back to the wheel and take control.
Let's be generous and assume half a second. 60 mph * 0.5 seconds = 44 feet, or roughly three car lengths, before you've even begun to re-assume control of the vehicle, let alone take the appropriate action to handle whatever is going wrong.
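Checking that arithmetic, and extending it to a few other speeds (the half-second figure is, as above, a generous assumption):

    FEET_PER_MILE, SECONDS_PER_HOUR = 5280, 3600

    def reaction_distance_ft(speed_mph, reaction_s):
        return speed_mph * FEET_PER_MILE / SECONDS_PER_HOUR * reaction_s

    for mph in (30, 60, 75):
        print(f"{mph} mph, 0.5 s reaction: {reaction_distance_ft(mph, 0.5):.0f} ft")

    # 30 mph: 22 ft, 60 mph: 44 ft, 75 mph: 55 ft -- before any braking begins.

And that is only the hand-back-to-wheel step; perception, decision, and braking distance all come on top of it.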
Then you are not using the feature the way Tesla says you should, the only way Tesla says it's safe to be used.
> I am sick and tired of absolutist statements about risk.
Recklessly rolling out tech is screwing the industry-at-large, given the regulatory hurdles that must be overcome.
>> One death is too many.
You took this out of context. When Tesla makes no genuine, up-front attempt to educate users on how to use Auto-Pilot--yes, one death is too many.
Correctly-deployed autonomous driving stands to save thousands of lives annually; what's at risk is some overeager company @#!$ing the regulatory efforts by being irresponsible at scale.
The truth is that autonomous driving is something every major car manufacturer could develop and demo tomorrow, but large companies like Ford/GM/Toyota are way too risk-averse (or some would say, responsible), to promote it in such a way.
Here's hoping that Tesla doesn't poison the well for autonomous driving.
They can, and the car already does that. It just does not check very frequently compared to some of the competitors.
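For what it's worth, a periodic check like that is simple to express; the hard part is choosing the interval and the escalation policy. A rough sketch (the interval, escalation steps, and torque-sensing callable are all assumptions, not Tesla's actual behaviour):

    import time

    CHECK_INTERVAL_S = 60                     # assumed: how often to demand wheel torque
    ESCALATION = ["visual_nag", "audible_chime", "slow_and_disengage"]

    def monitor_driver(hands_detected, check_interval_s=CHECK_INTERVAL_S):
        """hands_detected: callable returning True if steering torque is sensed."""
        misses = 0
        while True:
            time.sleep(check_interval_s)
            if hands_detected():
                misses = 0
                continue
            action = ESCALATION[min(misses, len(ESCALATION) - 1)]
            print(f"hands-on-wheel check failed ({misses + 1}x): {action}")
            misses += 1

Dropping CHECK_INTERVAL_S from minutes to seconds is trivial in code; the real question is how much nagging drivers will tolerate before they stop using the feature, or start defeating the check.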
It makes me a bit sad that the political zeitgeist in the tech community is leaning towards "acceptable losses" when it comes to accidents in automated cars, to the point of pre-emptively expressing disdain at ordinary people reacting negatively to such news. I sense it's going to become harder and harder for us to talk about our worries and skepticism regarding automated driving, since the louder voices claim it will all be worth it in the end. Surely — surely — you're on the side of less death?

But personally, I find the utilitarian perspective distasteful. We're perfectly happy to let technology (literally) throw anonymous individuals under the bus as long as fewer people die overall, but what if it's you that gets hit by an auto? What if it's someone you care about, not Anonymous Driver On TV? The point is that humanity is not a herd to be taken as a whole; every life has rights, including the right not to be trampled by algorithmic decisions or software bugs for the betterment of all. (Sure, you could argue just as well that we have the right not to be run over by drunk and otherwise negligent drivers, but at least that kind of death is not methodical and has some legal recourse.)

I feel this perspective needs a strong voice in the tech community too, to counter the blind push forward at the expense of human lives.
Now, this isn't necessarily what happened in this case, but I find Tesla's behavior in these kinds of situations to be creepy and self-serving, at best. Is every death going to come with a blog post describing how much safer automated features are compared to human drivers? Every auto-related casualty is, and should be, a massive event, not a minus-one-point on some ledger in Elon Musk's office.
You might scoff and say that's hyperbole, or scaremongering, and you might be right, but I took it as a warning. It is our responsibility as ethical engineers to ensure that people aren't harmed by the technology we create. It's true that self-driving cars will likely be safer than human-driven cars, but that doesn't absolve us of responsibility. In fact, it makes it all the more poignant.
I-35W was seriously bad news, plus the cell network went down.
What. "Acceptable losses" is, and has always been the rule when it comes to accidents in all cars, ever. We've made absolutely massive safety improvements in the last few decades, cutting the death rate to around 1/3 of its historical maximum, yet even so tens of thousands of people will die in America alone this year (probably in the neighborhood of ~30k). And next year. And the year after. According to the World Health Organization, there were 1.25 million road traffic deaths in 2013, and if anything I'd expect that to have continued to rise as more and more people worldwide gain access to vehicles.
Yet we will absolutely continue to support car use, because flexible mechanized point-to-point personal transportation is insanely valuable to us (and in fact American society at least simply could not function without it at all at this point). You can dress it up however you like, but the objective fact of the matter is that literally millions of deaths are considered "acceptable losses" here.
Full automation represents our absolute best shot at reducing the horrendous annual loss of human life. So personally, I find your appeal to emotion and thinly veiled "what about the children" shtick disgusting and immoral. You've invented imaginary "rights" that absolutely do not and should not exist. Your so-called
>"blind push forward at the expense of human lives"
is literally the opposite of what automation aims to accomplish. And, I will freely assert, will accomplish, because humans are awful drivers. Really, automation is merely filling in the missing piece of personalized transport that we should have had from the start, except that our mechanical technology developed far ahead of our information-gathering and processing technology. Having a human handle that part has always been a hack, and getting rid of it has such vast positives that it absolutely justifies a very strong push forward as soon as possible. Every year without driverless vehicles, when we could have had them, is literally hundreds of thousands more people dead.
This tech does not exist in a vacuum, and context absolutely matters.
How do you counter a utilitarian argument without humanizing the situation? If my friend or loved one were killed by an ill-considered and poorly-implemented automated driving algorithm, no amount of statistics would convince me that the tradeoff was worth it. Who bears responsibility in that case? No one? I suspect that ordinary people won't accept that for a long time to come. At least with human drivers, the courts can decide culpability. At least with a human in the mix, we're all still operating within the same social and legal order.
If humans are such awful drivers and replacing them with automation is the moral thing to do, why is it not the moral thing to do to stop them from driving altogether, right now and for the time being, until we have automation that is better than "awful"?
If we're such rubbish drivers that we'll always kill others when we drive, why do we allow ourselves to drive at all?
Everyone is on the side of less death, but since there are some inherent dangers in zooming along at 120km/h, the losses can only approach zero and will never reach it. The fact that cars are used at all shows that we make something like a utilitarian decision already. They are very, very useful, with a small possibility of personal catastrophe. Same with air travel, drinking beer, or leaving the house ever, all to different degrees.
If the autocars are less likely to kill me than human drivers, I'm for the autocars.
I think this situation is a bit different than the arrival of air travel or even the automobile, because with all those technologies, humans were still making, or validating, all the life-and-death decisions (barring technical failure). I also think the kind of death we're talking about matters a whole lot. What if autos were incredibly safe, but far more prone than ordinary drivers to running over small children and pets? Would society accept that?
Many people are not comfortable with impenetrable algorithms managing their safety on a mass scale, or indeed, with making any conscious decisions at all when it comes to "trolley problem" issues. And that discomfort is well within their rights. Technological progress is not a divine edict, but something that society has to negotiate and agree upon. The debate should be respected.
Yes, this is another thing that makes me so angry about this situation.
Back in the day, I worked on a project to give autonomy to power wheelchair systems. In order to get it to market, we had to go through FDA approval, which involved testing the equipment in every conceivable scenario. We had to show how it performed in rain, snow, dust storms, sunlight, etc. and it had to be incredibly rigorous, defining all limitations, and how the user would be impacted if the systems failed.
If we had come to the FDA and said, "Our system works in many scenarios, but we've found a pathological case where, on certain sunny days, it will crash into certain brightly colored objects, resulting in almost certain death for the user," we would have been summarily rejected. Non-starter.
And yet we're willing to put a system with the above disclaimer (which should have been known to Tesla engineers. This is sensors 101.) on the market with absolutely no oversight? Seriously?
In the case of any normal crossing or intersection, the driver should have taken over manual control. In the case of a sudden deviation from other vehicles' planned paths, I doubt the driver could have reacted in a timely and safe manner even without the automation, though an even more complete sensor system might have allowed the automation to detect and react faster than a human could.
The human behind the wheel had been explicitly instructed to keep hands on the wheel and feet on the pedals -- and that it was his responsibility to override the machine's judgement if circumstances warranted.
That's a great point, but the truth is that we already treat human lives as a resource to be spent: for instance, we allow planes to fly and automobiles to be driven even though their safety is not perfect. We accept that, out of say a million flights, one plane will fatally crash, usually killing everybody on board (I don't know what the real number is; I made the one-in-a-million up).
So basically we accept that in order for each of us to fly to our destination, 300 ish people must die every so often. Like a kind of sacrifice.
I don't see how any of this is going to stop any time soon, especially when people are always quick to tell you how safe flying is, or how, although road accidents are common and people get killed in them, "what can you do, stop driving?". It seems to me that at some level we all accept that in order for us to have our fast transport, some people must die.
I expect Autopilot to be disabled soon.
Edit:
>So basically we accept that in order for each of us to fly to our destination, 300 ish people must die every so often. Like a kind of sacrifice.
Your position is...crazy.
No plane will take off if there is a calculated chance that it could crash.
Yet losses have to be weighed against the alternatives. Would you think it ethical not to install seat belts, because they don't save lives in all cases? Or may even be harmful in some circumstances?
"Acceptable losses" is certainly a distasteful and dehumanizing perspective. Yet how else should we reason about seat belts? Or anything similar that is going to help save lives in the vast majority of cases, but may fail in some?
We don't always have the option of perfection. And is it really ethical to stand by as special, sacred lives end because we can't save every life, every time?
It's a choice between "human drivers, MANY MANY lives lost" and "AI drivers, fewer lives lost".
What would archagon have us do? Not have AI drivers until a perfect solution is possible? The "crash-unavoidable" scenario is an unavoidable reality as long as stopping distances/acceleration/reaction times are non-zero.
It sounds as if archagon would have us spend all our time looking at trees -- and none of it looking at the forest -- all while thousands of people die every year under the status quo.
I think perhaps the reason archagon is so aghast though is a combination of two things:
1) The illusion of [human] control and judgement. Implicit in archagon's reasoning is the idea that the outcome MIGHT have been different had a human been making a decision. Yet a human WAS in control here, and DIDN'T make a different decision. That happens. A lot. Archagon seems to be so distracted by the novelty of this situation that he/she has not noticed that.
2) The normalization of automotive deaths and safety engineering in the current industry. 90 people die per day in the U.S. due to errors made by human drivers. Every day human drivers are on the road, 90 more lives are snuffed out. If we follow archagon's hand-wringing, then how could we justify having cars AT ALL? Of course, the reality that many people would die because of the lack of cars makes even the abolition of cars a less than perfect solution with (according to archagon's apparent line of reasoning) a body count attached and all the moral implications thereof.
In that case, it comes back to an old philosophical question (the trolley problem https://en.wikipedia.org/wiki/Trolley_problem). A framing I like is as follows:
"Suppose that a judge or magistrate is faced with rioters demanding that a culprit be found for a certain crime and threatening otherwise to take their own bloody revenge on a particular section of the community. The real culprit being unknown, the judge sees himself as able to prevent the bloodshed only by framing some innocent person and having him executed."
By killing one innocent, you can save many. Likewise for self-driving cars. By killing many innocents, you can save even more. If google said, "We're going to round up a bunch of people at random and murder them, but our cars will never crash," we wouldn't say "Innocent deaths have gone down and so it's acceptable." But if the same people died via malfunctions and chance, our response might change. The trolley problem isn't very palatable, but we'll need to collectively answer it.
Is it less creepy if a caring mom or dad of 3 boys maims or kills me? Is it interesting to the victim that hypothetical Joe the Dad expresses remorse over the media and then business goes on for society as usual?
What matters to me is that fewer people get hurt or killed in preventable accidents (the simple-minded utilitarian perspective that does not weigh rich vs. poor or young vs. old, but just wants fewer people dying), because I understand that the next anonymous individual killed by either Joe the Driver or a Google/Tesla car could also be me or someone I care about. So I really do care about the numbers.
The utilitarian perspective actually establishes a negotiable definition as its goal and it discusses the relationship between means and goals -- a position to talk about, disagree with, and measure.
Discussing how over 30,000 people die a year through the lens of Driver Joe, Dad of 3 Girls, is the same as discussing economic disparity through the lens of Plumber Joe, and it's a game that media and narrative elites play better than you, and can swing in any direction they want -- they can say, here's Plumber Joe, now support my economic narrative, whatever it is, because it helps Plumber Joe (does it? who cares about creepy data-backed perspectives?).
I feel that citizen decision-making improves with basic descriptive statistics over Plumber Joe, but they really prefer discussing things in terms of Plumber Joe, or in this case, Driver Jane, Mom of 3 Boys.
And this is all before even questioning the base premise — that self-driving cars will inherently be better than human drivers. Despite their many miles on the road, their combined experience pales compared to human-driven miles, especially in edge cases and harsh weather conditions. There are huge risks in introducing this technology prematurely.
What worries me is that we're dreaming up massive fear scenarios. I see the aim of the automation is to prevent the scenarios from ever happening. Sensible speeds with sensible distances.
We can never make it to having zero casualties unless we can first get to a stage where we can treat accidents as an abnormality. Something that needs to be investigated in the same manner as an aircraft crash.
I too hate terms like "acceptable losses", as it brings up the truly horrible "cost-benefit" analysis that Ford did [2], in which it was cheaper to pay out to the dead relatives than to fix the problem. I also worry that if we accept lower standards for software now, it will set a precedent of "acceptable" deaths, and those standards will only get lowered.
If Tesla can apply the level of analysis that approaches airplane crash investigation as well as fixes to prevent each death - then I believe we're on the best path.
It's also worth noting that Tesla used the word 'crash' rather than 'accident' in this report, something which has been highlighted [3] as another thing we can do to improve investigations into road deaths.
[1]: http://blog.yhat.com/posts/traffic-fatalities-in-us.html
[2]: https://en.wikipedia.org/wiki/Ford_Pinto#Cost-benefit_analys...
[3]: http://www.citylab.com/commute/2015/09/why-we-say-car-accide...
If it's me that gets hit by a car, what do I care whether it was driven by squishy grey matter or solid silicon? It's not like AI-driven vehicles hurt more. I'm dead either way.
So assuming it IS going to be me that gets hit, I'd prefer we use whichever driving mechanism is less likely to be checking Facebook or texting its friends while in control of a ton of metal moving at 100km/h.
Given that Autopilot is generally activated under safer, more routine driving scenarios -- decent weather, regular suburban traffic and so on; for which we can naturally expect significantly lower fatality rates -- it doesn't sound like it's off to a particularly good "batting average" so far. Especially since we've been promised until now that self-driving cars will ultimately be not just incrementally safer, but categorically safer than human-piloted vehicles.
I would like to see how many accidents would have happened under Autopilot, but were only prevented because of the human.
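A rough sketch of the base-rate problem. All of these numbers are approximate or outright assumed: roughly one fatality in the ~130 million Autopilot miles Tesla cites, roughly 1.1 fatalities per 100 million miles across all US driving, and a guessed factor for how much safer limited-access highway miles are than the average mile:

    autopilot_fatalities_per_mile = 1 / 130e6          # Tesla's cited figure, n = 1
    all_driving_fatalities_per_mile = 1.1 / 100e6      # approximate US average
    highway_discount = 0.5                             # assumption: highway miles ~2x safer

    comparable_human_rate = all_driving_fatalities_per_mile * highway_discount
    print(f"Autopilot: {autopilot_fatalities_per_mile * 1e8:.2f} per 100M miles")
    print(f"Human, comparable conditions: {comparable_human_rate * 1e8:.2f} per 100M miles")

    # Autopilot: 0.77 per 100M miles
    # Human, comparable conditions: 0.55 per 100M miles

With a sample size of one fatality the error bars swamp everything, but the sketch shows how the comparison can flip once you stop measuring highway-only Autopilot miles against the all-roads average.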
I'm intrigued that the color is relevant in the car's case - wouldn't it be using some sort of radar to detect and map objects rather than vision? I appreciate I am probably missing something.
The radar is low to the ground and probably doesn't pick up a trailer that's high off the ground. The camera could, but not if contrast is too low. (And I'm not sure if the software is able to recognize the side of a trailer anyway.)
From what I read about humans, you need +/- 30 degrees up/down vision to get a license (varies by jurisdiction). And we kill a little over a million people each year on roads. Not sure what this should imply about a car's radar though.
The 360 degree sensors on the Tesla are for parking only (< 20 feet range).
Personally, when I worked on a self-driving car, we went with a 360-degree camera and did path planning based on that, but Tesla has opted not to do that.
It's going to get better, but it's crazy to assume it's currently much more than a highly-assistive cruise control.
Now that's blaming the user for a design flaw. There is no excuse for failing to detect the side of a semitrailer. Tesla has a radar. Did it fail, was its data misprocessed, or is the field of view badly chosen? The single radar sensor is mounted low on the front of the vehicle, and if it lacks sufficient vertical range, might be looking under a semitrailer. That's no excuse; semitrailers are not exactly an uncommon sight on roads.
I used an Eaton VORAD radar in the 2005 Grand Challenge, and it would have seen this. That's a radar from the 1990s.
I want to see the NTSB report. The NTSB hasn't posted anything yet.
Tesla is under a regulatory requirement to report all relevant mishaps to the NHTSA, even if they occur outside the US.
[0] http://www.ntsb.gov/investigations/AccidentReports/Pages/hig...
It seems to be on their radar now,
http://www.detroitnews.com/story/business/autos/2016/07/08/n...