Tesla: "the incident occurred as a result of the driver not being properly attentive to the vehicle's surroundings while using the Summon feature or maintaining responsibility for safely controlling the vehicle at all times."
That's the "deadly valley" I've written about before - enough automation to almost work, not enough to avoid trouble, and expecting the user to take over when the automation fails. That will not work. Humans need seconds, not milliseconds, to react to complex unexpected events. Google's Urmson, who heads their automatic driving effort, makes this point in talks.
There is absolutely no excuse for an autonomous vehicle hitting a stationary obstacle. If that happened, Tesla's sensors are inadequate and/or their vision system sucks.
Precisely. People can harp on about 'oh but it says to pay attention whilst using it' and 'oh but it's still in beta' all they want, but good engineering in the real world, where shit kills people regularly, means engineering out all possible human-machine failure modes.
This is the result of applying 'she'll be right' cavalier software engineering attitudes to real-world engineering (civil, mechanical, structural, etc.), because in the vast majority of cases a software mistake is completely erasable and is fixed with a simple refresh. Agile doesn't work in the real world. It hasn't ever worked in the real world, and that's why waterfall approaches and FEED are so ingrained - they've proven to be the best way to avoid killing people and creating gargantuan cock-ups.
Tesla should not include features that are 'beta' in a car. Tesla should not include features that promote inattentive operation of the vehicle if they are not absolutely, several-sigma robust. It doesn't matter how many disclaimers you slap on it or how cool and futuristic it is, the feature fundamentally encourages people to stand away from their vehicle, press a button, do something else and have it magically arrive next to them. This is bad engineering design no matter how you try to spin it as 'innovation' and doesn't cut it in the real world.
edit: The entire aviation industry is a textbook in this concept.
- 7 million vehicles from multiple brands were equipped with Takata airbags that could blast out metal shards (2 deaths and 30 injuries reported).
- 30 million GM vehicles were recalled for faulty ignition switches that could shut down the engine while driving, plus prevent the airbag from deploying (at least 124 deaths in accidents where that happened).
"Stericycle, a recall consultant and service firm for automakers, said there have been 544 separate recalls announced [in 2014]"
The funny thing is that, for decades, there've been jokes back and forth about "if automotive engineering was like software development".
Now we're seeing what it's really like when automotive engineering and software development come together. And it's sometimes really similar to the jokes. :P
If the statistical reliability is better than a human driver, then this is good engineering.
"Man bumps car into trailer" would not have been a headline, because this happens so often it's completely boring. Notably the cost of failure here is property damage. If the car were moving fast enough to endanger human life, then a driver would be behind the wheel, and fully responsible by law and common sense for the motion of the vehicle.
These headlines are going to become more common as autopiloted cars become more popular. It is important to frame them in the context of transitioning from a system that is also unreliable-- the human nervous system.
We should also expect to see a few machine failures in specific situations where a human could have avoided damage, but we must also consider them against the easily avoidable mistakes that humans make every day, which a machine can avoid with near 100% reliability.
People are afraid of airplane crashes because they're dramatic, scary, newsworthy, and out of the passengers' control, but the complete story is that stepping on a plane is a safer activity than driving to the airport. Headlines about autopilot failures will get clicks for the same reasons, but if not framed with statistics, it's just noise.
In other words, move fast and break things doesn't apply to shit that can kill people when it does break. Many of SpaceX's problems seem to stem from the same mentality. I'm a fan of both, fwiw.
It bothers me, though, that the standard for automation is that it must /never/ hit a parked car, not "at least as good as the average human" or "at least as good as the 95th percentile human" etc.; I don't know enough to judge what's going on in this situation, but if the technology saves more lives/property/etc than it damages, IMO it's worth adopting.
Agreed. Zero tolerance (or 100% reliability) necessarily has infinite cost and/or takes infinite time. We need to be reasonable about our expectations for autonomous systems.
That said, what's the likely accident rate for a 95th percentile human, starting in a parked car, hitting another stationary vehicle parked directly in front of them? There must be a few "accidentally put it in drive instead of reverse" type incidents, but I'd expect it to be exceedingly rare.
Statistical analysis fails for small samples. In a single case, it is never possible to determine what would have happened if a human had been controlling the wheel instead of the autonomous system. Either way, accidents will happen, even if rarely. When a human is at the wheel, the punishment meted out to that human acts as a signal, to them and to others, that they have to be more careful in how they control the vehicle. Therefore, in the case of any accident caused by Tesla's autonomous system, Tesla (or any other company providing autonomous control) should be made to shoulder the blame - so that they are prodded not only to make their systems more robust, but also to design the system to ask for human intervention when it senses it cannot make a good judgement in the conditions.
True and I generally agree. But hitting a parked car I would expect to be extraordinarily rare for an autonomous vehicle. Isn't that the most basic test?
First, no one expects automation to be perfect, but people do have a reasonable expectation of it being much better than an average human driver. Most accidents (in good weather conditions) happen when drivers are distracted, tired or sick. This does not apply to an automated system, and even when something goes wrong the system should go into a fail-safe mode (in this case - stop).
Second, "at least as good as the average human" is a bad benchmark. Not because an average human is so bad at driving, but because people make high-level judgements about acceptable risks. For example, you are much, much less likely to dent your bosses Porch than some random car. AI is equally likely to hit either.
The human average is around 185 crashes and 1 fatality per 100 million miles, which is pretty damn impressive considering the huge variation in terrain and skill. I'll be very surprised if any self-driving tech right now can even dream of coming close to these stats.
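To put numbers on the "frame it with statistics" point, here is a rough back-of-the-envelope sketch (simple Poisson assumption, using the ~1 fatality per 100 million miles figure above) of how many autonomous miles would be needed before even a perfect record says anything:

    # Back-of-the-envelope: miles of fatality-free autonomous driving needed
    # before claiming "better than the human baseline" at 95% confidence.
    # Assumes fatalities are rare, independent events (Poisson) and uses the
    # ~1 fatality per 100 million miles figure quoted above.
    import math

    HUMAN_FATALITY_RATE = 1 / 100e6  # fatalities per mile

    def miles_needed(baseline_rate, confidence=0.95):
        # "Rule of three": with zero observed events, the upper confidence
        # bound on the rate only drops below the baseline after this many miles.
        return -math.log(1.0 - confidence) / baseline_rate

    print(miles_needed(HUMAN_FATALITY_RATE) / 1e6)  # ~300 (million miles)

In other words, hundreds of millions of fleet miles are needed before a fatality-rate comparison is meaningful, which is also why a single incident (or a single save) tells us almost nothing either way.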
It bothers me, that Tesla can claim "beta" on a feature they've enabled on real-world consumer cars. This isn't about never hitting anything, this is about dodging liability by claiming the feature shouldn't have been used.
If it's available to a regular consumer (as opposed to, say, a test driver), it's deployed and will be used.
Reminds me of Air France 447. When you train people to do a task using a robot, and they're used to the robot doing it 90% of the time, bad things can happen when the robot decides it's had enough and hands back control unexpectedly.
http://www.vanityfair.com/news/business/2014/10/air-france-f...
Asiana 214 [0] is also relevant. The aircraft went into a mode that partly disabled the auto-throttle. The pilot expected the engines to spool up automatically, but they didn't.
A key factor was that Asiana pilots were actively discouraged from hand-flying the jet throughout most of the approach. Younger pilots are sometimes referred to as "Children of the Magenta Line" [1] because of their over-reliance on the LCD flight director to fly the jet.
A similar situation could easily occur with Tesla's Autopilot, which works as expected 99.9% of the time, with the driver caught off guard in the 0.1% of the time it doesn't work properly, causing a crash.
[0] https://en.wikipedia.org/wiki/Asiana_Airlines_Flight_214
[1] http://99percentinvisible.org/episode/children-of-the-magent...
The AF447 story is amazing. The pilots were pushing their controls in opposite directions and the fly-by-wire system would cancel those inputs out. Meanwhile the plane was plunging down toward the ocean, and the captain was asleep until the last 10 seconds before the crash.
I think an autopilot confidence/mood light should be fitted to such vehicles. That way you could predict what the vehicle is about to do.
https://www.youtube.com/watch?v=pN41LvuSz10
It's not an autonomous vehicle. This mode does operate without a human in the driver's seat, but the human is still expected to observe and intervene if things go wrong. To accommodate the special nature of this, speeds are limited to 1MPH for this particular feature, and the car will move a maximum of 39ft before ending the maneuver.
The problem here is that it's too easy to activate the feature by accident and it's not sufficiently clear when you do so. IMO it needs another confirmation step on the touchscreen after double-clicking the Park button before it goes into the "auto-park after closing the door" mode. It's not a sensor problem; the sensors aren't intended to be foolproof here, they're just a backup.
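It's not like the Summon mode is simply disengaging the brake and letting the car roll around randomly.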
>the human is still expected to observe and intervene if things go wrong. To accommodate the special nature of this, speeds are limited to 1MPH for this particular feature, and the car will move a maximum of 39ft before ending the maneuver
That's what it does? And it can crash into things? How was this hailed as ground-breaking technology? When it was announced it was on the front page of every technology site.
This is a basic UX failure. If you press the "Park" button twice instead of once, the Autopark dialog appears asking you to select forward or backward parking. But what's not clearly communicated is that if you don't select either option, forward is automatically selected and Autopark is turned on. This is in contrast with all previous autopark features, which required manual confirmation on the touch screen.
It's too easy for a driver to be momentarily distracted. Tesla should require the driver to opt in to self parking on the touch screen, rather than the current behavior which requires them to opt out. This seems only prudent for a feature that makes the vehicle suddenly move on its own.
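As a concrete illustration of the difference (purely hypothetical logic, not Tesla's actual implementation), an opt-in flow never moves the car on a timeout, only on an explicit selection:

    # Hypothetical sketch of opt-out vs. opt-in activation. Not Tesla's code;
    # it just illustrates why defaulting to "forward" on a timeout is risky.
    OPT_OUT_DEFAULT = "forward"  # current behavior as described above

    def autopark_direction_opt_out(selection, timeout_expired):
        # Opt-out: silence is treated as consent and the car will move.
        if selection in ("forward", "backward"):
            return selection
        return OPT_OUT_DEFAULT if timeout_expired else None

    def autopark_direction_opt_in(selection):
        # Opt-in: no explicit confirmation on the touchscreen, no movement.
        return selection if selection in ("forward", "backward") else None

    assert autopark_direction_opt_in(None) is None               # stays parked
    assert autopark_direction_opt_out(None, True) == "forward"   # silence becomes motion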
The sensors really aren't ready for it. For example, when I tested it parallel parked to a curb, the Model S decided to turn the wheels on its own and ended up scraping the rear wheel because it ran into the curb instead of going straight as it was originally aimed.
The current sensors and software are absolutely not ready for true self driving, that much is clear to me after driving the S for 6 months.
10+ years ago, for the Grand Challenge, it cost some money (and good luck getting decent-resolution stereo with decent FPS from a pair of 1-megapixel sensors, so most teams relied on lidar - $3K and you have minimally decent 3D of the scene ahead). Today tens-of-megapixel sensors cost next to nothing, along with the CPU power to process them. One can have reasonable infrared too. Ultrasound sensors - cost nothing. Short-distance lidar costs close to nothing too. Millimeter-wave radar still probably costs a bit, just because there's no mass production. When I look at Google's cars - the Lexus SUVs - they have at least a minimally reasonable set of sensors. Nobody else comes even close. I don't understand why.
I'm not sure if the biggest challenge is sensors or software. I don't know what Tesla is running, but I have a strong feeling that software in the large is not up to the task of autonomous driving. Most software (including automotive) has only very limited realtime behavior due to memory allocation, OS preemption, interrupts, and so on; error-prone programming languages are used, and resource (memory) usage is often unbounded. I can't imagine that Tesla or anybody else producing self-driving features at the moment is using something like Ada Ravenscar or advanced static validation techniques through all the components involved in the self-driving features - components which are often quite complicated (image recognition, etc.) and therefore hard to run in such an environment.
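For what it's worth, the usual pattern in safety-critical control loops is a deadline watchdog: if the perception/control cycle overruns its time budget, the system fails safe rather than acting on stale data. A toy sketch (in Python purely for readability; a real implementation would live in a constrained realtime environment, and the 50 ms budget is an assumption):

    # Toy deadline-watchdog sketch: miss the loop budget -> command a stop.
    import time

    LOOP_BUDGET_S = 0.050  # assumed 50 ms control-loop deadline

    def control_loop(read_sensors, plan_motion, actuate, stop):
        while True:
            start = time.monotonic()
            frame = read_sensors()
            command = plan_motion(frame)
            if time.monotonic() - start > LOOP_BUDGET_S:
                # Deadline miss: the world model is stale, so stop rather than
                # execute a command computed from out-of-date data.
                stop()
            else:
                actuate(command)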
Totally agree. I'm really sad to see Tesla jump the gun on this one and claim they have "autopilot". It seems similar to the debate around landing New Shepard / Falcon 9, except here the false claims and half-baked implementation could set self-driving cars back by another 5 years.
Are you worried about them setting self-driving back by 5 years versus a base case where Tesla didn't exist, or setting it back 5 years versus the 20 or so years of advancing and popularizing the possibilities that they've done?
Someone in the comments on the article posted an image of the trailer; it was pretty tall and the Tesla probably doesn't have sensors at this height:
http://img.ksl.com/slc/2590/259060/25906051.JPG
The sensors are inadequate for this particular situation. The problem for the car was that the obstacle was about five feet off the ground. The parking sensors aren't adequate to deal with something floating in the air like that, so the car was oblivious to the fact that anything was there. There is a camera that could have seen it, but Summon apparently doesn't use the cameras, only the sensors.
But with regards to how "humans need seconds, not milliseconds, to react to complex unexpected events": That actually doesn't seem to be a problem in this situation. This human evidently had seconds to deal with it. What appears to have happened here is that the guy somehow accidentally activated the feature, ignored the alert that came up, got out of the car and stood there as the car very slowly edged toward the trailer.
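Seems like something of an oversight.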
So Summon mode, which Tesla itself advertises as, amongst other things, a way by which your car can put itself in the garage (https://www.teslamotors.com/blog/summon-your-tesla-your-phon...), can't see objects that are not 'on the ground' but in the air, like, say, a Garage Door...
You are correct. Summon can be improved by using the camera as well as other sensors. I expect we'll see that in the future.
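A toy illustration of the coverage gap being discussed, with invented numbers (the real sensor geometry is not public): an obstacle whose lowest point sits above the bumper-level ultrasonic band is simply outside what those sensors can report, while a windshield-mounted camera could cover it.

    # Invented numbers, purely illustrative of the vertical-coverage argument.
    ULTRASONIC_BAND_M = (0.2, 0.8)  # assumed coverage of bumper-level sensors
    CAMERA_BAND_M = (0.5, 2.5)      # assumed coverage of a windshield camera

    def band_sees(obstacle_low_m, obstacle_high_m, band):
        lo, hi = band
        return obstacle_high_m > lo and obstacle_low_m < hi

    trailer = (1.5, 2.0)  # overhanging trailer bed roughly five feet up
    print(band_sees(*trailer, ULTRASONIC_BAND_M))  # False: ultrasonics miss it
    print(band_sees(*trailer, CAMERA_BAND_M))      # True: a camera could catch it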
The other problem is a UI issue, from my perspective. Summon will automatically move forward after two presses of the stalk (one press is Park). Since that's so easy to confuse with a park command, I think it would be imperative to have the user select "forward" or "backward" for Summon to start, instead of assuming "forward". Or otherwise give a better indication that the Summon feature has been initiated.
This is what's called Artificial Stupidity. AI will never be achieved because fundamentally, a computer only does what it's told. There will be plenty of AS in the near future, though, due to misapplication of technology and wishful thinking.
That said, sure, it should be able to stop on its own, but I think they couldn't have been more clear that this is beta and the driver is still always responsible.
In my view the driver is just as liable as someone who puts cruise control on and doesn't pay attention. Is it the manufacturer's fault the car slammed into the vehicle in front of it while cruise control was on? No, I think any reasonable person would say it's the driver's fault.
It's funny, I honestly had an entirely opposite reaction to that letter. It seems like a reasonable thing for the software developers to look at to confirm the systems were working as specified, but leaves me with a ton of questions about how this feature was designed overall.
It sounds like the only two mistakes that the guy made were to make a double press of the park button (which, as far as mistakes go, isn't the most unreasonable thing to do), and to assume that a car told to be in park would, well, be in park. He ignored warnings, yes, but he was likely worried about getting out of the car and doing other things by that point, which isn't wildly unreasonable either.
Summon mode is not turned on by default. That's the damning thing here. He turned on the ability to use Summon mode by hand, purposefully. At that point he should know the responsibility that comes with that. That's directly akin to manually disabling traction control on a normal car. At that point, you can't blame the manufacturer for losing control. You held the button down for three seconds and it dinged and the dashboard light came on and you KNEW that it would disable traction control.
He enabled Summon mode through the menu, then either accidentally or on purpose triggered Summon mode, then stepped out of the car and the car drove off. He shouldn't have assumed the car was in park when he knew that hitting the park button twice would activate this beta software that he had to manually enable in the first place.
He used Summon mode on purpose and didn't pay attention to its limitations or the rules saying "only use this on private property". The only blame Tesla has is selling a car to an irresponsible driver.
Interesting reading the follow up. This really really makes you wonder about the motives of the (non)driver. I agree with the various safety folks that the existing anti-collision features should take precedence over the summon feature so if nothing else I hope someone is back there re-ordering their subsumption behaviors to effect that.
Tesla has a big target painted on its back moving as fast as it is, and people will take advantage of that. This smells of that sort of thing but one can never know without being there. It looks pretty clear this person isn't going to get any sympathy from Tesla.
>> I agree with the various safety folks that the existing anti-collision features should take precedence over the summon feature
Well, actually this is like executing something as a sudo user. The question of safety precedence doesn't arise because you have explicitly asked for it to be disabled. Complaining that the safety features should still have taken precedence is naive; it's actually more like trying to dump the blame on somebody else for what is very clearly your own mistake.
I'm reminded of the IT security talk ("Ugly bags of water"), where the first dozen or so slides were about how the automotive industry had to make a lot of changes to account for humans' error-proneness.
I guess now that Software is sneaking back into automobiles, we're going to shift back into "blame the user" mode?
"But he used the feature wrong!" - To which I say, why was he able to use the feature the wrong way in the first place? Why are there so many limitations (won't sense a bike; won't sense a partially opened garage door) on a feature designed and advertised as "hands off summoning of the vehicle"?
I think the liability shifts a little bit because when you're in cruise control you're literally behind the steering wheel. In this case you're outside the vehicle.
I think as more autonomous features get developed it's going to be complicated for some time regarding who is culpable for a given accident.
Even outside the vehicle you're still in control. By default, Summon can only be used with the mobile app and with a 'dead man switch': lift your finger off the button in the app and the car stops. The driver had to specifically disable that protection to use the feature the way he did, and now claims no responsibility. Also, pressing any button on the key stops the car. Seems like a whole host of bad decisions.
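The "dead man's switch" is essentially a heartbeat: the app keeps telling the car the button is still held, and the car stops the moment the heartbeats stop. A sketch of the idea (hypothetical protocol and timeout value, not Tesla's actual implementation):

    # Heartbeat-style dead man's switch, sketched with an assumed timeout.
    import time

    HEARTBEAT_TIMEOUT_S = 0.5  # assumed value

    class DeadMansSwitch:
        def __init__(self):
            self._last_heartbeat = None

        def on_heartbeat(self):
            # Called each time the app reports "button still held".
            self._last_heartbeat = time.monotonic()

        def may_move(self):
            # The car may only creep while heartbeats are fresh; any gap
            # (finger lifted, app crash, dropped connection) halts it.
            if self._last_heartbeat is None:
                return False
            return time.monotonic() - self._last_heartbeat < HEARTBEAT_TIMEOUT_S

The nice property is that every failure mode - finger lifted, app crash, lost connection - defaults to "stop".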
My personal opinion is that it's your property and you're responsible for it no matter what. To me, it's not really any different than a person's dog biting another person or a tree falling on his neighbor's house. It's likely the owner did not intend for these events to happen, but they did, and the owner should be held liable for it.
It sounds like Tesla are technically on the right side of their own usage instructions... but those instructions /stink/.
It's like the user manual for a microwave oven saying it "must never be used unattended" and "the user is responsible for shutting off the oven if it fails to stop when the timer reaches zero".
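Driver assist vs. autopilot: although they're the same feature, they have different meanings to the general public, and drivers may have the wrong expectations.
Should it matter? This should be a safety-critical operation. User error should mean it fails safely.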
If Tesla's response to this is actually what the article says, then that's somewhat worrying. It's never a good idea to blame the user for a failing of the product like this, especially on something like a car, beta version or not. If the car can't reliably not collide with obstacles in Summon mode, then the mode shouldn't be available to the public yet.
This also points out a failing with Tesla's "we don't need LIDAR" strategy for sensors. Ultrasonic/IR sensors around the body might be reasonable for most driving situations, but clearly there are going to be incidents like this one if the car can't see at the full height of the body at close distance.
Honestly, it's a design issue. The way Summon is activated (double tap on P) makes it very easy to trigger by a slip. The screen where you cancel I've occasionally seen take 1-3 seconds to pop up, depending on what the rest of the SoC is doing.
I could totally see a scenario where this happened: the screen popped up while he was exiting, and he wasn't able to hear or notice that Summon was engaged.
The better fix here is to have a CONFIRM on the touchscreen rather than a CANCEL. It wouldn't hinder the experience, since you already select forwards/back, and it catches this accident case.
For the record, love the car and almost everything that Tesla does but I really hope they revisit this and design it a bit more defensively.
"Unfortunately, these warnings were not heeded in this incident. The vehicle logs confirm that the automatic Summon feature was initiated by a double-press of the gear selector stalk button, shifting from Drive to Park and requesting Summon activation. The driver was alerted of the Summon activation with an audible chime and a pop-up message on the center touchscreen display. At this time, the driver had the opportunity to cancel the action by pressing CANCEL on the center touchscreen display; however, the CANCEL button was not clicked by the driver. In the next second, the brake pedal was released and two seconds later, the driver exited the vehicle. Three seconds after that, the driver's door was closed, and another three seconds later, Summon activated pursuant to the driver's double-press activation request. Approximately five minutes, sixteen seconds after Summon activated, the vehicle's driver's-side front door was opened again. The vehicle's behavior was the result of the driver's own actions and as you were informed through multiple sources regarding the Summon feature, the driver is always responsible for the safe operation and for maintaining proper control of the vehicle."
Basically, they designed an autonomous-operation mode that was easy to activate by accident and incapable of reliably avoiding crashing into things, it appears someone did and his shiny Tesla crashed into a trailer as a result, and they responded by accusing him of intentionally activating the feature and misusing it.
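Adding up the elapsed times stated in that letter (t = 0 at the double-press; times approximate):

    t+0s     double-press of the stalk; Summon activation requested, chime and pop-up shown
    t+1s     brake pedal released
    t+3s     driver exits the vehicle
    t+6s     driver's door closed
    t+9s     Summon activates and the car begins to move
    t+5m25s  driver's-side front door opened again

So roughly nine seconds passed between the (possibly accidental) activation and the car starting to move, and the owner was away from the car for over five minutes.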
1-3s? That could really lead to serious problems for such features. Normally such stuff must be guaranteed to be displayed in less than 200ms or something around that.
I'm currently wondering if this is a safety-relevant feature (according to ASIL/ISO 26262) and whether it would even be allowed to run such a feature on a component that is not designed for safety-related environments requiring realtime behavior (a Qt UI running on Linux certainly doesn't provide that, and even lots of other automotive software stacks, including AUTOSAR, give only limited guarantees).
They have a good point that if the trailer is at windshield level then the ultrasonic/radar system can't detect it, but any vision systems should be able to (like the one I imagine is used to find lane markings).
I agree completely with the Verge: this should never happen. 'Beta' is not an excuse for this kind of thing, ever.
A valid excuse? The mode was activated and a small land slide caused the car to slip down the side of a hill. THAT'S a valid excuse. 'We told you to be careful' isn't.
Yeah, we live in a web and app world where most software guys are used to having a great deal of latitude in these types of things. It's much different when you're dealing with multi-ton death machines like cars and planes.
I understand Tesla is clarifying that the driver misused the feature and that this is not normal operation, which is fine as far as it goes, but they simply shouldn't allow this to happen. If your product is not resilient against human error (or even a reasonable degree of human malice), it's not production-ready.
Tesla actually takes a great attitude on this with regard to vehicle crash safety. They take minimizing fatalities super seriously. The kinds of collisions that all other automakers would've written off as "Well man, we can't stop people from driving into poles at 80 mph", Tesla notes and does everything they can to make sure the occupants can walk away.
That's the kind of attitude we need here -- if your machine allows something bad to happen, you should not blame the user, but take every reasonable measure to correct the problem. This is the attitude that allowed the Model S to break the crash safety scale. You can't be perfect, but you can be pretty good. Saying "Well, don't press that button if there's a trailer in front of your car" isn't good enough.
I'm actually not sure if the camera in the rearview mirror on the Model S is stereoscopic or not - maybe someone on HN can confirm. I know it's used for reading speed limits and helping with lane keeping, but it can be surprisingly difficult to get accurate distance information to objects from a single lens if that's all it has.
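For context on why that matters: with a stereo pair, range to an object falls out of simple geometry (depth = focal length x baseline / disparity), whereas a single lens has to rely on learned priors or motion cues. A quick sketch with made-up numbers (these are not Model S camera specs):

    # Depth from stereo disparity: Z = f * B / d. All numbers are invented.
    def depth_from_disparity(focal_px, baseline_m, disparity_px):
        if disparity_px <= 0:
            return float("inf")  # zero disparity: effectively at infinity
        return focal_px * baseline_m / disparity_px

    # e.g. 1000 px focal length, 0.20 m baseline, 40 px disparity -> 5.0 m
    print(depth_from_disparity(1000, 0.20, 40))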
I feel that pressing "park" should be idempotent. If I press "park" twice in my car, I don't want to drive away once I get out. Tesla really needs a dedicated "start autopilot" button to make the intention to use the feature explicit.
Yes, apparently the fact that this feature was activated was messaged on the instrument cluster, but that shouldn't be sufficient to absolve Tesla from the liability of this poor UI decision.
Especially when considering, as mentioned upstream, Tesla's UI can have significant latency issues. "Several seconds" to display a confirmation (or actually a "Cancel") easily means the difference between a catching of your breath and several thousand dollars damage or worse.
It does sound like there could be an improvement to the interface here but to be fair it is a parking mode and ran into a problem due to the specific environment it was started from. In a normal parking situation it would have understood its environment and not had an accident. There are many special cases that are being learned from every day by having some autonomous features in use by the general public.
> Yes, apparently the fact that this feature was activated was messaged on the instrument cluster, but that shouldn't be sufficient to absolve Tesla from the liability of this poor UI decision.
We're talking about a company which has installed a flat glass control panel in their cars — they clearly don't care about UI/UX.
Is it just me, or if you can't detect obstacles across the car's complete bounding box - except perhaps the top and bottom - can you really have this feature work reliably?
From a technical perspective, you just don't have all the data necessary, and therefore any solution will be guesses, hacks and "best efforts", and cannot be improved via any manner of software update. That voids the "beta" claim made by the company, since no software update could remedy the situation.
Tesla has got to know this, and I think it's negligent for them to release a feature (even in "beta") when they know there are hard technical limitations (sensors, not software) that prohibit it from working properly. It puts property and people's lives at risk unnecessarily.
At the minimum, Tesla cars equipped or enabled with these features represent a higher risk to the public, and the owners of these vehicles should be required to carry high risk insurance.
> In a statement to KSL, Tesla says that Summon "may not detect certain obstacles, including those that are very narrow (e.g., bikes), lower than the fascia
May as well rewrite that to read "May run over bikers or children". If you can't implement a feature properly, then don't implement it at all. If that means current Teslas can't do it because they lack the proper sensors, then they shouldn't do it.
There is a grave danger that Tesla's premature push of autonomous features could result in a PR disaster for self-driving technology if it actually ends up killing someone.
We shouldn't have this in the wild until we're sure it's ready.
I read a really interesting book lately called "Empires of Light", about the early days of electricity. Basically, people got electrocuted all the freaking time before we really figured out how to wire things safely. At one point there was a huge tangle of telegraph and power wires haphazardly strewn together all over New York City. People would abandon old wires in place and just run new ones on top of them.
So, there's going to be some deaths. Without a doubt, before autonomous cars are fully integrated into society, there will be some deaths that would not have happened with a human driver. That's always the cost of human progress.
Of course we should do everything we can to minimize it as much as possible, but there's no way to guarantee a new technology will be 100% perfect on the first try, or the second try, or the 50th try. What scares me is that one of these deaths will happen and the public outcry will kill the whole endeavor before it ever gets off the ground. We shouldn't let that happen.
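Autonomous system design errors or failures have been involved in many accidents and deaths (trains, planes - just not yet automobiles).
Non-autonomous cars are involved in over 30k deaths per year in the US, in mostly preventable accidents.
I'm not saying that autonomous driving systems shouldn't be as good as possible, or improved constantly, but I just wanted to put this into perspective.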
Aside from the obvious "What about (security) flaws in software of (semi-autonomous) cars", I'm especially thinking about scenarios where some sort of sensor jammer is used to blind/misguide the vehicle (laser pointers blinding pilots are already a thing, so clearly there's people willing to try it out). I have the feeling we'll hear about that in the future.
Someone is going to have to die sooner or later if this technology gets into production. In 100 years I bet people will still die due to software bugs -- but hopefully very few. The important thing is if the feature has a net reduction in total deaths, and I believe that can be said of the Autopilot features that ship today.
I think if autonomous technology ends up killing somebody, PR is the last angle we should worry about. Let's first worry about pushing a technology that, y'know, kills people.
That line of thinking is erroneous in my opinion. Autonomous technology only needs to kill slightly fewer people than the existing manual technology to be worth debating, and it's a no-brainer if it kills orders of magnitude fewer people.
The public may not react rationally to deaths caused by autonomous vehicles. If the technology kills people at a lower rate than the existing technology (manual control), then pushing it seems appropriate. Worrying about good PR could save lives.
>Let's first worry about pushing a technology that, y'know, kills people.
I agree. We need to get all car ads off TV and quit pushing for people to own automobiles. Pushing this technology into the hands of as many unqualified people as possible is a recipe for death and disaster, killing tens of thousands of people each year.
I'll push for any technology which will kill fewer people.
How hard is it to disable the “dead man's switch” for this feature? Can it be done without searching the forum for hours? Is it documented in the owner's manual?
The direction of my blame here kind of depends on the answer to those questions. Of course, it's technically the owner's fault, but a feature like this really needs to be 100% idiot proof.
This is a new feature to many people, and it's exactly the type of feature that people are going to “test” outside of the ideal operating conditions. It's not Tesla's responsibility to account for every stupid decision of its customers, but Tesla should have at least done everything in their power to ensure that critical safety features couldn't be disabled (which they may have done; I don't know).
Most critical safety features on cars can't be trivially disabled (ABS, airbags, automatic seatbelt locks, etc...). The only safety feature that I can think of that can be trivially disabled is traction/stability control, but there's a real reason for this (getting out of deep snow/mud). Also, disabling traction/stability control is a multi-stage process on many cars. On late model BMWs at least, pressing the “DTC” button once will partially reduce traction/stability control, but not completely disable it. To the average person, it appears to be completely disabled. However, if you do a little research, you'll find that if you hold it down for another 5 seconds, it disables completely (sort of). Even with it completely disabled, certain aspects of the system remain on. The only way to completely disable those portions would be to flash custom software to the car (which is well beyond the ability of the average person).
Single toggle in the normal Summon settings screen with a help message about the great convenience features it enables, apparently: https://youtu.be/Cg7V0gnW1Us It's like Tesla want people to disable it. (Their original version didn't even have a dead man's switch; they added one after Consumer Reports raised concern about its safety.)
That's the "deadly valley" I've written about before - enough automation to almost work, not enough to avoid trouble, and expecting the user to take over when the automation fails. That will not work. Humans need seconds, not milliseconds, to react to complex unexpected events. Google's Urmson, who heads their automatic driving effort, makes this point in talks.
There is absolutely no excuse for an autonomous vehicle hitting a stationary obstacle. If that happened, Tesla's sensors are inadequate and/or their vision system sucks.
This is the result of applying 'she'll be right' cavalier software engineering attitudes to real-world engineering (civil, mechanical, structural, etc.) because in the vast majority of cases, software engineering is completely erasable and is fixed with a simple refresh. Agile doesn't work in the real world. It hasn't ever worked in the real world and that's why waterfall approaches and FEED is so engrained - it's proven to be the best method to avoid killing people and creating garganutan cock-ups.
Tesla should not include features that are 'beta' in a car. Tesla should not include features that promote inattentive operation of the vehicle if they are not absolutely, several-sigma robust. It doesn't matter how many disclaimers you slap on it or how cool and futuristic it is, the feature fundamentally encourages people to stand away from their vehicle, press a button, do something else and have it magically arrive next to them. This is bad engineering design no matter how you try to spin it as 'innovation' and doesn't cut it in the real world.
edit: The entire aviation industry is a textbook in this concept.
- 7 million vehicles by multiple brands were equipped with Takata airbags that could blast out metal shards (2 deaths and 30 injuries reported).
- 30 million GM vehicles were recalled for faulty ignition switches that could shut down the engine while driving, plus prevent the airbag from deploying (at least 124 deaths in accidents where that happened).
"Stericycle, a recall consultant and service firm for automakers, said there have been 544 separate recalls announced [in 2014]"
Now we're seeing what it's really like when automotive engineering and software development come together. And it's sometimes really similar to the jokes. :P
"Man bumps car into trailer" would not have been a headline, because this happens so often it's completely boring. Notably the cost of failure here is property damage. If the car were moving fast enough to endanger human life, then a driver would be behind the wheel, and fully responsible by law and common sense for the motion of the vehicle.
These headlines are going to become more common as autopiloted cars become more popular. It is important to frame them in the context of transitioning from a system that is also unreliable-- the human nervous system.
We should also expect to see a few machine failures in specific situations where a human could have avoided damage, but we must also consider them against the easily avoidable mistakes that humans make every day, which a machine can avoid with near 100% reliability.
People are afraid of airplane crashes because they're dramatic, scary, newsworthy, and out of the passengers' control, but the complete story is that stepping on a plane is a safer activity than driving to the airport. Headlines about autopilot failures will get clicks for the same reasons, but if not framed with statistics, it's just noise.
That said, what's the likely accident rate for a 95th percentile human, starting in a parked car, hitting another stationary vehicle parked directly in front of them? There must be a few "accidentally put it in drive instead of reverse" type incidents but I'd except it to be exceedingly rare.
Second, "at least as good as the average human" is a bad benchmark. Not because an average human is so bad at driving, but because people make high-level judgements about acceptable risks. For example, you are much, much less likely to dent your bosses Porch than some random car. AI is equally likely to hit either.
If it's available to a regular consumer (as opposed to, say, a test driver), it's deployed and will be used.
http://www.vanityfair.com/news/business/2014/10/air-france-f...
A key factor was that Asiana pilots were actively discouraged from hand flying the jet throughout most of the approach. They sometimes refer to younger pilots as "Children of the Magenta Line" [1] because of their over-reliance on the LCD flight director to fly the jet.
A similar situation could easily occur with Tesla autopilot which works as expected 99.9% of the time, with the driver caught off guard in the 0.1% of the time it doesn't work properly and causing a crash.
[0] https://en.wikipedia.org/wiki/Asiana_Airlines_Flight_214
[1] http://99percentinvisible.org/episode/children-of-the-magent...
I think autopilot confidence-mood light should be placed on such vehicles. This way you could predict what is going to happen with the vehicle.
https://www.youtube.com/watch?v=pN41LvuSz10
The problem here is that it's too easy to activate the feature by accident and it's not sufficiently clear when you do so. IMO it needs another confirmation step on the touchscreen after double-clicking the Park button before it goes into the "auto-park after closing the door" mode. It's not a sensor problem; the sensors aren't intended to be foolproof here, they're just a backup.
It's not like the summon mode is simply disengaging the brake and letting the car roll around randomly.
That's what it does? And it can crash into things? How was this hailed as ground-breaking technology? When it was announced it was on the front page of every technology site.
It's too easy for a driver to be momentarily distracted. Tesla should require the driver to opt in to self parking on the touch screen, rather than the current behavior which requires them to opt out. This seems only prudent for a feature that makes the vehicle suddenly move on its own.
The current sensors and software are absolutely not ready for true self driving, that much is clear to me after driving the S for 6 months.
10+ year ago for Grand Challenge it cost some money (and good luck getting decent resolution stereo with decent FPS from a pair of 1M sensors, so most relied on lidar - $3K and you have minimally decent 3D of the scene ahead). Today the tens-megapixel sensors cost like nothing, along with CPU power to process it. One can have reasonable infrared too. Ultrasound sensors - cost nothing. Short distance lidar cost close to nothing too. Millimeter radar still probably cost a bit just because no mass production. When i see Google cars - Lexus SUV - they have at least minimally reasonable set of sensors. Nobody else comes even close. I don't understand why.
But with regards to how "humans need seconds, not milliseconds, to react to complex unexpected events": That actually doesn't seem to be a problem in this situation. This human evidently had seconds to deal with it. What appears to have happened here is that the guy somehow accidentally activated the feature, ignored the alert that came up, got out of the car and stood there as the car very slowly edged toward the trailer.
Seems like something of an oversight.
That said, sure it should be able to stop on it's own, but I think they couldn't have been more clear that this is beta and the driver is still always responsible.
In my view the driver is just as liable if the put on cruise control and don't pay attention. Is it the manufacturer's fault the car slammed into a vehicle in front of them while cruise control was on? No, I think any reasonable person will be saying it's the driver's fault.
It sounds like the only two mistakes that the guy made were to make a double press of the park button (which, as far as mistakes go, isn't the most unreasonable thing to do), and to assume that a car told to be in park would, well, be in park. He ignored warnings, yes, but he was likely worried about getting out of the car and doing other things by that point, which isn't wildly unreasonable either.
He enabled Summon mode through the menu, then either accidentally or on purpose triggered Summon mode, then stepped out of the car and the car drove off. He shouldn't have assumed the car was in park when he knew that hitting the park button twice would activate this beta software that he had to manually enable in the first place.
He used Summon mode on purpose and didn't pay attention to its limitations or the rules saying "only use this on private property". The only blame Tesla has is selling a car to an irresponsible driver.
Tesla has a big target painted on its back moving as fast as it is, and people will take advantage of that. This smells of that sort of thing but one can never know without being there. It looks pretty clear this person isn't going to get any sympathy from Tesla.
Well actually this is like executing something as a sudo user. The question of safety precedence doesn't arise because you have explicitly asked for it to be disabled. Now complaining that safety features should have still taken precedence is naive, its actually more like trying to dump the blame on somebody else's for what is very clearly your mistake.
I guess now that Software is sneaking back into automobiles, we're going to shift back into "blame the user" mode?
"But he used the feature wrong!" - To which I say, why was he able to use the feature the wrong way in the first place? Why are there so many limitations (won't sense a bike; won't sense a partially opened garage door) on a feature designed and advertised as "hands off summoning of the vehicle"?
https://news.ycombinator.com/item?id=11652940
I think as more autonomous features get developed it's going to be complicated for some time regarding who is culpable for a given accident.
It's like the user manual for a microwave oven saying it "must never be used unattended" and "the user is responsible for shutting off the oven if it fails to stop when the timer reaches zero".
Driver assist vs auto pilot, although the same feature they have different meanings to the general public and drivers may have the wrong expectations.
Should it matter? This should be a safety critical operation. User error should mean it fails safely.
This also points out a failing with Tesla's "we don't need LIDAR" strategy for sensors. Ultrasonic/IR sensors around the body might be reasonable for most driving situations, but clearly there are going to be incidents like this one if the car can't see at the full height of the body at close distance.
Honestly it's a design issue. The way that summon was activated(double tap on P) is very possible to slip and do. The screen where you cancel I've occasionally seen take 1-3s to pop up depending on what the rest of the SoC is doing.
I could totally see a scenario where this happened, screen popped up while he was exiting and wasn't able to hear/notice that summon was engaged.
The better fix here is to have a CONFIRM on the touchscreen rather than a CANCEL. It wouldn't hinder the experience since you already select forwards/back and catches this accident case.
For the record, love the car and almost everything that Tesla does but I really hope they revisit this and design it a bit more defensively.
"Unfortunately, these warnings were not heeded in this incident. The vehicle logs confirm that the automatic Summon feature was initiated by a double-press of the gear selector stalk button, shifting from Drive to Park and requesting Summon activation. The driver was alerted of the Summon activation with an audible chime and a pop-up message on the center touchscreen display. At this time, the driver had the opportunity to cancel the action by pressing CANCEL on the center touchscreen display; however, the CANCEL button was not clicked by the driver. In the next second, the brake pedal was released and two seconds later, the driver exited the vehicle. Three seconds after that, the driver's door was closed, and another three seconds later, Summon activated pursuant to the driver's double-press activation request. Approximately five minutes, sixteen seconds after Summon activated, the vehicle's driver's-side front door was opened again. The vehicle's behavior was the result of the driver's own actions and as you were informed through multiple sources regarding the Summon feature, the driver is always responsible for the safe operation and for maintaining proper control of the vehicle."
Basically, they designed an autonomous-operation mode that was easy to activate by accident and incapable of reliably avoiding crashing into things, it appears someone did and his shiny Tesla crashed into a trailer as a result, and they responded by accusing him of intentionally activating the feature and misusing it.
I'm currently wondering if this a safety relevant feature (according to ASIL/ISO26262) and whether it would be even allowed to run such a feature on a component that is not designed for safety related environments which require realtime behavior (a QT UI running on Linux certainly doesn't provide that, and even lots of other automotive software stacks including Autosar give only limited guarantees).
I agree completely with the Verge: this should never happen. 'Beta' is not an excuse for this kind of thing, ever.
A valid excuse? The mode was activated and a small land slide caused the car to slip down the side of a hill. THAT'S a valid excuse. 'We told you to be careful' isn't.
I understand Tesla is clarifying that the driver misused the feature and that this is not normal operation, which is fine as far as it goes, but they simply shouldn't allow this to happen. If your product is not resilient against human error (or even a reasonable degree of human malice), it's not production-ready.
Tesla actually takes a great attitude on this with regard to vehicle crash safety. They take minimizing fatalities super seriously. The kinds of collisions that all other automakers would've written off as "Well man, we can't stop people from driving into poles at 80 mph", Tesla notes and does everything they can to make sure the occupants can walk away.
That's the kind of attitude we need here -- if your machine allows something bad to happen, you should not blame the user, but take every reasonable measure to correct the problem. This is the attitude that allowed the Model S to break the crash safety scale. You can't be perfect, but you can be pretty good. Saying "Well, don't press that button if there's a trailer in front of your car" isn't good enough.
Yes, apparently the fact that this feature was activated was messaged on the instrument cluster, but that shouldn't be sufficient to absolve Tesla from the liability of this poor UI decision.
We're talking about a company which has installed a flat glass control panel in their cars — they clearly don't care about UI/UX.
From a technical perspective, you just don't have all the data necessary, and therefore any solutions will be guesses, hacks and "best efforts", and cannot be improved on via any manor of software update. This voids the "beta" claim made by the company, as no software update could remedy the situation.
Tesla has got to know this, and I think its negligent for them to release a feature (even in "beta") when they know there are hard technical limitations (sensors, not software) that prohibit it from working properly. It puts property and people's lives at risk unnecessarily.
At the minimum, Tesla cars equipped or enabled with these features represent a higher risk to the public, and the owners of these vehicles should be required to carry high risk insurance.
May as well rewrite that to read "May run over bikers or children". If you can't implement a feature properly, then don't implement it at all. If that means current Teslas can't do it because they lack the proper sensors, then they shouldn't do it.
We shouldn't have this in the wild until we're sure it's ready.
So, there's going to be some deaths. Without a doubt, before autonomous cars are fully integrated into society, there will be some deaths that would not have happened with a human driver. That's always the cost of human progress.
Of course we should do everything we can to minimize it as much as possible, but there's no way to guarantee a new technology will be 100% perfect on the first try, or the second try, or the 50th try. What scares me is that one of these deaths will happen and the public outcry will kill the whole endeavor before it ever gets off the ground. We shouldn't let that happen.
autonomous system design errors or failures have been involved in many accidents/deaths (trains, planes, just not yet automobiles)
non-autonomous cars are involved in over 30k deaths per year in the US in mostly preventable accidents
i'm not saying that autonomous driving systems shouldn't be as good as possible or improved constantly but just wanted to put this into perspective
I agree. We need to get all car ads off TV and quit pushing for people to own automobiles. Pushing this technology into the hands of as many unqualified people as possible is a recipe for death and disaster, killing tens of thousands of people each year.
I'll push for any technology which will kill fewer people.
The direction of my blame here kind of depends on the answer to those questions. Of course, it's technically the owner's fault, but a feature like this really needs to be 100% idiot proof.
This is a new feature to many people, and it's exactly the type of feature that people are going to “test” outside of the ideal operating conditions. It's not Tesla's responsibility to account for every stupid decision of its customers, but Tesla should have at least done everything in their power to ensure that critical safety features couldn't be disabled (which they may have done; I don't know).
Most critical safety features on cars can't be trivially disabled (ABS, airbags, automatic seatbelt locks, etc...). The only safety feature that I can think of that can be trivially disabled is traction/stability control, but there's a real reason for this (getting out of deep snow/mud). Also, disabling traction/stability control is a multi-stage process on many cars. On late model BMWs at least, pressing the “DTC” button once will partially reduce traction/stability control, but not completely disable it. To the average person, it appears to be completely disabled. However, if you do a little research, you'll find that if you hold it down for another 5 seconds, it disables completely (sort of). Even with it completely disabled, certain aspects of the system remain on. The only way to completely disable those portions would be to flash custom software to the car (which is well beyond the ability of the average person).