I really hate the whole Tesla angle of "technically we are not lying, but we know people are going to misremember and misrepresent what we are writing".
Take for instance:
> The driver had received several visual and one audible hands-on warning earlier in the drive...
What this means is that during the incident itself there were no visual or audible cues that a crash was about to happen.
What they are saying is that earlier in the drive, at a point where the car did not crash, the car gave visual and audible cues. And you have to ask: why the hell is that relevant? It isn't. They are stating it as a fact because they know some people will incorrectly claim that the car warned the driver prior to the crash.
For all their eagerness to explain what happened, why aren't they talking about what actually went wrong? Why did the car drive straight into a barrier? If their claim is that it was caused by the driver not having his hands on the wheel for 5 seconds, then they need to fold the company and give up trying to create self-driving cars. That is not acceptable for a self-driving system.
The fact that the barrier was damaged intensified the accident, but that does not in any way excuse their system driving straight into it. And no, you can't get out of culpability by claiming statistical superiority. That's like a gang member trying to get out of jail after killing a rival gang member because his gang statistically kills fewer people. Tesla has put a product in the hands of consumers; when it kills those consumers, they need to step up to the plate and be honest about their fuckups, not just blame the drivers, blame the infrastructure, and point to statistics.
> And no, you can't get out of culpability by claiming statistical superiority.
Quite. Tesla is deliberately comparing their modern vehicle design & wealthy driver base (statistically one of the safest cohorts) with the entirety of the U.S. driving population. This is bad statistics: we expect Tesla vehicles to have far fewer accidents per mile than the mean US vehicle, because Teslas are expensive modern vehicles with (relatively) wealthy drivers who can afford to keep them maintained & will themselves be in better health than the mean US driver.
Don't tell us how safe Teslas are compared to the entire US driving population: tell us how safe they are compared to equivalent vehicles from other manufacturers. My working assumption is that Tesla doesn't do that because it would be far less flattering to the 'A Tesla is totally safe, honest' PR boosterism that runs through every Tesla press release.
> Don't tell us how safe Teslas are compared to the entire US driving population: tell us how safe they are compared to equivalent vehicles from other manufacturers.
I'm having a hard time finding an exact number comparison, but:
- Tesla claims 1 fatality per 320M miles
Compared to
- A 2015 study of 2011 model year vehicles showed, for instance, that the Volvo XC90 had never been involved in a driver fatality (1)
- There are about 10K 2011 XC90s in the US (2)
- Avg. US drivers cover 13,400 miles / year (3)
So:
- 4 years studied * 10K vehicles * 13,400 avg miles -> 536,000,000 fatality-free miles
This is obviously super hand-wavy, but I think it's fair to state as a hypothesis that Tesla could be twice as deadly as the safest high-end vehicles; you'd need to design some experiment to attempt to falsify it before feeling confident this is correct.

(1) https://www.freep.com/story/news/nation/2015/01/29/study-cha...
(2) http://carsalesbase.com/us-car-sales-data/volvo/volvo-xc90/
(3) https://www.fool.com/investing/general/2015/01/25/the-averag...
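For anyone who wants to redo the back-of-envelope numbers, here's a minimal sketch in Python using only the figures above; the "one hypothetical XC90 fatality" is my assumption, purely to make a ratio computable from zero observed deaths:

```python
# Back-of-envelope comparison using the thread's numbers.
study_years = 4                 # study window for 2011-model-year vehicles
xc90_fleet = 10_000             # approx. number of 2011 XC90s in the US
avg_miles_per_year = 13_400     # average annual miles per US driver

xc90_miles = study_years * xc90_fleet * avg_miles_per_year
print(f"XC90 fatality-free miles: {xc90_miles:,}")  # 536,000,000

# Tesla's claim: 1 fatality per 320M miles. Zero XC90 deaths gives no rate,
# but even granting the XC90 one hypothetical fatality over those miles,
# Tesla's claimed rate comes out ~1.7x worse -- in the same ballpark as the
# "twice as deadly" hypothesis above.
tesla_miles_per_fatality = 320_000_000
print(f"{xc90_miles / tesla_miles_per_fatality:.2f}x")  # 1.68x
```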
> Quite. Tesla is deliberately comparing their modern vehicle design & wealthy driver base (statistically one of the safest cohorts) with the entirety of the U.S. driving population. This is bad statistics: we expect Tesla vehicles to have far fewer accidents per mile than the mean US vehicle, because Teslas are expensive modern vehicles with (relatively) wealthy drivers who can afford to keep them maintained & will themselves be in better health than the mean US driver.
You're going to have to explain to me how the wealth of a driver has any bearing on their ability to pilot a self-driving car.
Do you expect any cohort to be 370% above average? That's amazingly amazing. That must mean some other cohorts are correspondingly far below the average; what are they?
This is by far the most important point in this discussion. According to this article (https://www.greencarreports.com/news/1107109_teslas-own-numb...), Tesla's statistics are deliberately misleading: Tesla compared its Autopilot crash rate to the overall U.S. traffic fatality rate, which includes bicyclists, pedestrians, buses, and 18-wheelers.
You could argue that Tesla should be compared to luxury vehicles (which have more safety features), but even if you just compare their crash rate against that of U.S. drivers of cars and light trucks, the Tesla Autopilot driver fatality rate is almost four times higher than that of typical passenger vehicles...
They should test self-driving cars (on closed tracks) with drunk and deliberately stupid drivers. Do they do this? Because if you mass-market such a car, you will get people using it to drive themselves home drunk.
They can be far more specific: just tell us how Model S drivers without (or not using) Autopilot do compared to those with and using it. This would control for almost every variable except Autopilot. I find the fact that they don't release this number extremely damning.
Tesla always blames the "driver" (and publicly so, no less) when drivers are killed by their "autopilot" (which is actually driving, i.e. fulfilling the purpose indicated by its name, despite being not very good at it...) -- either directly, or by carefully stating misleading statements that are both technically correct and incredibly dishonest in the context of both the events and the surrounding text.
Why people are still using their shit is beyond me. Why they are still authorized to sell it under this marketing (the name they gave it, the mismatch between the stated restrictions and actual practice), I also can't understand: it should be better regulated by the authorities.
This is probably the worst part about driving a Tesla: Your car is going to testify against you. Because Tesla can and will make a statement like this, and they will always throw you under the bus. Which is a terrible way to treat a customer, as a general practice.
You know how many plane crashes are not the fault of human error? Very few.
I have a friend who investigates plane crashes. He’s told me that between pilot error, human error, hubris, flying when conditions are too dangerous, and poorly trained or sloppy mechanics, only about 1% really can’t be classified as being caused by humans. He said the stats are a bit skewed because, like any organization that reports on human performance, they’ve been influenced to be a bit “flexible” when assigning blame.
Sadly I have to agree with you on the narrative angle. While I am very impressed with what Tesla has accomplished with their autopilot, I find the careful construction of, and omissions in, this follow-up post poorly done.
They omit what speed the car was travelling at and what lane it was in. They omit the time at which the car stopped behaving the way it had been (when did it start heading for the crash barrier? At what point did the wheels closest to the barrier cross the line? etc.)
They have previously mentioned that their cars have navigated this spot many times in the past (over 200, as I recall). Did they note the number with and without the crash barrier intact? How many times was the car in the exit lane, and how many times in the lane to the right of the exit lane? How many times have they navigated past it with a damaged crash barrier?
The driver had driven this route before; it was his commute to work. Had he driven that route with this car on autopilot before? Did it work then? How many times? How many seconds were there between when the driver's car was doing something right (in lane, etc.) and doing something wrong (straddling the lane)? The response indicates there were 5 seconds of visibility coming up to the crash barrier; at what point was the autopilot on a collision course? All of those five seconds? Or the last 500 ms?
Reading the Tesla response I felt I wasn't getting the whole story, and was getting a whole lot of deflection. That isn't the tone that inspires confidence in me. And that makes me sad because they have done much better in the past. I cannot help but speculate that this time they feel they might have been contributory at least and that a full disclosure would be used against them in a civil suit.
Yeah, it's pretty crafty PR. The CPC must be green with envy, while Uber is still ruing their reputation. Such a difference in conversation: so many people laying the blame squarely on the driver (he is ultimately the PIC, tbh, but Tesla and particularly CalTrans messed up badly), while Uber is being raked over the coals when somebody literally walked in front of their car.
Just sayin'.
EDIT: While I believe Tesla and the driver are the ultimate source of the accident, I believe CalTrans messed up worse with that barrier. At least Tesla owns up to the limitations and warns drivers to stay in control, but why the hell was that barrier not replaced?! What systemic mediocrity is playing out there, that they have road work signs with dates 3 years old on them, and a critical safety barrier is just missing with traffic flowing as usual?!
The analysis I've seen suggested that the self-driving system in the Uber vehicle had 4 or 5 seconds to detect the pedestrian crossing the road in front of them. Driving into a pedestrian on the road in that circumstance is /not/ acceptable for any driver, be they human or automated.
Don't be fooled by the video footage Uber released from the in-car camera - the quality is appalling. Compare it with the dashcam footage taken by people driving the same road at the same time & you'll see that visibility should have been perfect & a human driver would easily have avoided the pedestrian, who was doing something that all of us do every day - crossing the road.
> While I believe Tesla and the driver are the ultimate source of the accident, I believe CalTrans messed up worse with that barrier.
When reading something like this in sci-fi, I always thought the stories were over the top and tongue in cheek, to make some other point. Now I see they were real.
Companies are allowed to create robots that kill people. Then we sit around here and discuss how much of the killing is on the robot's manufacturer, the robot's owner, the street lights and the victim.
I think CalTrans owned up to its limitations: you should expect the road to be flat and not end abruptly without appropriate signage, but you are fully responsible for controlling the vehicle at all times and maintaining an appropriate speed - so that when the road does inevitably end, you can react safely. At no point are the crash attenuators a guaranteed component that the user of the road can expect to be in place and should rely on to save his life.
That is not to say CalTrans did not break some contractual obligation to replace that infrastructure in a reasonable timeframe, and shouldn't be held responsible in a commercial sense. But they are not in any way liable for the accident; the missing hardware was not a reason to stop traffic, and all such devices are offered on a best-effort, good-to-have basis.
It's widely known that RADAR systems are not able to detect stationary objects, only moving ones. Actually, they are able to detect them, just not tell where the return comes from, so a barrier in the middle of the road and a traffic sign on the side of the street would look the same to the RADAR system.
But to be honest, I am more worried about the markings on the road than about the autopilot's failure to foresee the accident: https://imgur.com/a/hAeQI
What's wrong with the US road administration? Why does this even look like a driving lane? Where are the obvious markings? It's a very misleading road layout; I am curious how many accidents happen there every year.
Given how the road looks, it makes more sense why Tesla is emphasizing the fact that the driver wasn't paying attention to the road.
Edit: Since people are curious about the limitations of the RADAR, the car's manual mentions this limitation:
"Traffic-Aware Cruise Control cannot detect all objects and may not brake/decelerate for stationary vehicles, especially in situations when you are driving over 50 mph (80 km/h) and a vehicle you are following moves out of your driving path and a stationary vehicle or object is in front of you instead."
> It's widely known that RADAR systems are not able to detect stationary objects, only moving ones. Actually, they are able to detect them, just not tell where the return comes from
That doesn't make any sense. Radar systems do detect stationary objects just fine. In fact, from the point of view of the moving car, almost nothing is stationary.
I wouldn't be surprised if you're correct about the markings on the road. I almost crashed into a temporary barrier driving at night because there were two sets of lane markings - the old ones, and the ones going around the barrier. Construction workers simply didn't bother to erase the old ones.
> It's widely known that RADAR systems are not able to detect stationary objects, only moving ones.
Uh, what?
I think you're thinking of Doppler radar systems. You can do static scene imaging with radar, no problems (I've written the code to do it, even!).
There's no way in hell a car is going to be using a Doppler radar. In all likelihood, they're using a plain-old FMCW system, and it can absolutely detect stationary stuff.
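To illustrate the point, here's a minimal FMCW ranging sketch; the chirp bandwidth and duration are assumed, illustrative values (not any specific automotive radar's spec). Range falls out of the beat frequency regardless of whether the target moves:

```python
# FMCW ranging: the round-trip delay to a target turns the chirp slope into
# a beat frequency, so a stationary barrier produces a return like anything else.
C = 3.0e8                        # speed of light, m/s
BANDWIDTH = 300e6                # chirp bandwidth B, Hz (assumed)
CHIRP_TIME = 50e-6               # chirp duration T, s (assumed)
SLOPE = BANDWIDTH / CHIRP_TIME   # chirp slope S = B/T, Hz/s

def target_range(beat_freq_hz: float) -> float:
    """R = c * f_beat / (2 * S); target motion is irrelevant to this measurement."""
    return C * beat_freq_hz / (2 * SLOPE)

print(target_range(6e6))  # 150.0 m -- about the unobstructed view in this crash
```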
While I find it deplorable that several companies are essentially using people (although in this case apparently willing participants) to test autopilot or autonomous driving software, I also find the lack of prominent road markings, signage, or any significant shock-absorbing zone simply astonishing!
It makes me wonder, is that layout actually on purpose?
I truly find it hard to believe that a road or traffic authority in this day and age could design that on purpose!
It's quite well known that human attention is a tenuous thing, and that the attention given to an object is primarily proportional to its size. But here we have a barrier end that is signed as being about as dangerous as a bit of debris.
Radars do just fine detecting stationary objects, what they can't do is tell whether the stationary object is blocking the road, overhead, or to the side of a curve in the road. As most radar reports of stationary objects fall in the latter two categories, the adaptive cruise control system filters them out of consideration.
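If it helps, here is a hedged sketch of that filtering heuristic in Python. The type, field names, and threshold are all invented for illustration; no claim this matches any manufacturer's actual code:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RadarReturn:
    range_m: float
    relative_speed_mps: float   # closing speed toward the radar, ego frame

def track_candidates(returns: List[RadarReturn],
                     ego_speed_mps: float,
                     min_ground_speed: float = 0.5) -> List[RadarReturn]:
    """Keep only returns whose over-the-ground speed is non-zero.

    A return closing at exactly ego speed is stationary in the world frame.
    The radar can't tell whether it's an overhead sign, roadside clutter on
    a curve, or a barrier in the lane, so ACC drops it -- which is precisely
    the failure mode for a stopped obstacle dead ahead.
    """
    candidates = []
    for r in returns:
        ground_speed = ego_speed_mps - r.relative_speed_mps
        if abs(ground_speed) > min_ground_speed:
            candidates.append(r)
    return candidates

# Example: ego at 25 m/s. A stopped barrier 150 m ahead closes at 25 m/s
# (ground speed 0 -> filtered out); a lead car at 20 m/s is kept.
returns = [RadarReturn(150.0, 25.0), RadarReturn(60.0, 5.0)]
print(track_candidates(returns, ego_speed_mps=25.0))  # only the 60 m return
```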
>> The driver had received several visual and one audible hands-on warning earlier in the drive
> What they are saying is that earlier in the drive, at a point where the car did not crash, the car gave visual and audible cues. And you have to ask: why the hell is that relevant?
I see it as relevant that the driver had his follow distance set to the lowest setting and that several times in the drive prior to the accident he had to be prompted to keep his hands on the wheel. This speaks to a driver who wasn't paying attention the way they were supposed to. That would increase the likelihood of him missing an upcoming hazard and failing to respond in time.
Regardless of that, if he had reported an issue in that area to Tesla, why the hell was he not driving it himself at the time? I know that I personally value my life enough that I wouldn't trust AP if it had been veering toward the wall repeatedly in one area.
>> That is not acceptable for a self-driving system.
Yes. But as we all know autopilot is not the same as self-driving.
>> And no, you can't get out of culpability by claiming statistical superiority.
In fact no one has found Tesla culpable for the accident. You can, however, defend the safety of your system by referring to statistics. (In fact, that is the ONLY way to measure safety level.) And your analogy does not hold. It is more like a parachute company saying their parachutes are safer because they fail less often than the other brands'. Which brings me to my final point: even if it turns out Tesla was culpable and negligent in this accident, it would not be a complete judgment if we leave out the inherent risks in car manufacturing and the incidents attributed to other companies.
I'm pretty sympathetic with PR departments doing what they have to do, in terms of putting out the official boilerplate remorse, the promises to cooperate and keep people up to date, as well as publishing the incident data, and then keeping mum until the investigation is finished.
But Tesla is shoveling in the bullshit pretty early on. Besides the fuzzy and deliberately vague stats (why is the mileage attributed to Teslas "equipped with Autopilot hardware"?), this graf is just despicable:
> Tesla Autopilot does not prevent all accidents – such a standard would be impossible – but it makes them much less likely to occur. It unequivocally makes the world safer for the vehicle occupants, pedestrians and cyclists.
"unequivocally" has an actual meaning: leaving no doubt. And yet it's used following the paragraph that pretends Tesla's and all U.S. vehicles are directly comparable. I know it sounds silly to focus on that word but someone in Tesla PR thought it needed to be used with the pile of crap numbers. Such blatant and pointless deception -- "significantly" would work just as well for that inane sentence -- feels like a strong a signal to doubt the rest of the info given, as if the evidence weren't already so obviously dubious.
> What they are saying is that earlier in the drive, at a point where the car did not crash, the car gave visual and audible cues. And you have to ask: why the hell is that relevant? It isn't.
The vehicle notifies the driver that it's having difficulty operating in the driver-elected autopilot mode given the conditions. The driver elects to continue using the failing operating mode despite being warned.
I fail to understand how "It isn't" relevant that the driver chose not to take manual control when warned that conditions were not ideal.
Perhaps they don't know (yet?) what exactly went wrong? The car was heavily damaged; I am surprised they retrieved any logs from it at all. They might be able to reconstruct more information about the accident, but unless they have a complete log including video (unlikely), it will take time to reconstruct.
As there is an ongoing investigation into the accident, it is understandable that they are not making any statements beyond what they can prove without a doubt at the current point in time.
Sure, but the ONLY reason many of us can figure for mentioning the previous warnings the way they did is how it would get interpreted as a sound bite. Could be wrong, but this isn't conspiracy-theory territory. These companies have teams of highly paid communications experts (spin doctors) engaging in social engineering.
Like you’ve predicted, first headline on Techmeme: “Tesla says Autopilot was engaged during fatal Model X crash on Mar. 23, and the driver didn’t respond to warnings to take control of the car before the crash”. Although the linked article has a different title; might’ve been changed.
Also, ever notice how in none of the Tesla Autopilot accidents, Tesla finds itself guilty?
I don't even read these posts from Tesla anymore because I know each one of them is just justification for how Tesla did nothing wrong, and it's the fault of the stupid human driver for using its Autopilot "wrong" and dying in the process. Meanwhile, Tesla is laughing all the way to the bank selling the usage of Autopilot as one of the primary features of its cars.
It almost reminds me of how Coke markets its sugar-filled soda "Drink as much as you can at every meal and at any event...but you know, in moderation (because that's what the government makes us say with its dietary guidelines)".
Tesla is kind of like that. It tells people to use Autopilot because it's such a great experience and whatnot, but then fills its terms and conditions with things like "but don't you dare actually lay back and enjoy the Autopilot driving. You need to watch the car like a hawk, and if you happen to ignore any of our warnings, well, it's your stupid-ass fault for ignoring them, even though we wrote everything there in the 11-point font." It's very irresponsible for any company to do stuff like this.
I don't know what the exact process is now, but I wish the authorities were able to get access to untampered logs of the accident, after the fact, from the self-driving car companies.
I don't care if it's a car company that has "Elon Musk's reputation" (whom I like) or Uber's reputation. These accidents, especially the deadly ones, should be investigated thoroughly. No company will ever have an incentive to show proof that leads to itself being found guilty. It's the job of the authorities to independently (and hopefully impartially) verify those facts.
I'm a big fan of Tesla and its electric vehicles, big fan of Tesla Energy, big fan of SpaceX. But I've always criticized them for Autopilot, because it was obvious to me from day one that they're being very irresponsible with it and putting the blame on humans: they marketed it as a self-driving feature for years (they don't do it as much now, but they still use the same name), while the system is far from being anywhere close to a complete self-driving system, thus putting anyone who uses it in danger of dying.
Tesla's huge success in people's mindspace of self-driving vehicles is baffling to me. Correct me if I'm wrong, but these are the self-driving features in Teslas:
- Cruise control maintains speed, will automatically decelerate if a car in front slows down, speed back up if they speed or go away, etc.
- Collision avoidance: the car will automatically emergency brake if it believes a collision is imminent.
- Lanekeeping: the car will stay in the lane without driver attention, although if it doesn't detect your hands on the wheel with enough frequency, the system will disengage.
- The car can park itself.
A quick look at Wikipedia tells me it can also change lanes and be "summoned."
Most of these features (cruise control, collision avoidance, and lane-keeping being the core feature set) have been in wide deployment among most car manufacturers since at least 2017, down to even cheap economy cars that people are buying right now without getting on waiting lists or anything of the kind. You can buy a car right now with this core feature set for around $18k.
Not only that, people seem to dramatically overestimate what kind of auto-pilot Teslas have. Speaking with my doctor about it, he assured me that "you just tell the car where to go, and it goes there." How did Teslas get so hyped up in the common man's imagination?
> It's very irresponsible for any company to do stuff like this.
It's very irresponsible for regulation to allow them to do that, and statistically for us to vote for politicians who are not against that kind of behavior. Big companies rarely give a shit about killing people in the process of making money, as long as they are not punished for it. (And we, btw, mostly don't give a shit when the people being killed live in other countries, even when the process is far more direct than the first-world problem that interests us here.)

People who organized those kinds of criminal behaviors used to be held responsible for the consequences, but that is rarer now, and the laws in various countries have actually been changed so that the people managing companies are far less accountable than before for the illegal and otherwise bad things they order. As for companies being accountable, it is usually via punitive damages orders of magnitude smaller than the extra profit they made doing their shit, so why would they stop? The risk of the CEO being sent to jail would stop them. Ridiculously small fines against the company, in the astonishingly rare cases where they are even actually fined, will not.
It reminds me of South Park's drink responsibly ad [1]. You could almost replace 'drink' with 'drive' and it would sound like the current autopilot spin.
To me there is a stunning lack of compassion and decency in the response to these incidents by both Tesla and Uber.
Uber made sure to point out that the victim of their incident was homeless. Tesla is pointing out how the driver received unrelated cues earlier in the journey. None of this information is relevant. They’re trying to bamboozle the reader in an effort to improve their image at the expense of victims who can’t defend themselves.
I don’t understand why it is so impossible for these companies to act humbly and with a sense of dignity around all this. I don’t expect them to accept responsibility if indeed their technology was not to blame, but frankly that isn’t for them to decide. Until the authorities have done their jobs, why not show remorse, regardless of culpability, as any decent human would?
I agree with the lack of compassion bit -- I think the messaging could have been far more empathetic. However...
> Uber made sure to point out that the victim of their incident was homeless. Tesla is pointing out how the driver received unrelated cues earlier in the journey. None of this information is relevant.
I don't understand how you can equate those first two lines. Uber's observation is clearly irrelevant, but the fact that the Tesla driver received multiple "get your hands back on the wheel" notifications, as close as six seconds before the accident seems very relevant to me.
> the fact that the Tesla driver received multiple "get your hands back on the wheel" notifications, as close as six seconds before the accident
But that isn't what their statement says. It says the victim had his hands off the wheel for six seconds before the crash, and that he received hands-on warnings "earlier in the drive." It does not say that during those crucial six seconds he was being warned. Nor does it explain the fact the car plowed into a barrier.
Yeah, but it seems that they may be, statistically speaking, right. The fears that surround self driving cars may be unfounded, and if they are unfounded, then it would be a shame to throw the baby out with the bathwater.
They're right when they say we don't speak of the accidents that didn't happen, and I bet there's a ton of them.
As someone who's been in multiple car crashes as a passenger, I really no longer want to be at the mercy of human drivers.
Brief reminder of the same discussion, in a different context.
Yet no one, and I really mean no one, complains about computer-assisted landings and takeoffs. I'm not sure why. Lots of passengers even sleep through them.
I guess we've come to realise that the man-machine symbiosis works well for flight. But it might take time for this to get ingrained in our culture when it comes to driving.
Exactly. It's very important to ensure negative (and unjustified!) publicity does not affect technology perception, adoption and growth. There are so many Luddites who take every chance to hammer in another nail. It's understandable that any tech pioneer can easily sense being attacked and has to quiveringly rebuff any hysteria attempts.
What exactly am I looking at? The video itself says it's a computer-piloted takeoff, the YouTube title says it's a computer-piloted landing, and the top-voted comment says it's a human-piloted fly-by.
It is utter bullshit that Uber supposedly said the victim was homeless. It was the police chief; Uber never mentioned it once, and their response was completely appropriate.
This was clearly vetted (or written) by a lawyer who is interested in changing the narrative for his future wrongful death defense rather than exercising empathy.
It reads like it was written by a company that has a lot of scientists working there, is all. They are simply explaining what happened according to the system log files before the incident and then reminding the public that it is still safer.
What you are saying here is: "I really wish the PR person who wrote this chose words that would cause readers to feel as though the company empathized more with the victim."
Ultimately companies don't have feelings, their employees do. PR statements and the words chosen (unless written by an individual like the CEO) don't somehow make a company into a person and don't reflect the feelings of their employees: they are a crafted tool to create a certain outcome by instilling thoughts in the mind of the reader and/or providing legal cover.
> I don’t understand why it is so impossible for these companies to act humbly and with a sense of dignity around all this.
I would venture a guess this is an outcome of the "sue for everything" legal environment in the US. Any statement that could be construed as anything other than "we did nothing wrong" could be seized on by shareholders suing the company for securities fraud and demanding class action status.
Really not much additional info here aside from the number of warnings the driver had. Also, the standard "Tesla is 10x safer" metrics that get pulled out each time a crash gets sensationalized.
What I think they fail to address, especially in this case, is that the autopilot did something a human driver who was paying attention would never do. Autopilot does a great job of saving people from things even a wary driver would miss, much less a negligent one, but the fatal accidents in the statistics are not from people who were fully watching the road missing the fact that there's a concrete barrier with yellow safety markings directly in their path and hitting it head-on for no good reason (e.g. evasive action because of another driver, or a stupid last-minute "oh crap, that's my exit").
I want autopilot to succeed, and I want Tesla (and Musk) to succeed, and for the sake of their public image they have to realize that this isn't an average accident statistic, a lapse in attention, or evasive maneuvering. It's a car that seemingly plowed right into a concrete barrier while still under complete control. That's not a mistake a healthy human would make.
> the autopilot did something a human driver who was paying attention would never do.
Then how did the crushed barrier get crushed before the Tesla hit it? Clearly, the stretch of road is unsafe enough to trick humans drivers (and, clearly, Tesla should improve).
I'm obviously not privy to the details of the prior crash, but I'm pretty sure any healthy human would not deliberately drive into the barrier.
The most likely scenarios I can think of would be not paying attention and drifting into the barrier, attempting to avoid a car merging into the lane (and not paying enough attention), or being struck by another car and being forced into the barrier.
It's a somewhat easy accident to have in heavy traffic if the lanes aren't marked clearly and you don't see the barrier because of cars ahead of you. Not a factor in the Tesla crash (they had a clear line of sight) or in the original crash (which was a DUI).
«the autopilot did something a human driver who was paying attention would never do»
I'm sure that out of the 1.25 million annual automobile-related deaths, plenty of drivers were paying attention and still did stupid things similar to this accident.
Using global statistics greatly skews your argument. While all these deaths are tragic, many of them could be prevented through law and regulation of human drivers.
Humans do exceedingly stupid things all the time because they stop paying attention, even momentarily (or subconsciously).
We put big lights on the back of cars that light up when they brake. And yet a driver looking directly at a huge object with two lights, one that rapidly grows larger right in front of them, does not always avoid a collision. Or even a chain of collisions. I don't get on the road much, and yet even I've seen a ton of accidents that are baffling and can only have been the result of a driver not paying attention for a bit.
I'm reminded of how in quite a few places removing signs and lights actually improved safety because it forced drivers to stay aware instead of 'driving on autopilot', so to speak.
I think that's the real issue here. The more we outsource our attention to a machine, the more important it is that said machine does MUCH better than we humans do. Especially if a mistake can be deadly.
But I wouldn't be surprised if, indeed, this accident could technically have happened just as easily with a non-autopilot car where the driver had a little 'micro-sleep', got distracted by something in his field of view, or mistakenly thought he was in a lane and didn't notice the (let's be fair) ridiculously bad markings that were the only way to tell that part of the 'gray' stuff in front of him was in fact a wall of concrete.
I mean, just look at the image: https://imgur.com/a/iMY1x . Half of what makes the barrier stand out is the shadow!
All that said I might sound more argumentative than I am. I do agree with most of your comment.
But the average person in the US, anyway, is not a good driver. I very often see people get surprised by the lane ending at that exact spot. They should put a rumble strip leading up to it.
So, my question is: is it fair to compare AP to a human driver paying attention, or is it more fair to compare it to an average driver?
I mean that just for the sake of argument. In this specific case, the NTSB or someone should ban AP. It is so obvious what's about to happen there; an AP should do what a driver should do, which is take the ramp whether it's a wrong turn or not. So many a-holes try to squeeze in at the last second (oops) when they should just take the damn exit or miss the damn exit, as the case may be. What AP did here is what a poor and panicked driver would do, and that's just not acceptable.
I want electric cars to succeed, as I am long since bored of breathing in vehicle fumes. Automated driving is cool sci-fi tech, but it holds nowhere near the same sense of necessity as far as I am concerned.
So, I thought it wasn't all that clever in the first place to try and marry the risks of getting electric cars into the market with the risks of telling the extremely wealthy that they didn't need to hold the damn steering wheel.
This statement is meaningless circular logic. You can handwave away any incident with a human driver by saying he wasn't "healthy".
A human pilot has intentionally flown an airplane full of passengers into the ground, with full control, because he wanted to kill himself. The airline believed he was "healthy".
But this wasn't a self-driving vehicle. It isn't capable of dealing with all situations, and it warned the driver for 6 seconds to take control, which didn't happen.
I thought the autopilot warned the driver to take control of the vehicle, or is that incorrect? There is probably a lot more we don't know, and it would be premature to place the blame on the autopilot and Tesla. This isn't a level 5 system. It still requires the driver's attention, and that is something we need to consider (he obviously didn't have his hands on the wheel).
Technically, Tesla's statement says he was warned at some point during his drive to take control. That could have been 5 seconds into his commute, or it could have been 5 seconds before he hit the barrier. He may have taken control at that time and then re-engaged it later. He may or may not have been warned to take control before hitting the barrier.
It's a mistake that healthy humans make all the time, which is why those crash barriers exist in the first place, and why that particular crash barrier was damaged by a previous crash.
You’re missing the point. The crucial fact according to OP is that the car did something that a fully aware driver would not ever do. It’s at least worth acknowledging that.
In keeping with the "Autopilot" terminology, this was "Controlled Flight into Terrain".
Tesla demonstrates their usual abuse of statistics. 1.25 fatalities per 100 million miles is the average across all types of terrain, conditions, and cars.
The death rate in rural roads is much higher than in urban areas. The death rate on highways is much lower than average. The death rate with new cars is much lower than average.
The autopilot will not engage in the most dangerous conditions. This alone will improve the fatality rate, even if the driver does all the work.
Tesla cars are modern luxury cars. They appear to have done a great job building a well constructed, safe, car. This does not mean their autopilot is not dangerous.
Note: Tesla is comparing the fatality rate of autopilot-equipped cars with the average accident rate. They are NOT only counting miles where autopilot is used, they are counting all miles driven by their cars. It could be that Tesla drivers are inherently safer than the general population, or that the cars themselves are much safer. What they are not doing (at least with the data) is making a statistical claim about autopilot safety.
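The selection effect mentioned above (Autopilot refusing to engage in the most dangerous conditions) is easy to make concrete with a toy calculation. The rates below are invented purely for illustration; they are not Tesla's or anyone's real figures:

```python
# If Autopilot only engages on low-risk roads, the blended fatality rate
# improves with zero contribution from the software itself.
highway_rate = 0.5   # hypothetical fatalities per 100M miles on freeways
rural_rate = 2.5     # hypothetical fatalities per 100M miles on rural roads

average_mix = 0.5 * highway_rate + 0.5 * rural_rate      # typical driver
autopilot_mix = 0.9 * highway_rate + 0.1 * rural_rate    # AP-heavy driving

print(average_mix, autopilot_mix)  # 1.5 vs 0.7 -- a >2x gap from road mix alone
```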
That's a very good point. But it raises a new question: why didn't Tesla compare the safety record of Teslas with Autopilot against Teslas without Autopilot?
Exactly. Tesla drivers will be older, more affluent, and I bet, generally safer drivers. I'm sure an insurance company could easily debunk those statistics.
> This does not mean their autopilot is not dangerous.
It also doesn't mean that it is dangerous. That bag of Doritos that you ate might also be dangerous. Let's use facts and not vague, worrisome complaints. So if you want to talk about the higher danger of rural roads, please give a number, and then give a number for Tesla, or estimate one. Don't just say "urgh".
Outside of the Tesla marketing department, conditional probability has been known for 300 years. Comparing the average fatality rate to the fatality rate of a modern car with Autopilot is an egregious abuse of statistics. How can anyone defend this practice?!
Just as an example, click on California (it is mostly consistent all over) on the map [1]. California has a lower average fatality rate of 1.01 per 100 million miles.
Rural roads have 2.62 fatalities per 100 million miles.
Urban roads have 0.70 fatalities per 100 million miles.
The fatality rate is much lower on highways: according to Wikipedia [2], freeways have 3.38 fatalities per billion km (0.54 per 100 million miles, if I managed the conversion).
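A quick sanity check of that conversion, using the standard kilometers-per-mile factor:

```python
KM_PER_MILE = 1.609344

def per_1e9_km_to_per_1e8_miles(rate: float) -> float:
    # (rate / 1e9 km) * (1.609344 km/mile) gives a per-mile rate;
    # scaling to 100 million miles multiplies by 1e8.
    return rate * KM_PER_MILE * 1e8 / 1e9

print(per_1e9_km_to_per_1e8_miles(3.38))  # ~0.544, so 0.54 checks out
```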
I mean, if Tesla is comparing their luxury car's death rate to an average that includes motorcycles, which have 10x-50x the fatality rate of passenger cars, then, yes, it's dangerous.
I completely agree; Tesla’s marketing modus operandi is to fudge statistics. In my view, they are consistently on the wrong side of the line between marketing puffery and misrepresentation.
I do take issue, however, with your claim that Teslas are “well constructed” - they are not. They are poorly designed, poorly constructed vehicles that feel and look cheap. Test one side by side with, say, a BMW 3-Series or a Volkswagen Golf and the difference in production quality will be palpably obvious.
I saw the argument that a traditional motor’s noise level hides rattling sounds and similar subtle car defects, which a fully-electric vehicle’s silence does not cover.
"The driver had about five seconds and 150 meters of unobstructed view of the concrete divider with the crushed crash attenuator, but the vehicle logs show that no action was taken."
This seems to be trying to suggest that the driver had five seconds of clear driving-toward-death time to correct for the car's actions, without explicitly making such a ridiculous statement (while also throwing in the crushed crash attenuator). If the car makes a quick change in direction due to an autopilot error, a driver at speed would have very little time to make an effective correction.
Depending on the system behavior, it could be akin to having a passenger reach over and yank the wheel. I'd honestly rather Tesla just said "ongoing investigation" instead of being so transparently evasive.
They also said in their first post they would respect the family by not updating until the investigation was over. The investigation isn't over but here they are trying to manage their image with confusing and misleading partial information.
Yup, this also means that Tesla's radar and camera systems had 5 seconds to realize it was driving straight into an unmoving concrete barrier, and did nothing...
What's really odd is, in my Subaru, which has EyeSight, if I drove straight toward that barrier the brakes would engage. EyeSight uses two visual cameras, about two feet apart, and uses the parallax between them to judge depth/distance (the same way human eyesight judges depth).
So even if Tesla's autopilot steered the vehicle towards that divider, shouldn't the auto-braking system have engaged to avert the accident? Tesla vehicles have visual image cameras as well as radar, so the information density is even higher than in Subaru's system.
I guess what I am getting at is: is auto-braking disabled while autopilot is on? Wouldn't leaving auto-braking enabled (particularly using the visual cameras) offer a second layer of safety if autopilot made an error?
I raised the same issue when a Tesla with autopilot on drove straight into the side of a truck and the driver was decapitated. The discussion was all about "well radar couldn't distinguish it from road signs!" while ignoring that a Tesla has visual (optical) cameras front and center.
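For the curious, the stereo-parallax ranging principle mentioned above fits in a few lines. The focal length and disparity here are assumed, illustrative values, not Subaru's actual spec:

```python
# Depth from stereo disparity: Z = f * B / d. Two cameras a baseline B apart
# see the same object shifted by d pixels; nearer objects shift more.
FOCAL_PX = 1200.0    # focal length in pixels (assumed)
BASELINE_M = 0.6     # roughly the "two feet apart" mentioned above

def depth_m(disparity_px: float) -> float:
    return FOCAL_PX * BASELINE_M / disparity_px

print(depth_m(4.8))   # 150.0 m -- even at that range the disparity is nonzero,
                      # so a visible barrier can in principle be ranged
```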
> In the moments before the collision, which occurred at 9:27 a.m. on Friday, March 23rd, Autopilot was engaged with the adaptive cruise control follow-distance set to minimum.
Would've loved to know the exact moments we are talking about here. Is it 5 seconds or 5 minutes?
> The driver had received several visual and one audible hands-on warning earlier in the drive and the driver’s hands were not detected on the wheel for six seconds prior to the collision.
What does this mean? Did Tesla want to hand control back to the driver? Or was it just a normal no-hands warning?
> The driver had about five seconds and 150 meters of unobstructed view of the concrete divider with the crushed crash attenuator, but the vehicle logs show that no action was taken.
Again, it's hard to know whether Tesla wanted to give up control or not. I hope they aren't just saying that because the driver's hands were off the steering wheel when the crash occurred, it is his fault for not obeying the agreement.
> I hope they aren't just saying that because the driver's hands were off the steering wheel when the crash occurred, it is his fault for not obeying the agreement.
That's exactly Tesla's official stance - if you crash you're SOL and at fault, Autopilot or Auto Emergency Braking be damned.
https://www.cnbc.com/2018/01/31/apples-steve-wozniak-doesnt-...
"Man you have got to be ready — it makes mistakes, it loses track of the lane lines. You have to be on your toes all the time," says Wozniak. "All Tesla did is say, 'It is beta so we are not responsible. It doesn't necessarily work, so you have to be in control.'
As a Tesla owner, this is not accurate wording. The car will very quickly decelerate to about 5 mph if you have the crawl feature on.
I need to make a post, as a long-time Tesla owner and very frequent autopilot user, because I think I know exactly what happened. Unfortunately I need to go to bed; I'm already up later than I should be for tomorrow. A lot of the information and comments I’ve seen about how it works and how autopilot is dangerous, etc., are quite wrong. I think it’d be important for people to understand how it works from people who use it, and get their opinions first, before making statements about AP in general.
For the record, I love using AP and will continue to do so. But it is no silver bullet, and 99.5% of Tesla owners know this and how to correctly use it.
Yeah, when driving an emergency vehicle, in my case an ambulance and fire engine, you take a course that used to be called Emergency Vehicle Accident Prevention.
It's now called Emergency Vehicle -Incident- Prevention.
> In the US, there is one automotive fatality every 86 million miles across all vehicles from all manufacturers. For Tesla, there is one fatality, including known pedestrian fatalities, every 320 million miles in vehicles equipped with Autopilot hardware. If you are driving a Tesla equipped with Autopilot hardware, you are 3.7 times less likely to be involved in a fatal accident.
"Equipped with Autopilot" is different than running with Autopilot. As far as I know, all model S fatal accidents except one was with Autopilot engaged. Making Autopilot far more dangerous than manual driving from a statistical standpoint.
In both cases the driver was not supposed to let the car drive itself unsupervised, but only in one case was the driver paid to do the supervising.
This is how I expect this kind of thing to look: https://i.imgur.com/dfZehmd.gif
You can also read here that Volvo's system faces the same problem: https://www.wired.com/story/tesla-autopilot-why-crash-radar/
The crash attenuator was 'used up' from a previous accident and might have prevented a death had it been in working condition.
I see this all the time where I live. Crushed attenuators or disfigured guard rails go months and months without repair.
"However, it seems the driver ignored the vehicle’s warnings to take back control" [1]
Followed by the quote from Tesla that at some point in the history of the ride a warning of some sort was given.
[1] https://techcrunch.com/2018/03/30/tesla-says-fatal-crash-inv...
> What they are saying is that while he was driving there was an earlier point at which the car did not crash, where the car gave a visual and audible clues. And you have to ask? Why the hell is that relevant?
I see it being relevant that the driver had his follow distance set to the lowest and that several times in the drive prior to the accident he had to be prompted to maintain his hands on the wheel. This speaks to a driver that wasn't paying attention like they were supposed to be doing. That would increase the likelihood of him missing an upcoming hazard and responding in time.
Regardless of that, if he had reported and issue in that area to Tesla, why the hell was he not driving it himself at the time? I know that I personally value my life enough that I wouldn't trust AP if it had been veering into the wall repeatedly in one area.
Yes. But as we all know autopilot is not the same as self-driving.
>> And no, you can't get out of culpability by claiming statistical superiority.
In fact no one has found Tesla culpable for the accident. You can however defend the safety of your system by referring to statistics. (In fact that is the ONLY way to measure safety level) And your analogy does not hold. It it more like a parachute company saying their parachutes are safer than other brands because it fails less often than the other brands. Which brings me to my final point. Even if it turns out Tesla was culpable and negligent in this accident, it would not be a complete judgment if we leave out the inherent risks in car manufacturing and the incidents attributed to other companies.
But Tesla is shoveling in the bullshit pretty early on. Besides the fuzzy and deliberately vague stats (why is the mileage attributed to Teslas "equipped with Autopilot hardware"?), this graf is just despicable:
> Tesla Autopilot does not prevent all accidents – such a standard would be impossible – but it makes them much less likely to occur. It unequivocally makes the world safer for the vehicle occupants, pedestrians and cyclists.
"unequivocally" has an actual meaning: leaving no doubt. And yet it's used following the paragraph that pretends Tesla's and all U.S. vehicles are directly comparable. I know it sounds silly to focus on that word but someone in Tesla PR thought it needed to be used with the pile of crap numbers. Such blatant and pointless deception -- "significantly" would work just as well for that inane sentence -- feels like a strong a signal to doubt the rest of the info given, as if the evidence weren't already so obviously dubious.
The vehicle notifies the driver that it's having difficulty operating in the driver-elected Autopilot mode given the conditions. The driver elects to continue using the struggling mode despite being warned.
I fail to understand how "It isn't" relevant that the driver chose not to take manual control when warned that conditions were not ideal.
Every MBA student needs this tattoo'd on the back of their hand in order to graduate from now on.
I don't even read these posts from Tesla anymore because I know each one of them is just justification for how Tesla did nothing wrong, and it's the fault of the stupid human driver for using its Autopilot "wrong" and dying in the process. Meanwhile, Tesla is laughing all the way to the bank selling the usage of Autopilot as one of the primary features of its cars.
It almost reminds me of how Coke markets its sugar-filled soda "Drink as much as you can at every meal and at any event...but you know, in moderation (because that's what the government makes us say with its dietary guidelines)".
Tesla is kind of like that. It tells people to use Autopilot because it's such a great experience and whatnot, but then stuffs its terms and conditions with language like "but don't you dare actually lay back and enjoy the Autopilot driving. You need to watch the car like a hawk, and if you happen to ignore any of our warnings, well, it's your stupid-ass fault for ignoring them, even though we wrote everything there in 11-point font." It's very irresponsible for any company to do stuff like this.
I don't know what the exact process is now, but I wish the authorities would be able to get access to un-tampered logs of the accident after the fact from the self-driving car companies.
I don't care if it's a car company that has "Elon Musk's reputation" (who I like) or Uber's reputation. These accidents, especially the deadly ones, should be investigated thoroughly. No company will ever have the incentive to show proof that leads to itself becoming guilty. It's the job of the authorities to independently (and hopefully impartially) verify those facts.
I'm a big fan of Tesla and its electric vehicles, big fan of Tesla Energy, big fan of SpaceX. But I've always criticized them for Autopilot, because it was obvious to me from day one that they're being very irresponsible with it and putting the blame on humans. They marketed it as a self-driving feature for years (they don't do it as much now, but they still use the same name) while the system is far from being anywhere close to complete self-driving, thus putting anyone who uses it in danger of dying.
- Cruise control maintains speed, will automatically decelerate if a car in front slows down, speed back up if they speed or go away, etc.
- Collision avoidance: the car will automatically emergency brake if it believes a collision is imminent.
- Lanekeeping: the car will stay in the lane without driver attention, although if it doesn't detect your hands on the wheel with enough frequency, the system will disengage.
- The car can park itself.
A quick look at Wikipedia tells me it can also change lanes and be "summoned."
Most of these features (with cruise control, collision avoidance, and lane-keeping as the core featureset) have been in wide deployment among most car manufacturers since at least 2017, down to even cheap economy cars that people are buying right now without getting on waiting lists or anything of the kind. You can buy a car with this core featureset right now for around $18k.
Not only that, people seem to dramatically overestimate what kind of auto-pilot Teslas have. Speaking with my doctor about it, he assured me that "you just tell the car where to go, and it goes there." How did Teslas get so hyped up in the common man's imagination?
It's very irresponsible of regulators to allow them to do that, and of us to vote for politicians who are not against that kind of behavior. Big companies rarely give a shit about killing people in the process of making money, as long as they are not punished for it. (And we, by the way, mostly don't give a shit when the people being killed live in other countries, even when the process is far more direct than the first-world problem that interests us here.)

People who organized those kinds of criminal behaviors used to be held responsible for the consequences, but that is rarer now, and the laws in various countries have actually been changed so that people managing companies are far less accountable than before for the illegal and otherwise bad things they order. As for companies being held accountable, it is usually via punitive fines that are orders of magnitude smaller than the extra profit they made doing their shit, so why would they stop? The risk of the CEO being sent to jail would stop them. Ridiculously small fines against the company, in the astonishingly rare cases where they are even actually fined, will not.
Uber made sure to point out that the victim of their incident was homeless. Tesla is pointing out how the driver received unrelated cues earlier in the journey. None of this information is relevant. They're trying to bamboozle the reader in an effort to improve their image at the expense of victims who can't defend themselves.
I don’t understand why it is so impossible for these companies to act humbly and with a sense of dignity around all this. I don’t expect them to accept responsibility if indeed their technology was not to blame, but frankly that isn’t for them to decide. Until the authorities have done their jobs, why not show remorse, regardless of culpability, as any decent human would?
> Uber made sure to point out that the victim of their incident was homeless. Tesla is pointing out how the driver received unrelated cues earlier in the journey. None of this information is relevant.
I don't understand how you can equate those first two lines. Uber's observation is clearly irrelevant, but the fact that the Tesla driver received multiple "get your hands back on the wheel" notifications, as close as six seconds before the accident seems very relevant to me.
But that isn't what their statement says. It says the victim had his hands off the wheel for six seconds before the crash, and that he received hands-on warnings "earlier in the drive." It does not say that during those crucial six seconds he was being warned. Nor does it explain the fact the car plowed into a barrier.
They're right when they say we don't speak of the accidents that didn't happen, and I bet there's a ton of them.
As someone who's been in multiple car crashes as a passenger, I really no longer want to be at the mercy of human drivers.
Brief reminder of the same discussion, in a different context.
https://www.youtube.com/watch?v=bzD4tIvPHwE
Yet no one, and I really mean no one, complains about computer-assisted landings and takeoffs. I'm not sure why. Lots of passengers even sleep through them.
I guess we've come to realise that the man-machine symbiosis works well for flight. But it might take time for this to get ingrained into our culture when it comes to driving.
What exactly am I looking at? The video itself says it's a computer-piloted takeoff, the YouTube title says it's a computer-piloted landing, and the top-voted comment says it's a human-piloted fly-by.
edit: it appears to be this flight: https://en.m.wikipedia.org/wiki/Air_France_Flight_296
(That would be an inexcusable and cynical deflection.)
Because humility requires you to admit wrongdoing, which from a legal standpoint is not advisable in a public statement.
Deleted Comment
Ultimately companies don't have feelings, their employees do. PR statements and the words chosen (unless written by an individual like the CEO) don't somehow make a company into a person and don't reflect the feelings of their employees: they are a crafted tool to create a certain outcome by instilling thoughts in the mind of the reader and/or providing legal cover.
I would venture a guess this is an outcome of the "sue for everything" legal environment in the US. Any statement that could be construed as anything other than "we did nothing wrong" could be seized on by shareholders suing the company for securities fraud and demanding class action status.
What I think they fail to address, especially in this case, is that the autopilot did something a human driver who was paying attention would never do. Autopilot does a great job of saving people from things even a wary driver would miss, much less a negligent one, but the fatal accidents that show up in the statistics are not from people who are fully watching the road missing the fact that there's a concrete barrier with yellow safety markings directly in their path and hitting it head-on for no good reason (as opposed to, say, evasive action because of another driver, or a stupid last-minute "oh crap, that's my exit" maneuver).
I want autopilot to succeed, and I want Tesla (and Musk) to succeed, and for the sake of their public image they have to realize that this isn't an average accident statistic, a lapse in attention or evasive maneuvering. It's a car that seemingly plowed right into a concrete barrier while still under complete control. That's not a mistake a healthy human will make.
Then how did the crushed barrier get crushed before the Tesla hit it? Clearly, the stretch of road is unsafe enough to trick human drivers (and, clearly, Tesla should improve).
The most likely scenarios I can think of would be not paying attention and drifting into the barrier, attempting to avoid a car merging into the lane (and not paying enough attention), or being struck by another car and being forced into the barrier.
I'm sure that out of the 1.25 million annual automobile-related deaths, plenty of drivers were paying attention and still did stupid things similar to this accident.
Humans do exceedingly stupid things all the time because they stop paying attention, even momentarily (or subconsciously).
We put big lights on the back of cars that light up when they brake. And yet a driver looking directly at a huge object with two lights, one that rapidly grows larger right in front of them, does not always avoid a collision. Or even a chain of collisions. I don't get on the road much, and yet even I've seen a ton of accidents that are baffling and can only have been the result of a driver not paying attention for a bit.
I'm reminded of how in quite a few places removing signs and lights actually improved safety because it forced drivers to stay aware instead of 'driving on autopilot', so to speak.
I think that's the real issue here. The more we outsource our attention to a machine, the more important it is that said machine does MUCH better than we humans do. Especially if a mistake can be deadly.
But I wouldn't be surprised if, indeed, this accident could technically have happened just as easily in a non-autopilot car where the driver had a little 'micro-sleep', got distracted by something in his field of view, or mistakenly thought he was in a lane and didn't notice the (let's be fair) ridiculously bad markers that were the only way to tell that part of the 'gray' stuff in front of him was in fact a wall of concrete.
I mean, just look at the image: https://imgur.com/a/iMY1x . Half of what makes the barrier stand out is the shadow!
All that said I might sound more argumentative than I am. I do agree with most of your comment.
"this isn't an average accident statistic, a lapse in attention or evasive maneuvering. "
"That's not a mistake a healthy human will make."
Almost every driver thinks they're significantly better than average. Few are.
If a human driver's lapse of attention causes a similar crash, how much less of a tragedy is it just because we can less ambiguously blame the victim?
My opinion is that the statistics compare just fine.
Actually, about half of drivers are better than average.
true. here’s the location (post #62)
https://teslamotorsclub.com/tmc/threads/model-x-crash-on-us-...
but the average person in the US anyway is not a good driver. i very often see people get surprised at the lane ending at that exact spot. they should put a rumble strip leading up to it.
So, my question is: is it fair to compare AP to a human driver paying attention, or is it more fair to compare it to an average driver?
I mean that just for the sake of argument. In this specific case, the NTSB or someone should ban AP. It is so obvious what's about to happen there; an AP should do what a driver should do, which is take the ramp whether it's a wrong turn or not. So many a-holes try to squeeze in last second (oops) when they should just take the damn exit or miss the damn exit as the case may be. What AP did here is what a poor and panicked driver would do, and that's just not acceptable.
So, I thought it wasn't all that clever in the first place to try and marry the risks of getting electric cars into the market with the risks of telling the extremely wealthy that they didn't need to hold the damn steering wheel.
This statement is meaningless circular logic. You can handwave away any incident with a human driver by saying he wasn't "healthy".
A human pilot has intentionally driven an airplane full of passengers into the ground with full control because he wanted to kill himself. The airline believed he was "healthy".
Tesla's statement is not clear on this at all.
Tesla demonstrates their usual abuse of statistics. 1.25 fatalities per 100 million miles is the average across all types of terrain, conditions, and cars.
The death rate on rural roads is much higher than in urban areas. The death rate on highways is much lower than average. The death rate with new cars is much lower than average.
The autopilot will not engage in the most dangerous conditions. This alone will improve the fatality rate, even if the driver does all the work.
Tesla cars are modern luxury cars. They appear to have done a great job building a well-constructed, safe car. This does not mean their autopilot is not dangerous.
It also doesn't mean that it is dangerous. That bag of Doritos you ate might also be dangerous. Let's use facts and not vague, worrisome complaints. So if you want to talk about the higher danger of rural roads, please give a number, and then give a number for Tesla, or estimate one. Don't just say "urgh".
Just as an example, click on California (it is mostly consistent all over) on the map [1]. California has a lower average fatality rate of 1.01 per 100 million miles.
Rural roads have 2.62 fatalities per 100 million miles. Urban roads have 0.70 fatalities per 100 million miles.
The fatality rate is much lower on highways: according to Wikipedia [2], freeways have 3.38 fatalities per billion km (0.54 per 100 million miles, if I managed the conversion correctly).
[1] https://cdan.nhtsa.gov/stsi.htm
[2] https://en.wikipedia.org/wiki/Transportation_safety_in_the_U...
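To make the baseline point concrete, here's a tiny back-of-envelope sketch. The per-road-type rates are the figures quoted above; the mileage mixes are pure assumptions I invented for illustration, since Tesla doesn't publish them:

    # Fatality rates per 100M miles quoted above (NHTSA / Wikipedia).
    # Freeway conversion check: 3.38 per 1e9 km * 1.609e8 km / 1e9 ~= 0.54.
    rate = {"freeway": 0.54, "urban": 0.70, "rural": 2.62}

    # Hypothetical mileage mixes -- invented numbers, purely illustrative.
    all_driving = {"freeway": 0.30, "urban": 0.45, "rural": 0.25}
    autopilot_miles = {"freeway": 0.90, "urban": 0.10, "rural": 0.00}

    baseline_all = sum(rate[k] * all_driving[k] for k in rate)
    baseline_ap = sum(rate[k] * autopilot_miles[k] for k in rate)

    print(f"all-driving baseline: {baseline_all:.2f} per 100M miles")   # ~1.13
    print(f"Autopilot-mix baseline: {baseline_ap:.2f} per 100M miles")  # ~0.56

Under these assumptions, comparing Autopilot miles against the ~1.25 all-roads figure flatters it by roughly a factor of two, before even touching vehicle age or driver demographics.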
Remember, there are several very normal car models that have had ZERO deaths: https://www.nbcnews.com/business/autos/record-9-models-have-...
These autopilot cars are death traps.
I do take issue, however, with your claim that Teslas are “well constructed” - they are not. They are poorly designed, poorly constructed vehicles that feel and look cheap. Test one side by side with, say, a BMW 3-Series or a Volkswagen Golf and the difference in production quality will be palpably obvious.
(I haven’t seen scientific comparisons, though.)
Thanks, I never knew this term before. Seems like a dark humour gem - equally funny and terrifying.
This seems to be trying to suggest that the driver had 6 seconds of clear driving-toward-death time to correct for the car's actions without explicitly making such a ridiculous statement (while also throwing in the crushed crash attenuator). If the car makes a quick change in direction due to an autopilot error, a driver at speed would have very little time to make an effective correction.
Depending on the system behavior, it could be akin to having a passenger reach over and yank the wheel. I'd honestly rather Tesla just said "ongoing investigation" instead of being so transparently evasive.
So even if Tesla's autopilot steered the vehicle towards that divider, shouldn't the auto-braking system have engaged to avert the accident? Tesla vehicles have visual cameras as well as radar, so the information density is even higher than in Subaru's camera-based system.
I guess what I am getting at is: is auto-braking disabled while autopilot is on? Wouldn't leaving auto-braking enabled (particularly using the visual cameras) offer a second layer of safety if autopilot made an error?
I raised the same issue when a Tesla with autopilot on drove straight into the side of a truck and the driver was decapitated. The discussion was all about "well radar couldn't distinguish it from road signs!" while ignoring that a Tesla has visual (optical) cameras front and center.
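Conceptually, an independent emergency-braking layer is simple: watch time-to-collision and fire no matter what the lane-keeping planner commands. This is just a hypothetical sketch; the names, threshold, and architecture are all invented here, not Tesla's actual stack:

    from dataclasses import dataclass

    # Hypothetical AEB monitor running independently of the planner.
    @dataclass
    class Track:
        distance_m: float         # range to object along our path
        closing_speed_ms: float   # positive = we are approaching it

    TTC_BRAKE_THRESHOLD_S = 2.0   # invented threshold

    def aeb_override(tracks: list[Track]) -> bool:
        """Return True if emergency braking should fire, regardless of
        whatever steering the lane-keeping planner has commanded."""
        for t in tracks:
            if t.closing_speed_ms <= 0:
                continue  # not closing on this object
            ttc = t.distance_m / t.closing_speed_ms
            if ttc < TTC_BRAKE_THRESHOLD_S:
                return True
        return False

    # A stationary barrier 40 m ahead at ~30 m/s closing speed gives a
    # TTC of ~1.3 s, so this layer would fire even if the planner steers on.
    print(aeb_override([Track(distance_m=40.0, closing_speed_ms=30.0)]))  # True

Whether production systems actually keep the two layers that independent, or whether sensor fusion suppresses stationary returns at highway speed (as the manual quoted upthread suggests), is exactly the open question here.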
Would've loved to know the exact moments we are talking about here. Is it 5 seconds or 5 minutes?
> The driver had received several visual and one audible hands-on warning earlier in the drive and the driver’s hands were not detected on the wheel for six seconds prior to the collision.
What does this mean? Did Tesla want to give over control to driver? Or just normal no hands?
> The driver had about five seconds and 150 meters of unobstructed view of the concrete divider with the crushed crash attenuator, but the vehicle logs show that no action was taken.
Again, it's hard to know if Tesla wanted to give up control or not. I hope they aren't just saying that because the driver's hands were off the steering wheel when the crash occurred, it's his fault for not obeying the agreement.
That's exactly Tesla's official stance - if you crash you're SOL and at fault, Autopilot or Auto Emergency Braking be damned.
https://www.cnbc.com/2018/01/31/apples-steve-wozniak-doesnt-... "Man you have got to be ready — it makes mistakes, it loses track of the lane lines. You have to be on your toes all the time," says Wozniak. "All Tesla did is say, 'It is beta so we are not responsible. It doesn't necessarily work, so you have to be in control.'
"Well you that is kinda a cheap way out of it."
I need to make a post, as a long-time Tesla owner and very frequent autopilot user, because I think I know exactly what happened. Unfortunately I need to go to bed; I'm already up later than I should be for tomorrow. A lot of the information and comments I've seen about how it works and how autopilot is dangerous, etc., are quite wrong. I think it'd be important for people to understand how it works from people who use it, and to get their opinions first, before making statements about AP in general.
For the record, I love using AP and will continue to do so. But it is no silver bullet, and 99.5% of Tesla owners know this and how to correctly use it.
In this case, it would be more like "drive in a straight line and... collide to a stop."
Perhaps braking to a stop at a reasonable rate would be the right thing to do, given that it surely should've detected it was going to hit something?
http://www.roadpeace.org/take-action/crash-not-accident/
https://www.crashnotaccident.com
It's now called Emergency Vehicle -Incident- Prevention.
"Equipped with Autopilot" is different than running with Autopilot. As far as I know, all model S fatal accidents except one was with Autopilot engaged. Making Autopilot far more dangerous than manual driving from a statistical standpoint.