The actual, non-editorialized data, along with the two actual NHTSA reports (one for level 2 driver-assistance systems, one for automated driving systems in levels 3-5), are here:
* Overview: https://www.nhtsa.gov/press-releases/initial-data-release-ad...
* Data: https://www.nhtsa.gov/node/103486
* Report on level 2 systems: https://www.nhtsa.gov/document/summary-report-standing-gener...
* Report on level 3-5 systems: https://www.nhtsa.gov/document/summary-report-standing-gener...
Going forward, the NHTSA will release data updates monthly. That's exactly what we need to judge progress with these technologies: data. Bravo!
Everyone: let's at least skim the data and read the reports' summary conclusions before we start sharing our opinions.
Can we trust Tesla (and other manufacturers) after they did this:
"Tesla eventually agreed to split the cost of the repair, but before they would do that, the owner had to sign away his right to discuss the defect, which could preclude him from reporting it to the National Highway Safety and Transportation Administration (NHTSA)."
No. They are offering bribes. It's anti-competitive behavior in that it withholds information consumers need to make free and informed decisions, and it's fraud: Tesla gains financially from the deception. If offering such a bribe is not already illegal, it should be made illegal; if it is illegal, it should be prosecuted. By all means start with a per-bribe fine at least equivalent to the profit on a single unit, or else the fine is just a cost of doing business. Beyond that, treat it as a crime.
Here's some actual data I just pulled from the report, showing the number of incidents while a level 3-5 system was in use, sorted in descending order and then alphabetically by reporting entity. Despite having the greatest number of FSD users, Tesla has only a single reported accident with FSD engaged. Other level 3-5 systems with much smaller installed bases are doing far worse. The data for level 2 is further below.
# of incidents, levels 3-5:
Waymo 62
Transdev 34
Cruise 23
GM 16
Zoox 12
Argo 10
Ford 7
May Mobility 3
Mercedes Benz 3
Pony.ai 3
Easymile 2
Toyota 2
WeRide 2
Apple 1
Beep 1
Chrysler 1
Hyundai 1
Local Motors 1
Navistar 1
Navya 1
Porsche 1
Robert Bosch 1
Robotic Research 1
Tesla 1
TuSimple 1
Here's the level 2 data -- number of incidents, again sorted in descending order and then alphabetically by reporting entity. The biggest caveat with these figures is that the NHTSA does NOT know how the rate of adoption & usage of level 2 technologies varies by reporting entity. It's likely that Tesla and Honda have the greatest rate of adoption & usage of level 2 technologies, but we don't really know:
# of incidents, level 2:
Tesla 273 <-- figure quoted by the OP
Honda 90
Subaru 10
Ford 5
Toyota 4
BMW 3
APTIV 1
Hyundai 1
Lucid 1
Porsche 1
VW 1
EDIT: After a quick look at the report and the data, my take is that it's way too early to reach any conclusion other than that non-Tesla level 3-5 systems appear to have more accidents despite being used in far fewer vehicles. Shame on the Washington Post for publishing such an unjustifiably alarmist piece!
To drive home the point about how unsafe Tesla's Autopilot is: Nissan has sold more than 560,000 vehicles with an L2 system...and had zero reportable accidents, which makes Tesla infinitely more dangerous than a Nissan.
Even viewing the data in the light most charitable for Tesla, Autopilot is 3x more dangerous than the next most dangerous L2 system, and 54x more dangerous than the companies that sell the most vehicles in the world.
Also, FSD is not an L3 system. It's an L2 system, so Tesla should not have any data in the L3-L5 section.
EDIT: Ars notes that Tesla's 273 crashes cover only the past year (the period since the NHTSA began requiring automakers to report crashes), while the numbers for all other automakers are totals since they started selling L2 systems. This makes the Tesla data look even worse.
I'm sure Tesla are testing L3 systems, but I didn't think anything currently offered to the public provided more than L2? Even so-called "FSD" is still only an L2 system and requires the driver to maintain attention and be prepared to take control with no notice.
Thank you for linking to all of the data. Consumers have different priorities than society when it comes to these systems: "convenience" vs. "safety", respectively.
If these systems are not improving safety above baseline (the rate of accidents for non-assisted driving), it would be interesting to see how the difference varies with each system's level of assistance. Then, within those levels, are there certain platforms that pull the numbers in different directions (e.g., is a Tesla more or less safe than other DAS of the same level)?
If these systems are not resulting in a reduction of accidents, should we rethink what kind of 'driver assistance' should systems provide?
If higher level systems are less safe, should the industry hold off on pushing those systems to market until they're developed further?
If a specific platform is a significant source of the reported accidents, then what action should be taken to address it? (Disable those specific DAS already in the market?)
Great questions. I don't think anyone has good (defensible) answers for them yet. This new effort by the NHTSA to compare ADAS and DAS systems apples-to-apples, in combination with more traditional efforts to collect overall data per mile driven (for both machine and human drivers), should eventually give us the answers :-)
Considering the newspaper owned by the billionaire runs stories criticizing said billionaire's company's anti-union efforts [1], mayhap there's editorial independence?
[1] https://www.washingtonpost.com/technology/2022/06/13/amazon-...
You replied to someone that posted NHTSA links. This only makes you sound more whiny than if you had just posted this as a reply to the article itself.
This one from the AP [0] says Tesla's figure and its crash rate per 1,000 vehicles were substantially higher than the corresponding numbers for other automakers that provided such data. Tesla, though, is the only one that reports in near-real time: "Other automakers, by contrast, must wait for reports to arrive from the field and sometimes don't learn about crashes for months." There are quite a few more details in there, and the comparisons make Tesla look worse than the others, though I'm not sure by how much once you account for all the differences in reporting.
(This might also be in the WaPo article; I couldn't read it.)
[0] https://apnews.com/article/technology-business-5e6c354622582...
One of the most contentious issues is that Tesla is able to provide a figure in terms of crashes per miles driven on Autopilot whereas other automakers can't do the same.
So they report crashes per 1000 vehicles, but that doesn't take into consideration how much people are using Autopilot vs how much people are using GM SuperCruise. If people with GM cars are hardly ever using SuperCruise but people with Teslas use Autopilot often, that's going to really skew the figure.
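To make that skew concrete, here's a minimal sketch; every number in it is invented for illustration (the NHTSA data gives us nothing like per-manufacturer engagement miles):

    # Hypothetical fleets; "system miles per vehicle" is exactly the
    # engagement figure the NHTSA data does not include.
    fleets = {
        "A (heavy system use)": {"vehicles": 1_000_000, "crashes": 270, "sys_miles_per_vehicle": 4_000},
        "B (rare system use)":  {"vehicles": 100_000,   "crashes": 5,   "sys_miles_per_vehicle": 100},
    }

    for name, f in fleets.items():
        per_1k_vehicles = 1_000 * f["crashes"] / f["vehicles"]
        per_million_miles = 1e6 * f["crashes"] / (f["vehicles"] * f["sys_miles_per_vehicle"])
        print(f"{name}: {per_1k_vehicles:.2f} crashes/1k vehicles, "
              f"{per_million_miles:.3f} crashes/1M system-miles")

With these made-up inputs, fleet A looks about 5x worse per vehicle but roughly 7x better per system-mile; without engagement data there's no way to know which ranking reflects reality.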
Great context. Impossible to know if this is good or bad without a point of comparison.
I can't help but feel discouraged about the prospect of other automakers improving their reporting. It's all downside: you get articles like this, which inevitably build into narratives, true or not, that your cars are unsafe. Meanwhile, why bother improving when your reporting deficiency reflects poorly on your competitor?
Faster reporting shouldn't affect long-run averages. If the delay is x months, then under simplified assumptions about 100*x/(n+x)% of crashes go unreported at month n of delayed reporting.
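A quick sanity check of that formula under my own simplified assumptions (constant crash rate, a fixed x-month delay, and n counted from the month the first delayed reports arrive):

    # Crashes occur at a constant monthly rate and each is reported x
    # months late, so after n months of received reports, driving has
    # actually been going on for n + x months.
    def unreported_fraction(n, x, rate=10.0):
        total = rate * (n + x)   # crashes that have actually happened
        reported = rate * n      # only crashes at least x months old are in
        return (total - reported) / total

    x = 3
    for n in (1, 6, 12, 36):
        print(f"n={n:2d}: simulated {100 * unreported_fraction(n, x):.1f}%, "
              f"formula {100 * x / (n + x):.1f}%")

As n grows, the unreported share shrinks toward zero, which is the point: the delay biases early snapshots, not long-run averages.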
The challenge is that the availability of these systems is increasing exponentially at the moment. Long term it matters less, as the long term reporting can include mileage. Short term, the lags introduce a lot of bias.
I'm thinking largely of how quickly Super Cruise and Blue Cruise are expanding here. Other systems may not be expanding as quickly.
It's also worth noting that if Tesla is automatically collecting these crash stats from vehicle telemetry while other manufacturers are collecting them from customer reports or vehicle service calls, Tesla's numbers are probably inflated by a factor of three relative to the others (or the other manufacturers' numbers are deflated by three, however one wants to put it).
The first thing Waymo learned starting out on public roads from observing their own telemetry was that NHTSA numbers (based on reported crashes and service calls) are three times lower than actual collisions because there's huge incentive to not report a collision (insurance premiums). Comparing numbers from differing data-collection approaches is an apples-to-oranges comparison.
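If that 3x capture gap is real, the minimal correction looks like this; the counts below are placeholders standing in for a telemetry-counted fleet and a report-counted fleet, not actual NHTSA figures:

    # Put telemetry-counted and report-counted crashes on the same footing
    # before comparing them; the 3x factor is the one claimed above.
    CAPTURE_FACTOR = 3.0

    telemetry_counted = 273   # logged automatically via telemetry (placeholder)
    report_counted = 90       # gathered from customer/service reports (placeholder)

    adjusted = report_counted * CAPTURE_FACTOR
    print(f"comparable counts: {telemetry_counted} vs ~{adjusted:.0f}")

With these placeholders the gap narrows from 3x to roughly even, which is the entire point of the adjustment.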
I care about nine numbers, in the order of the groups given.
Group A
1. How many deaths and serious injuries per standard distance does AP have?
2. How many deaths and serious injuries per standard distance do other self-driving cars have?
3. How many deaths and serious injuries per standard distance do human drivers have?
Group B
1. What is the financial impact of crashes per distance driven with AP?
2. What is the financial impact of crashes per distance driven with other self-driving cars?
3. What is the financial impact of crashes per distance driven with human drivers?
Group C
1. How many crashes per distance does AP have?
2. How many crashes per distance do other self-driving systems have?
3. How many crashes per distance do human drivers have?
Anything not comparing these numbers one to the other is incomplete and potentially propagandistic. Tell me about the human impact. Tell me about the monetary impact. Tell me about the number of total crashes.
Without all three numbers in each group and all three groups, we can't tell if 273 is a high number or a low one. We can't tell how it's impacted by total distance driven with the systems. We can't tell if the accidents are more or less serious. We can't tell if these systems are dangerous convenience features or if they're actually improving on numbers from human drivers. We need real data, and real comparisons.
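Computing all nine numbers is trivial once the inputs exist; it's the inputs nobody publishes. A sketch with placeholder values only:

    from dataclasses import dataclass

    @dataclass
    class Cohort:
        miles: float            # total distance driven
        crashes: float
        deaths_serious: float   # deaths plus serious injuries
        crash_cost_usd: float   # summed financial impact of crashes

    # Every value below is made up, purely to show the shape of the comparison.
    cohorts = {
        "AP":                 Cohort(5e9, 273, 15, 2.5e7),
        "other self-driving": Cohort(1e8, 130, 4, 9e6),
        "human drivers":      Cohort(3e12, 6e6, 2e6, 4e11),
    }

    for name, c in cohorts.items():
        per_100m = lambda count: 100e6 * count / c.miles
        print(f"{name:18s} A: {per_100m(c.deaths_serious):8.2f} deaths+serious/100M mi  "
              f"B: ${c.crash_cost_usd / c.miles:.4f}/mi  "
              f"C: {per_100m(c.crashes):9.2f} crashes/100M mi")

None of the arithmetic is hard; the miles, severity, and cost columns are what's missing from the public data.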
That's a very clean way of dividing up the data, I'd be curious about those figures too.
I was going to question the relevance of comparing tech-caused accidents with human-caused accidents since, despite the growing evidence that tech is the better driver under normal circumstances, we have this cognitive bias around lower perceived safety when someone/something else is driving. People just want to feel in control, even though they're more likely to crash, and my guess is that this will slow down adoption of self-driving technology more than necessary. But then I realized that evidence like this would probably be the best argument no matter what, so in the end, it makes even more sense to compare those numbers!
Humans have a fatal crash roughly once per hundred million miles. Comparing self-driving is complicated by the fact that humans can intervene; human drivers have no such backup.
So a fair comparison may require compensating for the fact that automated drivers have a human backup preventing them from otherwise killing more people.
lol thanks for the broken down version. I was saying the exact same thing but too lazy to break it out into all the rates. Seems like the dumbed down version is "what is the rate vs other equivalent autopilots" and "what is the rate vs your average fleshbag"
Oh, you're a Tesla apologist then? For each of those groups, the AP case is much more forgiving, as both the self-driving case and the human-driving case need to handle all driving conditions, whereas AP only needs to handle the safest of conditions (especially the safest on a per-mile basis).
Oh, and bonus: regarding "other self-driving systems", Tesla's AP and FSD (despite its name literally being "full self-driving") are not self-driving systems.
Your comment ignores the parent and therefore misses their point. It is _irrelevant_ which conditions need to be handled by AP. If we don't have even basic data, we can't make any useful comparisons.
Literally just omit the word "other" and the point stands. Without data we don't know numbers for Tesla or other manufacturers. That's literally what data means - the quantitative and possibly qualitative information about the topic.
A "crash" in this case is any crash where the automated driving aids were in use less than 30 seconds prior to the crash. So for example, if the car realized it couldn't deal with something 29 seconds up the road, gives a 10 second warning to the human that the human needs to take over, and then the human drives into a crash, this would count as an Autopilot crash.
> A "crash" in this case is any crash where the automated driving aids were in use less than 30 seconds prior to the crash.
It's 5 seconds, not 30 seconds. From Tesla's safety report [1]:
> To ensure our statistics are conservative, we count any crash in which Autopilot was deactivated within 5 seconds before impact, and we count all crashes in which the incident alert indicated an airbag or other active restraint deployed.
[1] https://www.tesla.com/VehicleSafetyReport
The article states: "The NHTSA order required manufacturers to disclose crashes where the software was in use within 30 seconds of the crash, in part to mitigate the concern that manufacturers would hide crashes by claiming the software wasn’t in use at the time of the impact."
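For concreteness, here's how the two attribution windows in play differ. The 30-second window is the NHTSA order's, as quoted above; the 5-second window is from Tesla's safety report; the function and field names are mine:

    # Whether a crash is attributed to the driver-assistance system under
    # a given attribution window. (Tesla's rule additionally counts any
    # crash with an airbag/active-restraint deployment; ignored here.)
    def is_system_crash(seconds_since_deactivation: float, window_s: float) -> bool:
        return seconds_since_deactivation <= window_s

    for t in (0.5, 4.0, 12.0, 29.0, 45.0):
        print(f"deactivated {t:4.1f}s before impact -> "
              f"30s rule: {is_system_crash(t, 30.0)}, 5s rule: {is_system_crash(t, 5.0)}")

Everything from 5 s to 30 s before impact is counted by the NHTSA rule but excluded by the 5-second rule, which is one reason the two sets of numbers can't be compared directly.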
The whole point is "my new system reduces crashes"; if, in the bottom line, it doesn't, then it doesn't.
As has been stated, though, simply saying there have been 273 crashes by itself means nothing.
If I said to you "following the introduction of a new drug there have been 1,000 heart attacks in the USA this year so far", you may think: shit, 1,000 people have had heart attacks. But when you realise that there are 800,000 heart attacks per year in the USA, about 400,000 pro-rata by June, reducing that to 1,000 would be amazing.
So, 273 crashes. But what would that figure be without the system?
We need the data to compare this to groups of drivers who don't use these systems. Otherwise, it's meaningless. The numbers are meaningless. Like values without units.
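The arithmetic behind the analogy, for completeness (figures are the ones from the comment above):

    annual_baseline = 800_000                    # heart attacks per year in the USA
    expected_by_june = annual_baseline * 6 / 12  # pro-rata for half a year
    observed = 1_000
    print(f"expected ~{expected_by_june:,.0f} by June, observed {observed:,}")

That prints an expected ~400,000 against an observed 1,000, which is the whole argument in two lines.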
I think OP is saying something slightly different: the Tesla numbers could be substantially worse, because if Autopilot simply disengaged 1 millisecond before impact, the crash would not count as an Autopilot crash.
How does it not count? 10 seconds doesn't seem like much time for a human not really paying attention to the road, as would probably be the case, to assess the situation and respond. Although the cutoff is at 30 seconds, I wonder what the distribution is actually like.
10s is basically an eternity in car time. Pretty sure I could wake up from a nap and boot my brain enough to drive in 5s or less. A car going 80mph can stop in around 4s.
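That 4-second figure roughly checks out; a back-of-the-envelope assuming hard braking at about 0.9 g (roughly what good tires manage on dry pavement):

    g = 9.81                  # m/s^2
    decel = 0.9 * g
    v = 80 * 0.44704          # 80 mph in m/s
    print(f"time to stop: {v / decel:.1f} s, distance: {v ** 2 / (2 * decel):.0f} m")

That prints about 4.1 s and 72 m, before adding any reaction time.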
With level 2 autonomous driving systems the human is required to always be watching and ready to take over. 10 seconds is a reasonable amount of time for this transition according to Mercedes Drive Pilot system: https://www.roadandtrack.com/news/a39481699/what-happens-if-...
I'm just going to interpret "[autopilot] couldn't deal with something" as "autopilot would also have crashed the car in that situation".
To me, a situation in which Autopilot would have crashed, handed control to the driver, and the driver couldn't "save" the bad situation is just an Autopilot crash.
Autopilot is a SAE level 2 driver assist system; it should never be trusted to keep you safe. Whether or not the system bails out completely prior to a crash is immaterial.
1. While that's true, there's no reason to think that extreme case is the norm for these systems.
2. Even if it was, it’s sort of irrelevant. When considering the safety of the feature, the whole process, including allowing the driver to become distracted, must be considered. This isn’t a clean room where you can just focus on the performance of the models, this is a real life car interacting with other unpredictable cars, being driven by an imperfect human. If the nature of these systems is that it makes the drivers complacent, that’s absolutely something that needs to be taken into account when thinking about the overall experience of a self driving system.
What's your alternative? Tesla has also faced criticism for turning off autopilot less than a second before the crash, and then claiming that because the car was not on autopilot it was driver error.
"On Thursday, NHTSA said it had discovered in 16 separate instances when this occurred that Autopilot “aborted vehicle control less than one second prior to the first impact,” suggesting the driver was not prepared to assume full control over the vehicle.
CEO Elon Musk has often claimed that accidents cannot be the fault of the company, as data it extracted invariably showed Autopilot was not active in the moment of the collision."
* https://fortune.com/2022/06/10/elon-musk-tesla-nhtsa-investi...
My alternative is to present the data including how long before the crash the system deactivated, rather than lumping all of the incidents into one category. This requires nuance and judgement in reporting.
I'm not blaming Tesla or the NHTSA for this; I'm blaming the WP, as their reporting lacks any of that information and seems rather to paint the data in a way that makes Tesla look bad. There's no room for interpretation by the reader because no actually useful data is presented.
Edited to correct my naming of the article source.
It does seem like there should be some happy medium between this and Tesla's policy of not counting it as an Autopilot crash if it disengages one second before impact.
I guess at the end of the day I mostly care about the overall accident rate, AP or not. Since the premise is that this kind of car will be safer, you would expect that rate to be lower.
Yep, by itself the data in the article is kind of useless unless it's normalised to crashes per km driven and includes stats on other manufacturers' cars' use of autonomous systems at the time of crashes.
The author alluded to this by saying Tesla collects more data; however, the article doesn't establish that no other manufacturer collects what we want to know about (e.g. dynamic cruise control, active lane-keep-assist wheel nudges, AEB activation, etc.).
So, actual data please, and then let's compare apples to apples rather than this sensationalist clickbait. (I would really like to see crashes per km for AP on, AP off, and FSD beta enabled, in the case of Tesla.)
Preferably normalised by type of road too; i.e. a system that only drives on highways might not compare to one that drives on more dangerous roads.
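Something like this, if the per-road-type exposure data ever existed; every number below is invented:

    # Crashes per million km, split by road type and by whether the
    # system was engaged; hypothetical exposure and crash counts.
    data = {
        ("highway", "AP on"):  (40, 800),     # (crashes, million km driven)
        ("highway", "AP off"): (90, 1200),
        ("urban",   "AP on"):  (10, 50),
        ("urban",   "AP off"): (300, 2000),
    }

    for (road, mode), (crashes, mkm) in sorted(data.items()):
        print(f"{road:7s} {mode:6s}: {crashes / mkm:.3f} crashes per million km")

With inputs like these, AP could look safer than humans on highways and worse in cities at the same time; an aggregate count can't show that.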
I agree. I also want to note that Tesla has treated this subject like an opportunity to market and really done nothing to deserve the benefit of the doubt.
Was this headline written by an unpaid teenage intern? It should be obvious that '273 crashes' is meaningless without a comparison to how many are expected among other similar vehicles.
From the article...
Tesla vehicles made up nearly 70 percent of the 392 crashes involving advanced driver-assistance systems reported since last July
Do Teslas make up 70% of vehicles with advanced driver assistance?
Here's a list of cars with these systems.[1] That's a lot of models, so it certainly appears Tesla has an outsized proportion of crashes.
[1] https://www.consumerreports.org/car-safety/cars-with-advance...
A comparison of miles driven with the systems active, normalized for road conditions, is not currently possible, but it would be the most useful metric.
It could have been done in a couple of paragraphs and a few charts or tables with informative "rate of..." figures, but they had to mix it in with a ton of fluff. Modern journalists always have to interject their "take" rather than just make a statement and show the data that supports it.
No, this is just the new way of doing things. COVID case numbers were notoriously reported as absolutes throughout the pandemic (in some cases governments insisted the media and dashboards give only absolutes), which is so useless as a data point and so easy to make useful, but no...
This is the new journalism/government: we love absolutes.
Your first sentence is more of a commentary than anything, but other than that I have to agree with you: just giving the numbers like that is meaningless and almost deceptive, especially for infection data. It's a disgrace and shouldn't be tolerated; it makes people with poor deductive skills draw the wrong conclusions and operate on emotion instead of logic.
Well, considering these crashes happened when (I assume) the cars were using 'state of the art' driver-assistance software, any number >1 is something to discuss.
Tesla could have been a fantastic brand if they'd managed expectations better on AP/FSD and the douche-in-chief had kept his Twitter trigger hands in check a bit.
Their brand's jumped the shark now.
For me the interesting question is: does Autopilot crash more often than human drivers?
Short answer, yes. Based on the data so far released by the NHTSA, comparing similar cohorts (drivers of luxury cars with advanced driving systems), Teslas using Autopilot crash at least 3x more frequently than cars from other automakers using their respective advanced driving systems (3x compared to Honda, 50+x compared to Toyota, GM, or Ford L2 systems).
Or to put it more directly: Nissan has sold over 560k cars in the U.S. with an L2 system and has no reportable accidents. Tesla has sold approximately 1.3 million cars in the U.S. with Autopilot and has 273 accidents. GM has been selling SuperCruise since last year (approx. 100k vehicles) and has 0 L2-level accidents. (And testing by Consumer Reports et al. reports fewer issues with SuperCruise than with Autopilot in similar situations.) TL;DR: there is no way to massage the data to make Tesla look even remotely safe compared to its competitors.
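Using only the fleet sizes and crash counts quoted above, and deliberately ignoring the engagement-rate and reporting-method caveats raised elsewhere in the thread:

    fleets = {
        "Nissan (L2)":       (560_000, 0),      # (vehicles with the system, crashes)
        "Tesla (Autopilot)": (1_300_000, 273),
        "GM (SuperCruise)":  (100_000, 0),
    }
    for name, (vehicles, crashes) in fleets.items():
        print(f"{name:18s}: {1_000 * crashes / vehicles:.3f} crashes per 1,000 vehicles")

That yields 0.21 crashes per 1,000 vehicles for Tesla and 0 for the other two, which is the comparison being made here, caveats and all.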
Can't read the article because I'm poor, but... is that a lot? How high is this compared to non-Autopilot cars? How many of these are the fault of Autopilot? Where are the stats coming from? And what counts as a crash: is every one of those fatal, or does this include running into a pole while parking?
Tesla: in 3 of 550 accidents the car was stationary; 9 fatalities, 6 serious injuries.
And Tesla could probably tell you how many crashes per driving hour, among other things.