> The safety agency informed Tesla weeks ago that it was looking into the spate of crashes in which Tesla vehicles operating on Autopilot failed to detect stopped emergency vehicles with flashing lights. The regulator originally said it was looking into 11 such crashes. A 12th occurred on Saturday, when a Model 3 hit a police cruiser that had stopped behind a car that had broken down on an interstate in Orlando, Fla.
12 crashes into vehicles with flashing lights on them doesn't sound like great success from Autopilot, gonna be honest.
We just did a 1,500-mile round trip to visit family. Because of the Tesla stuff, I was more aware of roadside emergency vehicles than usual. We passed 28 police/fire/emergency vehicles that were next to our travel lane.
I saw a number of Tesla vehicles on the road as well, each passing these same emergency vehicles, but not always in the same lane.
By this point, there must be 1M+ incidents of Tesla vehicles passing roadside emergency vehicles in an adjacent lane.
12 crashes is 12 crashes too many, statistically speaking, but it's also a tiny tiny percentage of the total number of cases.
IMO, it is good that there is federal investigation into this. My main concern is that it seems our government agencies at large are not very good when it comes to properly understanding anything technology related, or making relevant decisions/rulings/etc. on these kinds of things.
>I saw a number of Tesla vehicles on the road as well, each passing these same emergency vehicles, but not always in the same lane.
How do you know all of them were on autopilot?
>12 crashes is 12 crashes too many, statistically speaking, but it's also a tiny tiny percentage of the total number of cases.
It's also 12 more than any normal attentive human has gotten into, and an average attentive human driver is the absolute floor for self-driving on public roads unless Tesla is going to go to the lengths GM has to ensure the person behind the wheel is remaining attentive.
>IMO, it is good that there is federal investigation into this. My main concern is that it seems our government agencies at large are not very good when it comes to properly understanding anything technology related, or making relevant decisions/rulings/etc. on these kinds of things.
I think they're good at understanding that if Tesla completely loses public confidence in self-driving, that is a net loss for humanity and technology, as well as a prime source of political shitstorm when people ask why something wasn't done sooner.
I'm sorry, but the general computer called your brain is not like the purpose-built computer of Autopilot. 12 is too many for a computer whose only job is to literally look at the road and drive.
> IMO, it is good that there is federal investigation into this
> seems our government agencies at large are not very good when it comes to properly understanding anything technology related, or making relevant decisions/rulings/etc. on these kinds of things.
> By this point, there must be 1M+ incidents of Tesla vehicles passing roadside emergency vehicles in an adjacent lane.
> 12 crashes is 12 crashes too many, statistically speaking, but it's also a tiny tiny percentage of the total number of cases.
That perspective is little more than spin. Most serious flaws are like that.
Any failure rate that isn't a "tiny tiny percentage of the total number of cases" is pretty serious. If it were even a small percentage (say 0.01%), Tesla Autopilot would be banned from the road, and maybe Elon Musk would go to jail (or at least Tesla would probably be bankrupted) for releasing such a hazardous product.
The fact that Tesla and Elon have been fighting these autopilot errors in completely the wrong way is the primary reason the feds should be involved.
I have been using the autopilot equivalent in a Nissan. If the car were about to crash into an ambulance with its lights going off, I would not be able to predict it and stop it from happening quickly enough. I just have a whole lot of trust in some basic things the car does.
And this is one of those cases where they should have been able to get this right after the first crash but they didn’t and now the feds have to get involved. A lesson on why we can’t have nice things.
>> The safety agency informed Tesla weeks ago that it was looking into the spate of crashes in which Tesla vehicles operating on Autopilot failed to detect stopped emergency vehicles with flashing lights. The regulator originally said it was looking into 11 such crashes. A 12th occurred on Saturday, when a Model 3 hit a police cruiser that had stopped behind a car that had broken down on an interstate in Orlando, Fla.
> 12 crashes into vehicles with flashing lights on them doesn't sound like great success from Autopilot, gonna be honest.
It's really surprising what people will put up with. IIRC, recently "autopilot" had some regressions in the situations it could handle, and Elon Musk tweeted something to the effect of "Yeah, this version sucks since we're retraining all our NNs."
Like how did something that half-baked ever make it to production in a safety-critical system?
> 12 crashes into vehicles with flashing lights on them doesn't sound like great success from Autopilot, gonna be honest.
I'm sorry, but how can you form an opinion so quickly? 12 crashes out of what? Any number? 12 crashes out of 10,000,000 scenarios sounds like a pretty good record, while 12 crashes out of 15 scenarios indeed doesn't sound like a great success.
But what information do you have that the rest of us don't?
"I'm sorry, but how can you form an opinion so quickly?"
It doesn't really matter. They're stationary objects; there's absolutely no reason, good or bad, for the algorithm to determine that the correct trajectory change is into the stationary object.
Humans crash into stationary objects because they're lacking attention almost 100% of the time, whether it's because they're half asleep, drunk, texting, or distracted by just about anything else. Autopilot shouldn't have this issue, so there's no reason for it to fail to recognize the obstacles unless it's been poorly trained, and there's absolutely no reason for it to recognize an obstacle as a valid path or destination.
I still remember the days when the SciFi of the day sold robotic drivers as "they don't get tired, they don't make mistakes, they are relentless in their pursuit".
Instead of that, we got robots that are apparently unable to avoid crashing into large vehicles with flashing lights. Sometimes.
As usual, the future never ceases to disappoint me.
These are situations where failure is not supposed to happen when the driver is paying attention. Given that the auto-pilot is always paying attention when operational, this becomes an unacceptable failure mode because it failed in the most rudimentary way - failure to perceive stationary obstacles and a complete lack of situational awareness.
For emergency responders on highways, there are means to reduce the risk of additional collisions by increasing visibility to oncoming drivers (markers, lights, etc.). If the auto-pilot is not able to perceive these even when paying attention, we do have a problem.
It’s not that simple. 12 out of ten million is extremely bad if the average driver takes part in that lottery 100,000 times a day. 12 out of 15 is nothing to worry about if the event is extremely rare.
I guess this lies somewhere between these extremes, but wouldn’t know.
We also don’t know how many of these accidents were prevented by attentive drivers.
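The two extremes above are easy to make concrete. A minimal sketch of the arithmetic, where all the exposure figures are illustrative guesses and not real fleet data:

```python
# Back-of-the-envelope: the same 12 crashes imply very different risk
# depending on exposure. Numbers are illustrative, not real fleet data.

def crashes_per_million_passes(crashes: int, passes: int) -> float:
    """Crash rate normalized to one million emergency-vehicle passes."""
    return crashes / passes * 1_000_000

# Scenario A: 12 crashes out of ten million passes -> rare per event,
# but the event is repeated constantly across a large fleet.
rate_a = crashes_per_million_passes(12, 10_000_000)   # 1.2 per million

# Scenario B: 12 crashes out of 15 passes -> obviously catastrophic.
rate_b = crashes_per_million_passes(12, 15)           # 800,000 per million

print(f"Scenario A: {rate_a:.1f} crashes per million passes")
print(f"Scenario B: {rate_b:,.0f} crashes per million passes")
```

The per-event rate only becomes meaningful once multiplied by how often the whole fleet encounters the event, which is exactly the number nobody outside Tesla has.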
When human drivers cause accidents, the causes are collected and analyzed, and laws are proposed to prevent future accidents from happening. It doesn't matter if it's 12 or 12 thousand, or even zero crashes... if there's fear, then society will demand something be done.
The fact that AI can give no easily understood explanations, and that improvements to the algorithms are a trade secret, means that any fear cannot be abated, aside from outright banning of AI driving.
Hopefully the NHTSA is also going to investigate the many other cars with traffic-aware cruise control (TACC), which have the same problem with stationary vehicles.
I was driving my Rav4 with TACC enabled, and it would have crashed into a column of stopped cars if I hadn't noticed in time and intervened.
Tesla has something like a 1000x greater rate of hitting stationary emergency vehicles. The fact that all the other cars have the same issue with TACC actually points even more strongly to something being wrong with Tesla.
> The driver is responsible for stopping the car. You literally agree to it when you enable AP. People are too comfortable and not paying attention.
Autopilot encourages people to not pay as much attention while driving, until a critical situation comes up at which point they're expected to be fully in control. That's a design flaw that isn't fixed with a liability shifting "Agree" button.
You have no telemetry and no idea what actually happened, car might have veered without enough warning. Which is the whole point of having an investigation, really.
The absolving argument that it's still the driver's responsibility to intervene has flown out of the window with this request.
The federal safety agency is not having any of that excuse anymore.
They might find that Tesla did no wrong here, further enabling Tesla to get away with this. But this looks a lot like they want to regulate the feature a bit more.
If self regulation via companies and humans does not work, the government will step in.
Curious to see the outcome; they haven't ordered the immediate disablement of the feature, so they might have a neutral view.
IMO this is a good thing, and should be applied to all manufacturers of driver-assist technology. Tesla has certainly been at the bleeding-edge of overly optimistic and confusing marketing for their product, but others should be on alert too.
It's the counter-thesis to the "I drive more alert when drunk" argument; I know what you mean. It might be that there is no in-between between full self-driving and fully human-operated.
Of course, liability under the current state of affairs will always be on the human, but I think it might be reflected in insurance premiums one day where it becomes economically difficult to pay the premium.
I agree, all the brands should be observed; Tesla simply has the largest attack surface currently, due to the most media exposure and, let's be honest, the loudest and most misleading marketing strategies.
The question is, how much of that is Tesla’s fault? They’re an EV company, not a psychology consultancy. We don’t hold gun manufacturers or alcohol companies liable when people do stupid things with their products, despite the best efforts of their marketing divisions. Granted “pull trigger get bullet” is much simpler than the Autopilot response.
Let's ignore the larger point (which is that this is just one of a million possible edge cases where autopilot could systematically fail): I could absolutely see Tesla just bodging in an "if you see emergency vehicle lights flashing, disengage" rule - it'd be perfectly reasonable to say that you shouldn't be using autopilot in that situation, because you should be paying attention and ready to react to the emergency vehicle.
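As a sketch of what such a bodge might look like - every name and threshold below is invented for illustration, and no real Autopilot interface is implied:

```python
# Hypothetical "disengage near emergency lights" heuristic: if the
# perception stack reports flashing emergency lights within range,
# hand control back to the driver. Detection and should_disengage
# are made-up names for illustration only.

from dataclasses import dataclass

@dataclass
class Detection:
    kind: str            # e.g. "car", "emergency_lights", "pedestrian"
    distance_m: float    # estimated distance ahead

def should_disengage(detections: list[Detection],
                     alert_range_m: float = 250.0) -> bool:
    """Return True if flashing emergency lights are detected in range."""
    return any(d.kind == "emergency_lights" and d.distance_m <= alert_range_m
               for d in detections)

scene = [Detection("car", 40.0), Detection("emergency_lights", 180.0)]
print(should_disengage(scene))  # True: warn the driver and disengage
```

Of course, this only works if the perception stack can reliably classify emergency lights in the first place, which is precisely the failure under investigation.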
I don't think you're wrong policy wise specifically.
I do think that this 'auto pilot ... but you have to manage it' system is not workable for most humans. Paying attention when you sorta don't have to ... is just not paying attention for most people.
It's a space I don't think we manage well outside of say a real pilot in a plane trained to do his thing (and there's actually a lot of cases with trained airline pilots losing situational awareness and auto pilot does the wrong thing and the pilots don't notice soon enough / auto pilot makes bad choices for a surprisingly long time before anyone notices).
I think this might be a space where it is all or nothing auto pilot wise.
If the driver isn’t paying attention already, this can’t work in the worst case. The roads and their speed limits are designed for a driver to safely come to a stop if there’s an obstacle in the road. If you add another 3-10 seconds for the system to disengage, warn the user, and have the user focus on the road and act appropriately, you’ve crashed into the emergency vehicle again.
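A rough back-of-the-envelope shows why those extra seconds matter so much at highway speed; the deceleration figure is an assumed typical value for dry pavement:

```python
# Why handover delay is fatal at highway speed: distance covered during
# the delay alone can exceed the sight line to a stopped vehicle.
# Standard kinematics; the 6 m/s^2 deceleration is an assumed value.

def stopping_distance_m(speed_mps: float, handover_s: float,
                        decel_mps2: float = 6.0) -> float:
    """Distance traveled during the handover plus full braking distance."""
    return speed_mps * handover_s + speed_mps ** 2 / (2 * decel_mps2)

speed = 31.0  # ~70 mph, in m/s
for delay in (1.5, 3.0, 10.0):  # attentive human vs. handover scenarios
    print(f"{delay:4.1f} s handover -> {stopping_distance_m(speed, delay):6.1f} m")
    # ~126.6 m, ~173.1 m, ~390.1 m respectively
```

Going from a typical 1.5 s human reaction to a 10 s handover roughly triples the total stopping distance.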
It should warn and turn off in a LOT of situations imo; there was another accident in a construction zone where the car hit a barrier, for example. And there's the shared videos of people literally sleeping or sitting in the back seat of a Tesla on autopilot; the car should detect that and disable autopilot for an X amount of time.
Of course, at the same time the car should not play police; if someone is caught doing that, their license should be taken away from them.
As much as it would be a solution to Tesla's problem, it would impact Tesla's overall plan and strategy. Automatically disengaging when emergency vehicles are detected is a huge barrier in their selling strategy ("It can drive itself, it's just not allowed yet to for legal reasons, but soon").
FTC should have obliterated Tesla from orbit the minute they called cruise control "Autopilot", and the Tesla beta opt-in fund "Full Self Driving", and any time Elon made any claim that the vehicle was going to be self-driving by X date.
This would have saved several lives, untold injuries, and millions in damages. It would have been disastrous for Elon's wealth and the market cap of Tesla, but then we could at least pretend there's still rule of law in the US.
Thankfully other jurisdictions have actual regulators and are telling Tesla that you cannot use fraudulent names, but who knows how much longer it will take for the US to join the rest of the world in this.
This was definitely a poor name in retrospect, but I can see why they chose the name. The current functionality of autopilot is in fact pretty similar to aviation autopilots. Unfortunately most of the public thinks that an autopilot can fly a plane unassisted, so they naturally expect "Autopilot" to drive their car unassisted.
Another point is, legal human-operator liability or not, the agency has been alerted of 12 such incidents, which could never, ever occur unless the human driver was asleep, drunk or under the influence of drugs, caught up in some sort of fight in the car, blinded by heavy sunlight, or forced to dodge a hazard or animal on the road.
Their way of thought is simply that if the software can't deal with what is trivial for humans, maybe a recall or better look is in order.
I know, this is not trivial for software; it just looks, to non-technical people, like something that should never happen, and if it does, they lose all trust in it.
Obviously, there are many bright engineers working at Tesla, not sure how this was deployed and how this slipped QA repeatedly.
Thus the authority asks Tesla, before the authority has to answer to voters, Congress, and media shows.
This is not a feature that Autopilot or any of the other lane-keeping systems have at the moment. If we are going to say that this is an issue, then pretty much all lane-keeping systems would have to be outlawed.
If it is really impossible for people to pay attention without their hands on the wheel at all times, then all lane keeping must be abolished, or a standard needs to be established for what 'paying attention' means.
Crashing into emergency vehicles happens often even without lane-keeping systems, so it's a universal problem whenever big stationary objects are standing in driving lanes.
What do people on HN actually want or expect in terms of regulation?
Here are some suggestions:
- Only allow any kind of lane keeping after comprehensive testing shows the system can detect obstacles of any kind and perform emergency braking. This of course would simply mean fewer such systems would be deployed, as most companies don't have a practical, scalable solution for this as of yet.
- Only allow any kind of lane keeping if a comprehensive driver monitoring system is implemented that makes sure the driver is alert. Additionally, you could require wheel touches every so often on top of that, if 'hands-free' driving should not be allowed.
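The second suggestion could be sketched roughly like this; all thresholds and names below are made up, and no real vehicle's monitoring logic is implied:

```python
# Hypothetical driver-monitoring escalation: if the driver has neither
# touched the wheel recently nor been judged attentive by the cabin
# camera, escalate from a warning to a forced disengage. Thresholds
# are invented for illustration.

def monitor_state(seconds_since_touch: float,
                  camera_says_attentive: bool,
                  warn_after_s: float = 10.0,
                  disengage_after_s: float = 30.0) -> str:
    """Return 'ok', 'warn', or 'disengage' for the current frame."""
    if camera_says_attentive or seconds_since_touch < warn_after_s:
        return "ok"
    if seconds_since_touch < disengage_after_s:
        return "warn"
    return "disengage"

print(monitor_state(5.0, False))    # ok: recent wheel touch
print(monitor_state(15.0, True))    # ok: camera confirms attention
print(monitor_state(15.0, False))   # warn
print(monitor_state(45.0, False))   # disengage
```

Combining both signals is the point: a camera-only or wheel-touch-only check is easy to defeat, as the weighted-wheel-gadget market demonstrates.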
Tesla is in a good spot here: they have the internal cameras and already have a driver monitoring system, yet despite that they have not allowed hands-free driving.
Tesla's new Vision Autopilot, or the more extensive FSD stack (which will do highway driving with v10), should solve the stationary object problem.
I'm sure that if there was any kind of regulation, Tesla would simply comply and move on. In the worst case if regulation would somehow insist on having things named in a specific way, Tesla would just rename their features.
However, the point here seems to be that none of this is a technological problem specific to Tesla; it's simply a situation with no regulation, so every manufacturer picks their own solution. Tesla is the most prominent and most talked about, so they are the whipping boy of anybody who wants more regulation.
>- Only allow any kind of lane keeping after comprehensive testing shows the system can detect obstacles of any kind and perform emergency braking. This of course would simply mean fewer such systems would be deployed, as most companies don't have a practical, scalable solution for this as of yet.
Probably this. It's a nice feature, but it lulls you into a false sense of security and just doesn't seem to mesh well with human nature. It works well enough most of the time that people start trusting it too much over time.
I'm just not sure this is objectively true. Many people on HN seem convinced that there is nothing between all-human and all-machine that is better, but I don't really think this is based on solid evidence.
I'm curious what the data requirement will be. Is it just the segmented map? Is it the actual model plus all inputs for n-seconds leading up to the crash? Is it all training data that meets some threshold of similarity to the incident?
It would be nice if the investigation of these incidents resulted in an open standard for analysis and quality control of self-driving systems.
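A purely hypothetical sketch of what one record in such a standardized format might contain; every field name below is invented, and no real NHTSA or Tesla schema is implied:

```python
# Hypothetical crash-report record for a driver-assist incident, per the
# open standard wished for above. All field names are invented.

from dataclasses import dataclass, field

@dataclass
class IncidentRecord:
    vehicle_id: str
    software_version: str
    seconds_of_sensor_log: float              # inputs leading up to the crash
    camera_frames: list[bytes] = field(default_factory=list)
    model_outputs: list[dict] = field(default_factory=list)  # per-frame detections
    driver_alerts_issued: int = 0
    autopilot_engaged: bool = True

# One record covering the 30 seconds before impact (illustrative values).
rec = IncidentRecord("VIN123", "2021.24.5", seconds_of_sensor_log=30.0)
print(rec.autopilot_engaged)  # True
```

Pairing the raw sensor inputs with the per-frame model outputs is what would let an auditor replay the incident and see what the system believed it was looking at.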
> seems our government agencies at large are not very good when it comes to properly understanding anything technology related, or making relevant decisions/rulings/etc. on these kinds of things.
Not sure how you think it's good then.
I think the word you're looking for is "dishonest."
I'm not sure that's fixable on the human end.
Exactly, there were some pretty good 99% Invisible podcasts on that topic:
https://99percentinvisible.org/episode/children-of-the-magen...
https://99percentinvisible.org/episode/johnnycab-automation-...
Ultimately a human problem.
Sometimes "coming soon" is a great answer; vagueness is a powerful tool for handling clients' time demands.
Check out how long it took for even simple safety features to make it into all cars.
> Then the vast majority of cars simply wouldn't come with lane assist. Deploying lidar at such a scale is simply not practical at the moment.
IMHO, lane assist isn't "self driving" any more than cruise control is.
I think you get into self driving territory when the car makes lane changes or turns at intersections.
I'm also not sure how to verify that.