The fact that Tesla doesn't have a process for making crash data available to investigators is pretty indefensible IMO, given they're retaining that data for their own analysis. Would be one thing if they didn't save the data for privacy reasons, but if they have it, and there's a valid subpoena, they obviously need to hand it over.
For context though, note that this crash occurred because the driver was speeding, using 2019 autopilot (not FSD) on a city street (where it wasn't designed to be used), bending down to pick up a phone he dropped on the floor, and had his foot on the gas overriding the automatic braking: https://electrek.co/2025/08/01/tesla-tsla-is-found-liable-in... The crash itself was certainly not Tesla's fault, so I'm not sure why they were stonewalling. I think there's a good chance this was just plain old incompetence, not malice.
The article explains that the crash snapshot shows:
- hands off wheel
- Autosteer retained control of the steering despite a geofence flag
- no take-over warnings, despite approaching a T intersection at speed
Letting people use autopilot in unsafe conditions is contributory negligence. Given their marketing, that's more than worth 33% of the fault.
That they hid this data tells me everything I need to know about their approach to safety. Although nothing really new considering how publicly deceitful Musk is about his fancy cruise-control.
I don't have an excuse for why their lawyer did the lawyerly stuff, but as far as the unlinking of the file being done in software — "destroying evidence" — I think the explanation is far more benign.
If you're big on privacy, things like logging incorrect password attempts is a big no-no. We have to "thank" the privacy advocates for that.
How do you think the owner of the car would feel if the file was visible in plain sight to the next owner of the vehicle?
I own a Tesla, and here's my take on the biggest software issue:
Normal consumers don't understand the difference between "Autopilot" and "FSD".
FSD will stop at intersections/lights etc - Autopilot is basically just cruise control and should generally only be used on highways.
They're activated in the same manner (FSD replaces Autopilot if you pay for the upgrade or $99/month subscription), and again for "normal" consumers it's not always entirely clear.
A friend of mine rented a Tesla recently and was in for a surprise when the vehicle did not automatically stop at intersections on Autopilot. He said the previous one he rented had FSD enabled, and he didn't understand the difference.
IMO Tesla just needs to phase out 2019 AP entirely and just give everyone some version of FSD (even if it's limited), or geofence AP to highways only.
Why is that so, though? Because of false marketing to a degree that is criminal. Elon does have one excuse: Tesla would have gone bankrupt several times over if not for his purposeful criminal lies. Does he actually care about the company? He has pumped out orders of magnitude more value from Tesla than anyone in the history of any company. That is how much he cares about the company surviving.
Criminal. And too dum* to think of anything innovative to save the company.
Withholding safety-relevant features unless you pay a subscription sounds like something from dystopian fiction, not something that should be allowed in the real world.
In my experience, even most Tesla owners don't really seem to understand the difference between autopilot or FSD.
However, even though Autopilot doesn't obey traffic control devices, it still DOES issue warnings if taking over may be required.
Most Tesla owners I've talked with, are actually completely unaware of the v12 and v13 improvements to FSD, and generally have the car for other reasons than FSD. So, if anything, Tesla is actually quite behind on marketing FSD to the regular folk, even those who are already Tesla owners.
That’s not the point. The point is whether Tesla’s marketing led the driver to overestimate the car’s capabilities, leading him to engage in reckless behavior. He admitted on the stand that he was acting carelessly and that autonomous mode required supervision; however, he also admitted that, _based_ on Tesla’s marketing, he thought the car would drive better than a human and intervene when required. Looking at Musk’s tweets and the Paint It Black video, the jury agreed that this was not an unreasonable belief, even though it was factually _not_ true, and found Tesla 33% liable for the accident.
The fact that Tesla purposely misled the investigators and hid evidence was why the jury awarded such a large sum.
> Update: Tesla’s lawyers sent us the following comment about the verdict:
> Today’s verdict is wrong and only works to set back automotive safety and jeopardize Tesla’s and the entire industry’s efforts to develop and implement life-saving technology. We plan to appeal given the substantial errors of law and irregularities at trial. Even though this jury found that the driver was overwhelmingly responsible for this tragic accident in 2019, the evidence has always shown that this driver was solely at fault because he was speeding, with his foot on the accelerator – which overrode Autopilot – as he rummaged for his dropped phone without his eyes on the road. To be clear, no car in 2019, and none today, would have prevented this crash. This was never about Autopilot; it was a fiction concocted by plaintiffs’ lawyers blaming the car when the driver – from day one – admitted and accepted responsibility.
---
Personally, I don't understand how people can possibly be happy with such verdicts.
Recently, in 2025, DJI got rid of its geofences as well, because it's the operator's responsibility to control their equipment. IIRC, DJI had the support of the FAA in removing the geofencing limitations, with the FAA expressly confirming that geofencing is not mandated.
These sorts of verdicts that blame the manufacturer for operator errors, are exactly why we can't have nice things.
It's why we get WiFi and 5G radios, and boot loaders, that are binary-locked, with no source code availability, and which cannot be used with BSD or Linux easily, and why it's not possible to override anything anywhere anymore.
Even as a pedestrian, I'm glad that Tesla is fighting the good fight here. Because next thing I know, these courts will cause the phone manufacturers to disable your phone if you're walking next to a highway.
There's an interesting philosophical debate about the nature of product liability laws. Suppose I'm selling some gadget which when used correctly is safe, and I've taken all reasonable steps possible to try to make it hard for people to use it incorrectly.
Nevertheless people sometimes still manage to use it in an incorrect way that injures their hands such that it will take a year of treatment and physical therapy before they can use their hands again.
Some people view the point of product liability laws as being to make those who are to blame for the injury pay. Under that view, I'd not be on the hook.
Another point of view is that it should be about who can most efficiently handle dealing with these injuries. Either someone is going to have to pay for the treatment and therapy to enable these people to use their hands again or they are going to probably end up on disability for the rest of their lives which will be paid for by government (and so indirectly by most of the rest of us).
Who should that someone be?
One candidate is the user's health insurance company.
One problem with that is that in the US there are plenty of people without health insurance. They would be able to get free treatment right after the injury at any emergency room if they can't afford to pay for that treatment, but that would only get them to the point they aren't in any more danger. It would not include the follow ups needed to actually restore function, so there is still a good chance they will end up on disability. Also that "free" emergency room treatment will actually be paid for by higher costs for the rest of us.
Even if the user does have insurance that pays for their treatment and therapy, ultimately that money is coming from premiums of the people that use that insurance company.
This health insurance approach then ultimately comes down to socializing the cost among some combination of two broad groups: (1) those who have health insurance, and (2) taxpayers in general.
Another candidate is me, the gadget maker. Make it so I am liable for these injuries regardless of who was at fault. I know exactly how many of these gadgets are out there. If all injury claims from people using them go through me I'll have full data on injury rates and severity.
That puts me in a good position to figure out how much to raise the price of my gadgets to establish and maintain a fund to pay out for the injuries.
This still socializes the costs, but now instead of socializing it across those two broad groups (everyone with health insurance and taxpayers) it is getting socialized across all purchasers of my gadgets.
The people who favor this strict liability approach also argue that I'm the best candidate for this because I'm in the best position to try to reduce injuries. If health insurance companies were the ones dealing with it and they noticed injury rates are going up there isn't really anything they can do about that other than raise premiums to cover it.
If I'm the one dealing with it and notice injury rates are going up I can deal with it the same way--raise prices so my injury fund can cope with the rising injuries. But I also have the option to make changes to the gadgets such as adding extra safety features to reduce injuries. I might come out ahead going that route and then everybody wins.
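The pricing mechanism described above is simple expected-value arithmetic; here's a minimal sketch with made-up numbers (units sold, injury rate, and average payout are all hypothetical):

```python
# Hypothetical strict-liability surcharge: spread expected injury
# payouts across all units sold. All numbers are illustrative.

def per_unit_surcharge(units_sold, injury_rate, avg_payout):
    """Expected injury cost per gadget sold."""
    expected_injuries = units_sold * injury_rate
    total_payouts = expected_injuries * avg_payout
    return total_payouts / units_sold

# 100,000 gadgets, 1 injury per 10,000 units, $50,000 average claim
surcharge = per_unit_surcharge(100_000, 1 / 10_000, 50_000)
print(surcharge)  # 5.0 -> raise the price by $5 to fund the claims
```

Note the surcharge depends only on the injury rate and payout size, not on volume — which is exactly why the maker has an incentive to spend on safety features whenever they cost less per unit than the surcharge they eliminate.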
The fundamental problem here is that the way it's presented caused the driver to trust it in a fashion he should not have. The jury slammed Tesla for overpromising, and for trying to hide the evidence.
The article claims that the software should have been geofenced in that area but Tesla failed to do that, and that the software should have triggered collision warnings but did not. So there were things Tesla wanted to hide.
I don't necessarily disagree, but I personally find these "but you theoretically could have done even more to prevent this"-type arguments to be a little dubious in cases where the harm was caused primarily by operator negligence.
I do like the idea of incentivizing companies to take all reasonable steps to protect people from shooting themselves in the foot, but what counts as "reasonable" is also pretty subjective, and liability for having a different opinion about what's "reasonable" seems to me to be a little capricious.
For example, the system did have a mechanism for reacting to potential collisions. The vehicle operator overrode it by pushing the gas pedal. But the jury still thinks Tesla is still to blame because they didn't also program an obnoxious alarm to go off in that situation? I suppose that might have been helpful in this particular situation. But exactly how far should they legally have to go in order to not be liable for someone else's stupidity?
No. If they were allowed to just say "We did nothing wrong" to avoid cooperating with an investigation, it would open the door to trivial abuse. It is reasonable to assume they are at least partially to blame. Just like a regular citizen can be arrested when they have a backpack full of money an hour after a bank robbery, because it is a reasonable assumption, Tesla should be subject to an investigation until it becomes clear that the backpack's contents are legit.
How are investigators supposed to determine the "100% liable" without access to all available data? In a typical RTC, police will seek to obtain dashcam footage from other vehicles to determine what happened and then determine liability (more likely the insurance companies or courts).
As long as there is no criminal liability for people doing this, nothing will change. This is pocket change for a company, a rounding error, as Tesla's valuation has grown significantly since this happened in 2019, six years ago.
The question is not how much does this cost vs the value of the company; it's how much does this cost vs the cost to just do things the right way. $329 Million is orders of magnitude more than it would cost to implement the additional warnings and geofencing. It's hard to imagine any possible corner that could be cut that would save enough money to justify that sort of risk.
Further, it's one payout for a single accident. Since then there have been 756 more Tesla Autopilot crashes. If each of those got a similar payout, that would be 80% of Tesla's current market cap. Obviously they won't all pay out that much, but if on average an Autopilot crash cost Tesla $56 million to handle (settlement + legal expenses + lost sales), that would wipe out the company's profit entirely. There's no possible justification for leaving such a massive liability in place.
I think OP's point is - the fine is not large enough to impact Tesla's stock price - which is all Tesla cares about.
It also didn't really seem to impact Tesla's decision to keep pushing Full Self Driving and Robotaxi despite it having obvious severe flaws (because Tesla sees this rollout as something holding up its stock price).
This seems pretty dumb of Tesla, as I find it rather moot to the conclusion of fault in the accident. The obstruction of justice is damning.
Autopilot is cruise control. When you understand this, claiming that Tesla is partially at fault here does not match the existing expectations of other driver assistance tech. Just because Tesla has the capability of disabling it doesn't mean they have to.
This all comes down to an interpretation of marketing speak. If you believe "autopilot" is misleading, you'd agree with the jury here; if you don't, you wouldn't. I'm no lawyer and don't know the full scope of requirements for Autopilot-like features, but it seems that Tesla is subject to unfair treatment here given the amount of warnings you have to completely ignore and take no responsibility for. I've never seen such clear warnings on any other car with similar capabilities. I can't help but think there's maybe some politically driven bias here, and I say that as a liberal.
Happy to be convinced otherwise. I do drive a Tesla, so there's that.
Do you think Tesla spends more time and money on making their warnings convincing, or making their marketing convincing? If a person is hearing two conflicting messages from the same group of people, they'll have to pick one, and it shouldn't be surprising if they choose to believe the one that they heard first and that was designed by professionals to be persuasive.
In other words, if you bought the car because you kept hearing the company say "this thing drives itself", you're probably going to believe that over the same company putting a "keep your eyes on the road" popup on the screen.
Of course other companies have warnings that people ignore, but they don't have extremely successful marketing campaigns that encourage people to ignore those warnings. That's the difference here.
Tesla infamously doesn't have a marketing team, so that one should be easy to answer unless they rehired for that. Not sure on the latest there.
When you get your Tesla and attempt to turn on the features described, it has a dialog with a warning and you have to agree to understanding before proceeding. If you choose to ignore that, and all the other clearly marked and accessible warnings in the manual and not learn how to operate it, is that not on you the licensed driver? Isn't that what regulations and licensure are for?
I'm very much in favor of consumer protections around marketing, but in this case there were none that were clearly defined to my knowledge.
the Center for Science in the Public Interest filed a class-action lawsuit
The suit alleges that the marketing of the drink as a "healthful alternative" to soda is deceptive and in violation of Food and Drug Administration guidelines.
Coca-Cola dismissed the allegations as "ridiculous," on the grounds that "no consumer could reasonably be misled into thinking Vitaminwater was a healthy beverage"
Interesting case but I'm not sure it's apples to apples.
One, you don't need a license to buy a non alcoholic beverage. Two, while the FDA has clear guidelines around marketing and labeling, I'm not aware of any regulatory body having clear guidelines around driver assistance marketing. If they did it wouldn't be controversial.
I might challenge with "autopilot is cruise control." To me, Tesla is marketing the feature much differently. Either way, looking up the definitions of each:
"Auto Pilot: a device for keeping an aircraft or other vehicle on a set course without the intervention of the pilot."
"Cruise Control: an electronic device in a motor vehicle that can be switched on to maintain a selected constant speed without the use of the accelerator."
> "Auto Pilot: a device for keeping an aircraft or other vehicle on a set course without the intervention of the pilot."
All an auto pilot on an aircraft does is keep the plane flying in a straight line at a constant speed. It mostly doesn't do obstacle avoidance, or really anything else. Yes, you don't need intervention of the pilot, because it turns out going in a straight line in an airplane is pretty hard to screw up.
From that standard at least, modern cruise controls are more capable than airplane auto pilots. There is a widespread belief on HN, however, that people are generally very dumb and will mistake autopilot for something more like FSD.
You are right, but unfortunately yours is the least useful kind of right: technically right.
That is definitely what auto pilot means in the aeronautical and maritime sphere.
But a lot of the general public has a murky understanding of how an autopilot on a ship or a plane works. So many of them, probably the majority, will look at the meaning of those two words and land on "autopilot means automatic pilot," which basically ends up being self-driving.
Sure in a perfect world, they would look up what the term means in the sphere they do not know, and use it correctly, but that is not the world we live in.
We do not get the general public we want; we have to live with the one we got.
In both cases, they are driver assistance. A pilot is responsible and must monitor an autopilot system in a plane. We license drivers and pilots and the responsibility is placed on them to understand the technology before using it and putting themselves and others at risk.
Would Boeing or John Deere be responsible for marketing language, or just the instruction manual? We know the latter is true. Is there any evidence of the former? Intuitively, I would say it's unlikely we'd blame Boeing if a pilot was misled by marketing materials. Maybe that has happened, but I haven't found anything of that sort (please share if you're aware of any).
The one you accept when you first turn it on. And the numerous ones you ignored/neglected to read when using features without understanding them.
This is the responsibility of a licensed driver. I don't know how a Mercedes works, but if I crash one because I misused a feature clearly outlined in their user manual, Mercedes is not at fault for my negligence.
> but it seems that Tesla is subject to unfair treatment here given the amount of warnings you have to completely ignore and take no responsibility for.
Lol is this for real? No amount of warnings can waive away their gross negligence. Also, the warnings are clearly completely meaningless because they result in nothing changing if they are ignored.
> Autopilot is cruise control
You're pointing to "warnings" while simultaneously saying this? Seems a bit lacking in self-awareness to think that a warning should carry the day, but calling cruise control "autopilot" is somehow irrelevant?
> I can't help but think there's maybe some politically driven bias here
> they result in nothing changing if they are ignored.
That’s not true
> Do I still need to pay attention while using Autopilot?
> … Before enabling Autopilot, you must agree to “keep your hands on the steering wheel at all times” and to always “maintain control and responsibility for your vehicle.” Once engaged, Autopilot will also deliver an escalating series of visual and audio warnings, reminding you to place your hands on the wheel if insufficient torque is applied. If you repeatedly ignore these warnings, you will be locked out from using Autopilot during that trip.
> If you repeatedly ignore the inattentive driver warnings, Autosteer will be disengaged for that trip. If you receive several ‘Forced Autopilot Disengagements’ (three times for vehicles without a cabin camera and five times for vehicles with a cabin camera), Autosteer and all features that use Autosteer will be temporarily removed for approximately one week.
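The escalation policy quoted above can be sketched as a small state machine. This is a hedged illustration of the described behavior, not Tesla's actual implementation; the per-trip limit of three ignored warnings before a forced disengagement is an assumption for illustration:

```python
# Sketch of the quoted escalation policy (illustrative, not Tesla's
# code): repeated ignored warnings disengage Autosteer for the trip,
# and repeated forced disengagements suspend the feature for ~a week.

class AutosteerPolicy:
    def __init__(self, has_cabin_camera):
        # Thresholds as described in the quoted support text
        self.max_disengagements = 5 if has_cabin_camera else 3
        self.forced_disengagements = 0
        self.trip_warnings = 0
        self.suspended = False

    def ignored_warning(self):
        """Driver ignored a hands-on-wheel warning."""
        self.trip_warnings += 1
        if self.trip_warnings >= 3:        # assumed per-trip limit
            self.forced_disengagements += 1
            self.trip_warnings = 0
            if self.forced_disengagements >= self.max_disengagements:
                self.suspended = True      # ~1 week lockout
            return "disengaged for this trip"
        return "escalating warning"

policy = AutosteerPolicy(has_cabin_camera=False)
for _ in range(9):
    policy.ignored_warning()
print(policy.suspended)  # True: three forced disengagements reached
```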
What part of how autopilot is marketed do you find to be gross negligence?
I would ask, what is the existing definition of autopilot as defined by the FAA? Who is responsible when autopilot fails? That's the prior art here.
Additionally, if the NTSB failed to clearly define such definitions and allowances for marketing, is that the fault of Tesla or the governing body?
I'm pretty neurotic about vehicle safety and I still don't think this clearly points to Tesla as being in the wrong with how they market these features. At best it's subjective.
Saying warnings are meaningless because they can be ignored would literally flip the entire legal system on its head. That is an insane way to think about things.
"It's never the crime... it's the cover-up." So in this case, they are kinda screwed.
I've owned two Teslas (now a Rivian/Porsche EV owner). Hands down, Tesla has the best cruise control technology on the market. Therein lies the problem. Musk constantly markets this as self-driving. It is NOT. Not yet, at least. His mouth is way, way, way ahead of his tech.
Heck, stopping for a red light is a "feature," when the car is perfectly capable of recognizing one and doing so. This alone should warrant an investigation, and it's something that I, as a highly technical user, completely fell for when I first got my model 7 delivered... I ran through a red light trying out Autopilot for the first time.
I'm honestly surprised there are not more of these lawsuits. I think there's a misinterpretation of the law by those defending Tesla. The system has a lot of legalese safe-guards and warnings. But the MARKETING is off. WAY OFF. and yes, users listen to marketing first.
And that ABSOLUTELY counts in a court of law. You folks would also complain about an obtuse EULA, and while this isn't completely apples to apples, Tesla absolutely engages in dangerous marketing speak around "autopilot," eliciting a level of trust from drivers that isn't there and that they should not be encouraging.
So sorry, this isn't a political thing (and yes, disclaimer, I'm also a liberal).
Signed... former Tesla owner waiting for "right around the corner" self driving since 2019...
Are there clear guidelines set for labeling and marketing of these features? If not, I'm not sure how you can argue such. If it was so clearly wrong it should have been outlined by regulation, no?
Tesla's not being treated unfairly. It advertised Autopilot as having more capabilities than it actually did. Tesla used to sell Autopilot as fully autonomous. ("The driver is only there for legal reasons.")
And it didn't warn users about this lack of capabilities until it was forced to do so.
Those warnings you're talking about were added after this accident occurred as part of a mandated recall during the Biden administration.
> Those warnings you're talking about were added after this accident occurred as part of a mandated recall during the Biden administration.
If that's the case, this is certainly a stronger argument. I thought autosteer and FSD always had this dialog. As far as I know these dialogs go back 10 years and this was April 2019.
Even still, I find retroactive punishment of this to be dubious. If Tesla is liable to some degree, then so should the NHTSA be, to the extent that any rule-making body can be, for not defining this well enough to protect drivers.
> Autopilot is cruise control. When you understand this, claiming that Tesla is partially at fault here does not match the existing expectations of other driver assistance tech.
The problem is for several years they actively targeted a customer base incapable of understanding the limitations of the mis-named system they advertised. (Those customers capable of understanding it were more likely to buy vehicles from brands who advertised more honestly.) While the current approach of targeting Nazi and friend-of-Nazi customers might eventually change the story (with its own risks and downsides, one imagines), for the time being it seems reasonable that Tesla bear some responsibility for the unsafe customer confusion they actively courted.
> This all comes down to an interpretation of marketing speak. If you believe "autopilot" is misleading you'd agree with the jury here, if you don't you wouldn't. I'm no lawyer, and don't know the full scope of requirements for autopilot like features, but it seems that Tesla is subject to unfair treatment here given the amount of warnings you have to completely ignore and take no responsibility for. I've never seen such clear warnings on any other car with similar capabilities. I can't help but think there's maybe some politically driven bias here and I say that as a liberal.
And that's exactly why the law is supposed to have a Reasonable Person Standard.
When the majority of Tesla owners are completely unaware of the capabilities of Autopilot even in 2025, how exactly does it make any sense to blame the marketing when someone was so trusting of the unproven technology back in 2019? Especially given so many reports of so many people being saved by said technology in other circumstances?
I imagine these things will get better when courts are no longer able to find jurors who are unfamiliar with the attention-monitoring nags that Teslas are famous for.
If this is the $329M jury case, they will 100% win on appeal. The driver is clearly responsible for driving, and there's never a moment of doubt about that with Autopilot.
Apologies if this has been discussed in one of the many subthreads, but is there a reasonable explanation for the finding that the crashed Tesla "deleted its local copy" of the crash data?
> Within ~3 minutes of the crash, the Model S packaged sensor video, CAN‑bus, EDR, and other streams into a single “snapshot_collision_airbag-deployment.tar” file and pushed it to Tesla’s server, then deleted its local copy.
Putting aside the legal implications wrt evidence, etc — what is the ostensible justification for this functionality? To a layperson, it's as bizarre as designing a plane's black box to go ahead and delete data if it somehow makes a successful upload to the FAA cloud server. Why add complexity that reduces redundancy in this case?
1) Embedded systems typically do not allow data to grow without bound. If they were going to keep debugging data, they'd have to limit it to the last N instances or so. In this case N=0. It seems like the goal here was to send troubleshooting data, not keep it around.
2) Persisting the data may expose the driver to additional risks. Beyond the immediate risks, someone could grab the module from the junkyard and extract the data. I can appreciate devices that take steps to prevent sensitive data from falling into the hand of third parties.
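The bounded-retention idea in point 1 could be sketched like this (a hypothetical `prune_snapshots` helper; per the article, Tesla effectively chose N=0):

```python
# Sketch of bounded snapshot retention: keep at most N snapshot files
# locally, deleting the oldest first. keep_last_n=0 reproduces the
# "upload then delete" behavior described in the article.
import os

def prune_snapshots(directory, keep_last_n):
    snaps = sorted(
        f for f in os.listdir(directory) if f.endswith(".tar")
    )  # names sort chronologically if they embed a timestamp
    for old in snaps[: max(0, len(snaps) - keep_last_n)]:
        os.remove(os.path.join(directory, old))
```

Under this scheme the storage budget is fixed up front, which matches how embedded firmware is usually written: the flash partition is sized at design time, so data growth has to be bounded somewhere.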
It would be trivial to set it up to only delete old instances when free space goes below a threshold.
If the data can expose the driver to additional risks, then the driver can be exposed by someone stealing the vehicle and harvesting that data. Again, that can be trivially protected against using encryption which would also protect in the instance that communication was disrupted so that the tar isn't uploaded/deleted.
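The free-space-threshold approach suggested above might look like this minimal sketch (the function name and `.tar` filter are assumptions for illustration):

```python
# Sketch of "only delete when space is tight": old snapshots are
# pruned only when free space on the partition falls below a
# threshold, oldest first, so crash data survives by default.
import os
import shutil

def prune_if_low_space(directory, min_free_bytes):
    snaps = sorted(
        os.path.join(directory, f)
        for f in os.listdir(directory) if f.endswith(".tar")
    )
    while snaps and shutil.disk_usage(directory).free < min_free_bytes:
        os.remove(snaps.pop(0))  # drop the oldest snapshot first
```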
One of the few reasons I have _never_ entertained the possibility of buying a Tesla: despite collecting enough data to know what happened in almost every crash, the company will withhold the data if Tesla is at fault, but is more than happy to quickly publish selective data to support a PR assassination of a driver who is at fault.
There are obviously other reasons, but I would much rather buy a car from a company that either doesn’t collect my data or doesn’t feel the need to bury me in the media if I screw up.
There aren't enough details in the somewhat hyperbolic narrative format to really say, but if I were going to create a temporary archive of files on an embedded system for diagnostic upload, I would also delete it, because that's the nature of temporary files and nobody likes ENOSPC. If their system had deleted the inputs of the archive, that would seem nefarious, but this doesn't, at first scan.
The main reasons to store data are for safety and legal purposes first, diagnostics second. Collision data are all three. They need to be prioritized above virtually everything else on the system and if your vehicle has had so many collisions that the filesystem is filled up, that's a justifiable reason to have a service visit to delete the old ones.
If I were implementing such a system (and I have), I could see myself deleting the temporary without much thought. I would still have built a way to recreate the contents of the tarball after the fact (it's been a requirement from legal every time I've scoped such a system). Tesla not only failed to do that, but avoided disclosing that any such file was transferred in the first place so that the plaintiffs wouldn't know to request it.
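The pattern described — delete only the temporary tarball, keep the inputs so the archive can be recreated on demand — could be sketched like this (`upload` is a stand-in for whatever transport the system uses):

```python
# Sketch: build the diagnostic archive from source files, upload it,
# then delete only the temporary tarball. The inputs stay on disk,
# so the exact archive can be rebuilt after the fact if legal or
# investigators need it.
import os
import tarfile

def snapshot_and_upload(source_files, tar_path, upload):
    with tarfile.open(tar_path, "w") as tar:
        for path in source_files:
            tar.add(path, arcname=os.path.basename(path))
    upload(tar_path)
    os.remove(tar_path)  # temp file gone, but source_files remain
```

The benign and the nefarious designs differ in exactly one respect: whether the inputs to the tarball survive the cleanup.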
When a vehicle crash occurs, that embedded system should no longer be treating data as "temporary", but as what it now is, civil and potentially criminal evidence, and it should be preserved. To go to the effort of creating that data, uploading it to a corporate server, and then having programming that explicitly deletes that data from the source (the embedded system), certainly reads as nefarious without easily verifiable evidence to the contrary. The actions of a company that has acted this way in no fashion lends any credibility to being treated as anything other than a hostile party in court. Any investigators in the future involving a company with such a history need to act swiftly and with the immediate and heavy hand of the court behind them if they expect any degree of success.
"Their system" is a car, sold as a consumer product, which has just experienced a collision removing it indefinitely from normal operation. Reconsider your analysis.
I would love to see what you need so much disk space for after the car is crashed and the airbags are deployed. If that event fires, the car is going into the shop to have its airbags replaced at a minimum. Adding a service step to clear up /tmp after a crash is fairly straightforward.
What was deleted was the compiled file sent to Tesla, not all the bits of data that it came from. Nothing malicious in that, just code not leaving trash around.
The file was reportedly named "snapshot_collision_airbag-deployment.tar", which indicates that the developers are able to know that this particular event is a terminal one. Is it really necessary to be mindful of a few GB of storage space on a device that will likely never again record data for its host vehicle?
Tesla must pay portion of $329M damages after fatal Autopilot crash, jury says
https://news.ycombinator.com/item?id=44760573
The meme of Hanlon's Razor needs to die. Incompetence from a position of power is malice, period.
A bit more nuanced version is that incompetence from a position of power is a choice.
They're activated in the same manner (FSD replaces Autopilot if you pay for the upgrade or $99/month subscription), and again for "normal" consumers it's not always entirely clear.
A friend of mine rented a Tesla recently and was in for a surprise when the vehicle did not automatically stop at intersections on Autopilot. He said the previous one he rented had FSD enabled, and he didn't understand the difference.
IMO Tesla needs to phase out 2019 AP entirely and give everyone some version of FSD (even if it's limited), or geofence AP to highways only.
However, even though Autopilot doesn't obey traffic control devices, it still DOES issue warnings if taking over may be required.
Most Tesla owners I've talked with are actually completely unaware of the v12 and v13 improvements to FSD, and generally have the car for reasons other than FSD. So, if anything, Tesla is actually quite behind on marketing FSD to regular folks, even those who already own Teslas.
The fact that Tesla purposely misled the investigators and hid evidence was why the jury awarded such a large sum.
> Update: Tesla’s lawyers sent us the following comment about the verdict:
> Today’s verdict is wrong and only works to set back automotive safety and jeopardize Tesla’s and the entire industry’s efforts to develop and implement life-saving technology. We plan to appeal given the substantial errors of law and irregularities at trial. Even though this jury found that the driver was overwhelmingly responsible for this tragic accident in 2019, the evidence has always shown that this driver was solely at fault because he was speeding, with his foot on the accelerator – which overrode Autopilot – as he rummaged for his dropped phone without his eyes on the road. To be clear, no car in 2019, and none today, would have prevented this crash. This was never about Autopilot; it was a fiction concocted by plaintiffs’ lawyers blaming the car when the driver – from day one – admitted and accepted responsibility.
---
Personally, I don't understand how people can possibly be happy with such verdicts.
In 2025, DJI got rid of its geofences as well, on the grounds that it's the operator's responsibility to control their equipment. IIRC, DJI had the FAA's support in removing the geofencing limitations, with the FAA expressly confirming that geofencing is not mandated.
These sorts of verdicts that blame the manufacturer for operator errors, are exactly why we can't have nice things.
It's why we get WiFi and 5G radios, and bootloaders, that are binary-locked with no source code available, that cannot easily be used with BSD or Linux, and why it's no longer possible to override anything anywhere.
Even as a pedestrian, I'm glad that Tesla is fighting the good fight here. Because next thing I know, these courts will cause the phone manufacturers to disable your phone if you're walking next to a highway.
Nevertheless, people sometimes still manage to use it incorrectly and injure their hands badly enough that it takes a year of treatment and physical therapy before they can use their hands again.
Some people's view is that the point of product liability laws is to make those who are to blame for the injury pay. Under that view I'd not be on the hook.
Another point of view is that it should be about who can most efficiently handle dealing with these injuries. Either someone is going to have to pay for the treatment and therapy to enable these people to use their hands again or they are going to probably end up on disability for the rest of their lives which will be paid for by government (and so indirectly by most of the rest of us).
Who should that someone be?
One candidate is the user's health insurance company.
One problem with that is that in the US there are plenty of people without health insurance. They would be able to get free treatment right after the injury at any emergency room if they can't afford to pay, but that would only get them to the point where they aren't in any more danger. It would not include the follow-ups needed to actually restore function, so there is still a good chance they will end up on disability. Also, that "free" emergency-room treatment will actually be paid for through higher costs for the rest of us.
Even if the user does have insurance that pays for their treatment and therapy, ultimately that money is coming from premiums of the people that use that insurance company.
This health insurance approach then ultimately comes down to socializing the cost among some combination of two broad groups: (1) those who have health insurance, and (2) taxpayers in general.
Another candidate is me, the gadget maker. Make it so I am liable for these injuries regardless of who was at fault. I know exactly how many of these gadgets are out there. If all injury claims from people using them go through me I'll have full data on injury rates and severity.
That puts me in a good position to figure out how much to raise the price of my gadgets to establish and maintain a fund to pay out for the injuries.
This still socializes the costs, but now instead of socializing it across those two broad groups (everyone with health insurance and taxpayers) it is getting socialized across all purchasers of my gadgets.
The people who favor this strict-liability approach also argue that I'm the best candidate because I'm in the best position to try to reduce injuries. If health insurance companies were the ones dealing with it and they noticed injury rates going up, there isn't really anything they can do about that other than raise premiums to cover it.
If I'm the one dealing with it and notice injury rates are going up I can deal with it the same way--raise prices so my injury fund can cope with the rising injuries. But I also have the option to make changes to the gadgets such as adding extra safety features to reduce injuries. I might come out ahead going that route and then everybody wins.
If you or I did this, do you think a judge would care? No. We would be sitting in jail with a significant fine to boot.
The point is that these businesses consider this a "cost of doing business" until someone is actually put in jail.
I do like the idea of incentivizing companies to take all reasonable steps to protect people from shooting themselves in the foot, but what counts as "reasonable" is also pretty subjective, and liability for having a different opinion about what's "reasonable" seems to me to be a little capricious.
For example, the system did have a mechanism for reacting to potential collisions. The vehicle operator overrode it by pushing the gas pedal. But the jury thinks Tesla is still to blame because it didn't also program an obnoxious alarm to go off in that situation? I suppose that might have been helpful in this particular case. But exactly how far should they legally have to go in order to not be liable for someone else's stupidity?
Perhaps hiding the data like this _is_ their process.
Mixing up who is responsible for driving the car is very much Tesla's fault, starting with their dishonest marketing.
If the driver is 100% liable without autopilot, then they should be held 100% liable with autopilot.
The law should be clear and unambiguous in this regard until we remove the steering wheel entirely.
The penalties for being at fault with auto pilot on should be even higher, since it may as well be just as bad as driving while texting!
Further, this is one award for a single accident. Since then there have been 756 more Tesla Autopilot crashes. If each of those got a similar payout, that would be 80% of Tesla's current market cap. Obviously they won't all pay out that much, but if on average an Autopilot crash cost Tesla $56 million to handle (settlement + legal expenses + lost sales), that would wipe out the company's profit entirely. There's no possible justification for leaving such a massive liability in place.
It also didn't really seem to impact Tesla's decision to keep pushing Full Self Driving and Robotaxi despite it having obvious severe flaws (because Tesla sees this rollout as something holding up its stock price).
Autopilot is cruise control. When you understand this, claiming that Tesla is partially at fault here does not match the existing expectations of other driver assistance tech. Just because Tesla has the capability of disabling it doesn't mean they have to.
This all comes down to an interpretation of marketing speak. If you believe "autopilot" is misleading, you'd agree with the jury here; if you don't, you wouldn't. I'm no lawyer and don't know the full scope of requirements for autopilot-like features, but it seems that Tesla is subject to unfair treatment here, given the number of warnings you have to completely ignore and take no responsibility for. I've never seen such clear warnings on any other car with similar capabilities. I can't help but think there's maybe some politically driven bias here, and I say that as a liberal.
Happy to be convinced otherwise. I do drive a Tesla, so there's that.
In other words, if you bought the car because you kept hearing the company say "this thing drives itself", you're probably going to believe that over the same company putting a "keep your eyes on the road" popup on the screen.
Of course other companies have warnings that people ignore, but they don't have extremely successful marketing campaigns that encourage people to ignore those warnings. That's the difference here.
When you get your Tesla and attempt to turn on the features described, it has a dialog with a warning and you have to agree to understanding before proceeding. If you choose to ignore that, and all the other clearly marked and accessible warnings in the manual and not learn how to operate it, is that not on you the licensed driver? Isn't that what regulations and licensure are for?
I'm very much in favor of consumer protections around marketing, but in this case there were none that were clearly defined to my knowledge.
the Center for Science in the Public Interest filed a class-action lawsuit
The suit alleges that the marketing of the drink as a "healthful alternative" to soda is deceptive and in violation of Food and Drug Administration guidelines.
Coca-Cola dismissed the allegations as "ridiculous," on the grounds that "no consumer could reasonably be misled into thinking Vitaminwater was a healthy beverage"
One, you don't need a license to buy a non alcoholic beverage. Two, while the FDA has clear guidelines around marketing and labeling, I'm not aware of any regulatory body having clear guidelines around driver assistance marketing. If they did it wouldn't be controversial.
"Auto Pilot: a device for keeping an aircraft or other vehicle on a set course without the intervention of the pilot."
"Cruise Control: an electronic device in a motor vehicle that can be switched on to maintain a selected constant speed without the use of the accelerator."
All an auto pilot on an aircraft does is keep the plane flying in a straight line at a constant speed. It mostly doesn't do obstacle avoidance, or really anything else. Yes, you don't need intervention of the pilot, because it turns out going in a straight line in an airplane is pretty hard to screw up.
By that standard, at least, modern cruise controls are more capable than airplane autopilots. There is a widespread belief on HN, however, that people are generally very dumb and will mistake Autopilot for something more like FSD.
That is not how it’s marketed at all.
That is definitely what auto pilot means in the aeronautical and maritime sphere.
But a lot of the general public has a murky understanding of how an autopilot on a ship or a plane works. So a lot of them, probably the majority, will look at the meaning of those two words and conclude that autopilot means automatic pilot, which basically ends up being self-driving.
Sure, in a perfect world they would look up what the term means in a sphere they don't know and use it correctly, but that is not the world we live in. We do not get the general public we want; we have to live with the one we've got.
Would Boeing or John Deere be responsible for marketing language, or just the instruction manual? We know the latter is true. Is there any evidence of the former? Intuitively, I would say it's unlikely we'd blame Boeing if a pilot was misled by marketing materials. Maybe that has happened, but I haven't found anything of that sort (please share if aware).
The article says no warnings were issued before the crash.
So which warning did the driver miss?
This is the responsibility of a licensed driver. I don't know how a Mercedes works, but if I crash one because I misused a feature clearly outlined in their user manual, Mercedes is not at fault for my negligence.
Lol is this for real? No amount of warnings can waive away their gross negligence. Also, the warnings are clearly completely meaningless because they result in nothing changing if they are ignored.
> Autopilot is cruise control
You're pointing to "warnings" while simultaneously saying this? It seems a bit lacking in self-awareness to think that a warning should carry the day while calling cruise control "autopilot" is somehow irrelevant.
> I can't help but think there's maybe some politically driven bias here
Look only to yourself, Tesla driver.
That’s not true
> Do I still need to pay attention while using Autopilot?
> … Before enabling Autopilot, you must agree to “keep your hands on the steering wheel at all times” and to always “maintain control and responsibility for your vehicle.” Once engaged, Autopilot will also deliver an escalating series of visual and audio warnings, reminding you to place your hands on the wheel if insufficient torque is applied. If you repeatedly ignore these warnings, you will be locked out from using Autopilot during that trip.
> If you repeatedly ignore the inattentive driver warnings, Autosteer will be disengaged for that trip. If you receive several ‘Forced Autopilot Disengagements’ (three times for vehicles without a cabin camera and five times for vehicles with a cabin camera), Autosteer and all features that use Autosteer will be temporarily removed for approximately one week.
https://www.tesla.com/en_gb/support/autopilot
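The lockout policy quoted above is effectively a small state machine. A toy sketch of it in Python, using only the thresholds from Tesla's support page (three forced disengagements without a cabin camera, five with one); all names here are illustrative, not Tesla's actual code:

```python
from dataclasses import dataclass

@dataclass
class AutosteerPolicy:
    """Toy model of the documented strike policy: each trip where the
    inattentive-driver warnings are repeatedly ignored forces a
    disengagement; enough forced disengagements remove Autosteer
    for about a week."""
    has_cabin_camera: bool
    forced_disengagements: int = 0
    locked_out_for_week: bool = False

    @property
    def strike_limit(self) -> int:
        # Thresholds quoted from Tesla's support page.
        return 5 if self.has_cabin_camera else 3

    def ignored_warnings_this_trip(self) -> str:
        self.forced_disengagements += 1
        if self.forced_disengagements >= self.strike_limit:
            self.locked_out_for_week = True
            return "autosteer removed for ~1 week"
        return "autosteer disengaged for this trip"
```

The relevant point for the 2019 crash is that none of this escalation existed in that form at the time, which is exactly what the thread below argues about.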
What part of how autopilot is marketed do you find to be gross negligence?
I would ask, what is the existing definition of autopilot as defined by the FAA? Who is responsible when autopilot fails? That's the prior art here.
Additionally, if the NTSB failed to clearly define such definitions and allowances for marketing, is that the fault of Tesla or of the governing body?
I'm pretty neurotic about vehicle safety and I still don't think this clearly points to Tesla as being in the wrong with how they market these features. At best it's subjective.
I've owned two Teslas (now a Rivian/Porsche EV owner). Hands down, Tesla has the best cruise control technology on the market. Therein lies the problem. Musk constantly markets this as self-driving. It is NOT. Not yet, at least. His mouth is way, way, way ahead of his tech.
Heck, stopping for a red light is a "feature", even though the car is perfectly capable of recognizing one and doing so. This alone should warrant an investigation, and it's one that I, as a highly technical user, completely fell for when I first got my model 7 delivered... I ran through a red light trying out Autopilot for the first time.
I'm honestly surprised there are not more of these lawsuits. I think there's a misinterpretation of the law by those defending Tesla. The system has a lot of legalese safeguards and warnings. But the MARKETING is off. WAY OFF. And yes, users listen to marketing first.
And that ABSOLUTELY counts in a court of law. You folks would also complain about an obtuse EULA, and while this isn't completely apples to apples, Tesla absolutely engages in dangerous marketing speak around "auto pilot", eliciting a level of trust from drivers that isn't there and that they should not be encouraging.
So sorry, this isn't a political thing (and yes, disclaimer, also a liberal).
Signed... former Tesla owner waiting for "right around the corner" self driving since 2019...
Are there clear guidelines set for labeling and marketing of these features? If not, I'm not sure how you can argue such. If it was so clearly wrong it should have been outlined by regulation, no?
And it didn't warn users about this lack of capabilities until it was forced to do so. Those warnings you're talking about were added after this accident occurred as part of a mandated recall during the Biden administration.
If that's the case, this is certainly a stronger argument. I thought autosteer and FSD always had this dialog. As far as I know these dialogs go back 10 years and this was April 2019.
Even still, I find retroactive punishment of this to be dubious. If Tesla is liable to some degree, so should the NHTSA be, to the extent that anyone who makes the rules can be, for not defining this well enough to protect drivers.
The problem is for several years they actively targeted a customer base incapable of understanding the limitations of the mis-named system they advertised. (Those customers capable of understanding it were more likely to buy vehicles from brands who advertised more honestly.) While the current approach of targeting Nazi and friend-of-Nazi customers might eventually change the story (with its own risks and downsides, one imagines), for the time being it seems reasonable that Tesla bear some responsibility for the unsafe customer confusion they actively courted.
And that's exactly why the law is supposed to have a Reasonable Person Standard.
https://en.wikipedia.org/wiki/Reasonable_person
When the majority of Tesla's owners are completely unaware of the viability of autopilot even in 2025, how exactly does it make any sense to blame the marketing when someone was so entrusting in the unproven technology back in 2019? Especially given so many reports of so many people being saved by said technology in other circumstances?
I imagine these things will get better once courts can no longer find jurors who are unfamiliar with the attention-monitoring nags that Teslas are famous for.
> Within ~3 minutes of the crash, the Model S packaged sensor video, CAN‑bus, EDR, and other streams into a single “snapshot_collision_airbag-deployment.tar” file and pushed it to Tesla’s server, then deleted its local copy.
Putting aside the legal implications wrt evidence, etc — what is the ostensible justification for this functionality? To a layperson, it's as bizarre as designing a plane's black box to go ahead and delete data if it somehow makes a successful upload to the FAA cloud server. Why add complexity that reduces redundancy in this case?
1) Embedded systems typically do not allow data to grow without bound. If they were going to keep debugging data, they'd have to limit it to the last N instances or so. In this case N=0. It seems like the goal here was to send troubleshooting data, not keep it around.
2) Persisting the data may expose the driver to additional risks. Beyond the immediate risks, someone could grab the module from the junkyard and extract the data. I can appreciate devices that take steps to prevent sensitive data from falling into the hand of third parties.
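The "last N instances" pattern from point 1 is only a few lines of code. A sketch with hypothetical paths and names, where keep=0 collapses to exactly the delete-after-upload behavior described in the article:

```python
from pathlib import Path

def prune_snapshots(snapshot_dir: Path, keep: int) -> list[Path]:
    """Keep only the `keep` most recent snapshot tarballs and delete the
    rest. keep=0 reproduces 'upload, then delete everything locally'."""
    snaps = sorted(snapshot_dir.glob("snapshot_*.tar"),
                   key=lambda p: p.stat().st_mtime, reverse=True)
    deleted = snaps[keep:]
    for p in deleted:
        p.unlink()
    return deleted
```

The design question the thread is raising is simply why N was chosen to be zero for a file whose name marks it as a crash record.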
If the data can expose the driver to additional risks, then the driver can be exposed by someone stealing the vehicle and harvesting that data. Again, that can be trivially protected against using encryption which would also protect in the instance that communication was disrupted so that the tar isn't uploaded/deleted.
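Encrypting the local copy instead of deleting it is also cheap. A toy sketch: a one-time pad (a random key XORed with the data) stands in for what would realistically be AES-GCM with the key held off-vehicle by the manufacturer; nothing here is Tesla's actual design:

```python
import secrets

def encrypt_snapshot(data: bytes) -> tuple[bytes, bytes]:
    """Toy one-time pad: XOR the data with a random key of equal length.
    A real system would use authenticated encryption (e.g. AES-GCM)
    with the key escrowed off the vehicle."""
    key = secrets.token_bytes(len(data))
    ciphertext = bytes(a ^ b for a, b in zip(data, key))
    return ciphertext, key

def decrypt_snapshot(ciphertext: bytes, key: bytes) -> bytes:
    """XOR again with the same key to recover the plaintext."""
    return bytes(a ^ b for a, b in zip(ciphertext, key))
```

With the key uploaded alongside the tar and the ciphertext left on the vehicle, a junkyard scavenger gets nothing, while a court with a subpoena can compel the key and verify the local copy.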
There are obviously other reasons, but I would much rather buy a car from a company that either doesn’t collect my data or doesn’t feel the need to bury me in the media if I screw up.
If I were implementing such a system (and I have), I could see myself deleting the temporary without much thought. I would still have built a way to recreate the contents of the tarball after the fact (it's been a requirement from legal every time I've scoped such a system). Tesla not only failed to do that, but avoided disclosing that any such file was transferred in the first place so that the plaintiffs wouldn't know to request it.
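"Recreate the contents of the tarball after the fact" is practical if the archive is built deterministically from retained source streams. A sketch (hypothetical stream names, not Tesla's format) that normalizes tar metadata so identical inputs always produce identical bytes:

```python
import io
import tarfile

def build_snapshot(streams: dict[str, bytes]) -> bytes:
    """Build a deterministic tar from named data streams: sorted member
    order and zeroed timestamps/ownership, so the archive can be
    byte-for-byte regenerated later from the retained raw streams."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name in sorted(streams):
            info = tarfile.TarInfo(name=name)
            info.size = len(streams[name])
            info.mtime = 0
            info.uid = info.gid = 0
            tar.addfile(info, io.BytesIO(streams[name]))
    return buf.getvalue()
```

If the raw streams are kept (server-side or on-vehicle), deleting the intermediate tar loses nothing, which is precisely why Tesla's failure to disclose it, rather than the unlink itself, is the damning part.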
Let's not be naive, or deceptive, about malpractice by a multi-billion-dollar company owned by a multi-billionaire.
So the solution to all of this is to lock up more executives who commit fraud or lie to the public.
Recently, after a 10-year battle, two former Volkswagen executives got prison time for the Dieselgate scandal.
Wish it had come faster, but that's a good start.
https://www.lbc.co.uk/crime/volkswagen-execs-jailed-fraud-de...