But Tesla is not blameless with their marketing "Autopilot", "Full self driving", blah blah, giving people a false sense of security. I can't think of a worse problem to try and solve with AI. GPT hallucinates and gives a wrong fact. No biggie, but annoying. Tesla FSD hallucinates and runs over a child, a biggie.
Exactly. Tesla shares responsibility for marketing something deceptively that can cause serious incidents. We require toy manufacturers to put clear labels that the toy is not appropriate for certain ages, but allow Tesla to market something as dangerous as Autopilot and FSD, implying that the user is not needed.
> Tesla shares responsibility for marketing something deceptively that can cause serious incidents
The driver is criminally liable for killing the young man, and civilly liable to his estate. Tesla may be liable for money damages to the driver they misled. (Maybe the estate also has a civil claim against Tesla if it can show its marketing was grossly negligent.)
I would additionally apportion some blame to the YouTubers who cherry-pick FSD footage and make ridiculous claims about how good it is, especially those who also use defeat devices so they can do it with their hands off the wheel.
What if "Autopilot"/"FSD"/computer-assisted driving hallucinates and drivers using it run over children but at a lower rate than people who are not using computer-assisted driving run over children?
If the bar for progress in automobiles is perfection (and not just improvement), then the only answer is to use automobiles less or have stricter training standards for drivers.
I do think there needs to be strong proof that it is better before liability can be avoided. And I agree Tesla's marketing is bad and should make Tesla liable.
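What "strong proof" would even look like is just a rate comparison over comparable exposure. A minimal sketch with made-up numbers (nothing below is real Tesla or NHTSA data):

    # Hypothetical figures only -- not real Tesla or NHTSA data.
    assisted_miles = 4e9             # miles driven with the assistance feature engaged
    assisted_fatalities = 30         # fatal crashes during those miles
    unassisted_miles = 3e12          # miles driven without it
    unassisted_fatalities = 35_000   # fatal crashes during those miles

    def per_100m_miles(crashes, miles):
        return crashes / miles * 1e8

    print(per_100m_miles(assisted_fatalities, assisted_miles))      # 0.75 per 100M miles
    print(per_100m_miles(unassisted_fatalities, unassisted_miles))  # ~1.17 per 100M miles

    # Even if the assisted rate is lower in aggregate, the comparison only means
    # something if the exposure is comparable (highway vs. city, weather, driver
    # demographics), which is the hard part of the "proof".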
It comes down to the simple fact that somebody is going to hold the liability bag.
Tesla and similar companies are obviously not pushing to take on that liability, and courts/legislatures aren't particularly interested in putting it on the manufacturers. So that leaves the drivers to try to offload the liability from themselves.
Unfortunately drivers are not particularly well equipped to push that liability off themselves, so there it stays. Insurance companies are better equipped for that sort of thing, but they're only interested in the financial liability. The criminal liability is definitely not going anywhere anytime soon.
I think it depends on how the car ran over the children (or anyone, for that matter). If it was an unavoidable freak accident, then you can't really blame the car.
But if the car literally didn't see a person in front of it (where a human reasonably would 99.9999% of the time) because its cameras malfunctioned or the vision model read it as something else, then those cars should not be on the road.
<ianal> Clearly FSD is defective if it avoids plowing into motorcycles most of the time but fails in some low percentage of cases. That's a products liability lawsuit, and some attribution of the fault in this case (idk, 1-10%?).
How many times did Andrej Karpathy sit smiling uncomfortably at Elon Musk's side, offering no comment, while Musk did his loony dance on FSD? I wonder if he feels any responsibility?
No one who has driven a Tesla with "Autopilot" for 5 minutes is under the delusion that it is a level 5 autonomous driving system.
It is amazing and useful, and assuredly makes me a better driver than I am without it.
I am boggled at the scolds who think the "deceptive" branding contained _in a single word_ is sufficient to override the lived experience of driving with this fantastic (if flawed) tool.
I'd rather the article were more precisely titled "manslaughter" rather than "homicide." Even though in legal jargon homicide encompasses manslaughter, murder, and a few other things, to many people homicide is synonymous with murder.
Journalists normally report the charge as written in court documents. Washington state appears to have "vehicular homicide" defined as a specific offense, but not "vehicular manslaughter". It would not be more precise to report an incorrect name for the charged offense.
It's not an unreasonable outcome. The goal of the criminal system is to reduce crime, not random vengeance.
I can certainly think of times when I was driving and missed another car in my blind spot and almost caused an accident, or when I saw someone in a car distracted by a toddler, or similar. Those didn't lead to accidents, but they might have if someone had been less lucky.
There's a range of ways a car can kill people, ranging from driving through a red light at 100MPH, to an illegal U-turn, to making a stupid mistake, to a completely random fluke of circumstance.
On one end of the spectrum, there should be prison time, regardless of whether an accident happens. On the other end, insurance should pay damages, but I'm not sure what good the criminal system can do in terms of deterrence.
If a driver is already doing their best to be safe, but slips up, or even isn't doing their best but isn't being unreasonable, criminal penalties don't seem like the right outcome.
Elon's refusal to adopt lidar: statistically it's probably fine; anecdotally it could turn out very badly for any one person, which is a hard thing to swallow if it's you...
I'm not saying what the better option would be (because I don't know), but many people approach the problem from a very myopic point of view.
Adopting lidar would of course provide Tesla with higher-quality input for their self-driving model. But the quality of the input isn't the whole equation; you need to process it as well. In other words, adopting lidar would incur costs not only on the hardware side, but also on the software side, which of course would result in more expensive cars. More expensive cars means fewer cars sold, and fewer cars sold means less data, which in turn means less input.
Does this result in a worse model? Again, I don't know, but I do know that the issue is more complicated (and not only because of the reasons I mentioned here) than many people seem to think.
It makes a lot of sense if you're trying to churn out more profit per unit (lower costs); on the other side of it, they're at the mercy of a sour market atm.
Related video "Tesla Autopilot Crashes into Motorcycle Riders - Why?"[0],
summarized to: vision used by Tesla seems to process motorcycles differently, and may be incorrectly "assuming" the closer spaced brake lights on a motorcycle is actually a far away car.
More details on the homicide here[1], which show the crash happened during daylight hours and the bike resembles a sport bike. This is a different condition from my referenced video (night collisions with cruiser-style motorcycles), but I suspect similar incorrect assumptions by Tesla vision happened.

[0] https://www.youtube.com/watch?v=yRdzIs4FJJg
[1] https://www.king5.com/article/traffic/traffic-news/tesla-on-...
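If the brake-light-spacing theory is right, it's easy to see on paper how a monocular distance estimate could go wrong. A minimal sketch; the "assumed car width" prior and the light spacings are my own illustrative numbers, not anything known about Tesla's actual stack:

    # Illustrative only: how a narrow pair of taillights could read as a distant car.
    # The width prior and spacings are assumptions, not Tesla's actual pipeline.

    def estimated_distance_m(light_spacing_m, true_distance_m, assumed_width_m=1.8):
        angular_sep = light_spacing_m / true_distance_m   # small-angle approximation
        # If the system assumes the two lights span a full car width,
        # it back-solves distance from that (wrong) prior.
        return assumed_width_m / angular_sep

    print(estimated_distance_m(1.5, 20))    # car, 20 m ahead        -> ~24 m (roughly right)
    print(estimated_distance_m(0.25, 20))   # motorcycle, 20 m ahead -> 144 m ("far away car")

Under a prior like that, the planner would see no urgent reason to brake until far too late.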
Another theory I've heard is that the driver was holding down the accelerator to prevent phantom braking. If this is true, Tesla will likely respond fairly quickly to prove it wasn't them. So the longer they don't, the less likely this theory is.
> Another theory I've heard is that the driver was holding down the accelerator to prevent phantom braking.
Would be interesting to know how commonly this workaround is applied by Tesla owners. If it's common enough, it seems like a case where a feature that's merely unreliable becomes a safety issue due to second-order effects. Echoes of Therac-25[1].

[1] https://en.wikipedia.org/wiki/Therac-25
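To make the second-order effect concrete, here's a sketch of entirely hypothetical control logic (not Tesla's actual implementation), showing how a workaround for phantom braking can also suppress braking when it's genuinely needed:

    # Hypothetical control logic, for illustration only.
    def commanded_brake(perceived_obstacle: bool, accelerator_pressed: bool) -> float:
        if accelerator_pressed:
            return 0.0                 # driver override: the system will not brake
        return 1.0 if perceived_obstacle else 0.0

    # Phantom braking: perception falsely reports an obstacle.
    # The driver's workaround suppresses the nuisance braking...
    print(commanded_brake(perceived_obstacle=True, accelerator_pressed=True))   # 0.0
    # ...but exactly the same call is what happens when the obstacle is real,
    # whereas without the override the system would brake:
    print(commanded_brake(perceived_obstacle=True, accelerator_pressed=False))  # 1.0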
A brand new Model Y with the latest software did 4 really dangerous phantom-braking stunts. I engaged the system 5 times in total. It's called Enhanced Autopilot. I can't understand how people trust this kind of system with their lives. Maybe in the USA it works much better than elsewhere, but I will never turn it on again. For the record, I didn't buy it; I got a 3-month trial for using a referral link.
Independent of the incident itself, the article made it sound like the driver would have benefited from hiring a lawyer before making statements to police.
There have been various discussions over the years of adopting and modernizing the model of equine law, which dealt with injuries from horses & carriages, another type of autonomous / semi-autonomous vehicle.
In this case, the question to resolve is whether the people behind the vehicle share some of the blame.
An excerpt from: https://www.forbes.com/sites/rahulrazdan/2020/01/07/horses-e...
>How does the legal system adapt to new technologies? Generally, this is done by constructing new legal theories that should not conflict with older models and also have characteristics of stability and rationality. What might be the potential legal theories for Autonomous Vehicles? Here are the current candidates:
>Negligence: Today, a typical example includes impaired driving. An impaired AV?
>Negligent Entrustment of Vehicles: Here the driver was negligent, but the owner is liable because they should not have trusted the driver. Can you be found negligent if you trust your Tesla AutoDrive?
>Res Ipsa Loquitur: In this theory, (“the thing that speaks for itself”) the accident would not have occurred if not for some action from the plaintiff. By applying this logic, the plaintiff caused the accident because they became startled by an AV homing features because it was surprising.
>Product Liability and Warranty: Are there implied warranties associated when you buy an AV? Can it be proven that some AV vendors are safer than others? If so, do all AV vendors have to come to some standard ?
>At this point, it is not clear which theory may apply. However, we may gain insight from a very old body of law — Equine Law. Horses were the original autonomous vehicles and for many centuries, the court system had to deal with horse-related accidents.
And while probably not applicable to a guy texting, an earlier paper from 2012 explores an interesting aspect of horses in a frightened state, which is akin to the vehicle making its own decision in a crisis scenario:
"Of Frightened Horses and Autonomous Vehicles: Tort Law and its Assimilation of Innovations"
> GPT hallucinates and gives a wrong fact. No biggie, but annoying.
Sometimes also a biggie: https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-a...
We are just too primitive/traditional to understand.
Anyway, after a 6-month health issue, I will literally laugh at people who think there is some value in enduring pain. Children.
This situation is the opposite of that, because multiple people were knowingly reckless.
Edit: By "multiple people" I mean the driver and Elon.
FSD is not Autopilot.
FSD does a pretty good job of pedestrian and cyclist identification. It’s supposed to.
Autopilot (which is what I use most)… I dunno… I’ve never had a problem in SF, LA, or Vegas.
https://youtu.be/BFdWsJs6z4c
https://youtu.be/i5tjTACY_3Q
"Tesla will unveil a robotaxi on August 8, according to Musk" - https://www.engadget.com/tesla-will-unveil-a-robotaxi-on-apr...
In the article they do clarify that it is "vehicular homicide", which is legally the same thing as "vehicular manslaughter".
A very serious crime, though oftentimes people walk away from it with little or no jail time.
What a fool...