Readit News
Syonyk · 4 years ago
I don't understand how something this broken is allowed to operate on public roads.

If I drove like Tesla's FSD seems to based on the videos I've seen, I'd be pulled out of my car and arrested on (well founded) suspicions of "driving while hammered."

After a decade of work, it's not capable of doing much beyond "blundering around a city mostly without hitting stuff, but pay attention, because it'll try to hit the most nonsensical thing it doesn't understand around the next corner." It drives in a weirdly non-human way - I've seen videos of it failing to navigate things I'm sure I could get my 6 year old to drive through competently. Except, I don't actually let her drive a car on the road.

I'm out in a rural area, and while "staying in the lane" is perfectly functional (if there are lanes, which isn't at all the case everywhere on public roads), there's a lot of stuff I do on a regular basis that I've not seen any evidence of. If there's a cyclist on a two lane road, I typically get over well over the center line if there's no oncoming traffic to make room. If there is oncoming traffic, typically one person will slow down to allow the other car to get over, or the lanes just collectively "shift over" - the oncoming traffic edges the side of the road so I can get myself on or slightly over the centerline to leave room for the bike. And that's without considering things like trailers that are a foot or two into the road, passing a tractor with a sprayer (they don't have turn signals, so be careful where you try to pass), etc.

If they've got any of this functionality, I'd love to see it, because I've not seen anything that shows it off.

At this point, I think we can reasonably say that it's easier to land people on the moon than teach a car to drive.

mortenjorck · 4 years ago
The monorail video is jaw-dropping.

Nine versions in, I would expect ongoing challenges with things like you mention. But continued failure to even see large, flat obstacles is no longer just something that needs to be fixed – that it has persisted this long (even after killing someone, as in the case of driving into the side of a semi trailer at highway speed) is an indictment of the entire approach Tesla has been taking to FSD.

I used to think FSD was just a matter of getting enough training for the models, but this has changed my mind. When you still require a disengagement not to negotiate some kind of nuanced edge case, but to avoid driving straight into a concrete pylon, it's time to throw it all out and start over.

Animats · 4 years ago
The monorail video is jaw-dropping.

Yes. Pause the video and look at the car's screen. There's no indication on screen of the columns. A car on the other side of the row of columns is recognized, but not the columns. It's clear that Tesla has a special-purpose recognizer for "car".

The columns are a solid obstacle almost impinging into the road, one that doesn't look like a car. That's the standard Tesla fail. Most high-speed Tesla autopilot collisions have involved something that sticks out into a lane - fire truck, street sweeper, construction barrier - but didn't look like the rear end of a car.

As I've been saying for years now, the first job is to determine that the road ahead is flat enough to drive on. Then decide where you want to drive. I did the DARPA Grand Challenge 16 years ago, which was off-road, so that was the first problem to solve. Tesla has lane-following and smart cruise control, like other automakers, to which they've added some hacks to create the illusion of self-driving. But they just don't have the "verify road ahead is flat" technology.

Waymo does, and gets into far less trouble.

There's no fundamental reason LIDAR units have to be expensive. There are several approaches to flash/solid state LIDAR which could become cheap. They require custom ICs with unusual processes (InGaAs) that are very expensive when you make 10 of them, and not so bad at quantity 10,000,000. The mechanically scanned LIDAR units are now smaller and less expensive.

AlexandrB · 4 years ago
The monorail posts and planters would be trivially handled by LIDAR. Tesla's aversion to time-of-flight sensors strikes me as premature given our current level of planning/perception technology.
flutas · 4 years ago
I think a big issue with that instance (the monorail) is that they just threw out years of radar data without having comparable reliability in place with the vision-only system.

Completely mental that they are allowed to run this on public roadways.

moralestapia · 4 years ago
>The monorail video is jaw-dropping.

Who needs radar right? A few cameras are enough to discern gray concrete structures in the night, oh wait ...

avereveard · 4 years ago
> Tesla doesn't recognize a one-way street and the one-way sign in the street, and it drives towards the wrong way

This, too, means Tesla is a blind man going by memory, not an autonomous driver.

barbazoo · 4 years ago
I agree. Not sure if I'd be able to trust it again after an incident like this at least in similar situations where there are obstacles so close to the road.
moralestapia · 4 years ago
Because Elon doesn't operate under the same jurisdiction as us common people.

Try calling a random diver a 'pedo', or performing the most cynical kind of market manipulation (then laughing in the SEC's face), and your outcome will be very different.

It's Animal Farm all over again.

heavyset_go · 4 years ago
> Try calling a random diver a 'pedo', or performing the most cynical kind of market manipulation (then laughing in the SEC's face), and your outcome will be very different.

Not just any random diver, a diver who had just helped rescue the lives of several children and adults from what was an internationally known emergency.

If any of us had said what Musk said about the diver, we'd be rightfully dragged through the mud.

dnautics · 4 years ago
Actually most of us can do any or all of the following things that Elon did (write a tweet calling a diver a pedo, write a tweet claiming that a stock will go to 420 huh huh, tweet random nonsense about cryptos, criticize the SEC or FAA, etc.) with very little consequence, if any.
BrissyCoder · 4 years ago
It's been a while since I read it... What's the Animal Farm reference here?
optimiz3 · 4 years ago
> Elon doesn't operate under the same jurisdiction as us common people.

Elon doesn't get any special treatment. You can do all these things as well if you're willing to expend resources when faced with repercussions. My suspicion though is society will give you more slack if you dramatically increase access to space for your nation state or make credible progress against a global ecological problem humanity is facing.

> It's Animal Farm all over

Don't get the Animal Farm reference as we're not talking about some sort of proto-communist utopia. Everyone is playing in the same sandbox.

LeoPanthera · 4 years ago
> I don't understand how something this broken is allowed to operate on public roads.

It's important to point out that this software is currently only offered as a private beta to deliberately selected testers. Now, maybe they shouldn't be using it on public roads either, but at least it's not available to the general public.

alkonaut · 4 years ago
It's being tested with the general public in the oncoming lane, so it's effectively tested "on the public" even if at a limited scale.
atoav · 4 years ago
As long as all the people in traffic with this experiment signed this agreement as well, all is good.
mosselman · 4 years ago
Because the self driving stuff in the other Teslas works well?
serverholic · 4 years ago
We are going to need a fundamental breakthrough in AI to achieve the level of FSD that people expect. Like a convolutional neural network level breakthrough.
bobsomers · 4 years ago
I don't think that's necessarily true. There are plenty of people in the AV space that routinely drive significantly better than this and are already doing completely driverless service in some geofences.

The problem with Tesla's approach has always been that Elon wanted to sell it before he actually knew anything about how to solve it. It's led to a series of compounding errors in Tesla's approach to AVs.

The vehicle's sensor suite is woefully inadequate and its compute (yes, even with the fancy Tesla chip) is woefully underpowered... all because Elon never really pushed to figure out where the boundary of the problem was. They started with lane-keeping on highways where everything is easy and pre-sold a "full self-driving" software update with absolutely no roadmap to get there.

To overcome the poor sensor suite and anemic compute, they've boxed themselves into having to invent AGI to solve the problem. Everyone else in the space has happily chosen to spend more on sensors and compute to make the problem computationally tractable, because those things will easily get cheaper over time.

I'm fairly convinced that if Tesla wants to ship true FSD within the next decade, the necessary sensor and compute retrofit would nearly bankrupt the company. The only way out is likely to slowly move the goal posts within the legalese to make FSD just a fancy Level 2 driver assist and claim that's always what it was meant to be.

belter · 4 years ago
I suggest some simple Smoke Tests. I love that concept for software testing.

It could be applied here to test if we are getting closer or further from what humans can do.

Some examples:

Smoke Test 1: Driving in snow

Smoke Test 2: Driving in rain

Smoke Test 3: You are driving toward a red sign and you notice, 50 meters ahead, a pedestrian with headphones on. The pedestrian is distracted and looking at traffic coming from the other way. You notice from their gait and demeanor that they're probably going to cross anyway and haven't noticed you. So you instinctively ease off your speed and keep a sharp eye on their next action.

Smoke Test 4: Keep eye contact with a Dutch cyclist as they look at you across a roundabout. You realize they will cross in front of your car, so you have already inferred their intentions. Today the cyclist's bad mood means they won't raise a hand or give you any sign other than an angry face. You, however, already know they will push forward...

Smoke Test 5: A little soccer ball just rolled across your field of vision. You hit the brakes hard, as you instinctively think a child might show up any second running after it.

Failing any of these scenarios would make you fail the driving license exam, so I guess it's the minimum we should aim for. Call me back when any AI is able to even start tackling ANY of these, much less ALL of these scenarios. :-)
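For concreteness, here's a minimal sketch of how scenario smoke tests like these might be expressed, assuming a hypothetical simulation harness; run_scenario(), the scenario names, and the outcome fields are all invented for illustration and stubbed out:

```python
# A minimal sketch of scenario "smoke tests" for a driving stack. Everything
# here is hypothetical: run_scenario() stands in for a real simulator hook.
from dataclasses import dataclass
import pytest

@dataclass
class Outcome:
    collisions: int
    forced_disengagements: int
    min_gap_to_people_m: float

def run_scenario(name: str) -> Outcome:
    # Placeholder: a real harness would replay the scenario in simulation
    # against the driving stack and record what actually happened.
    raise NotImplementedError(name)

SCENARIOS = [
    "snow_on_two_lane_road",
    "heavy_rain_on_highway",
    "distracted_pedestrian_at_stop_sign",
    "assertive_cyclist_at_roundabout",
    "ball_rolls_into_street",
]

@pytest.mark.parametrize("scenario", SCENARIOS)
def test_smoke(scenario):
    outcome = run_scenario(scenario)
    assert outcome.collisions == 0
    assert outcome.forced_disengagements == 0
    # e.g. hard braking for the ball should keep a safe gap to any child
    assert outcome.min_gap_to_people_m > 1.5
```

The point isn't the specific thresholds, it's that each scenario becomes a regression gate the stack has to pass before any wider release.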

bob1029 · 4 years ago
I think we should first start with a fundamental breakthrough in our willingness as engineers to admit defeat in the face of underestimated complexity.

Once we take proper inventory of where we are at, we may find that we are still so far off the mark that we would be inclined to throw away all of our current designs and start over again from (new) first principles.

The notion that you can iteratively solve the full-self-driving problem on the current generation of cars is potentially one of the bigger scams in tech today (at least as marketed). I think a lot of people are deluding themselves about the nature of this local minimum. It is going to cost us a lot of human capital over the long haul.

Being wrong and having to start over certainly sucks really badly, but it is still better than the direction we are currently headed in.

rasz · 4 years ago
Most problems in those clips didn't require general AI; they were caused by shit vision algorithms. The car didn't spot huge-ass monorail columns ...
threeseed · 4 years ago
Cruise has videos showing them driving around for hours, under challenging conditions with no issues:

https://www.youtube.com/channel/UCP1rvCYiruh4SDHyPqcxlJw

foobarbazetc · 4 years ago
The thing is... everyone knows this.

The people writing the code, the people designing the cars, the people allowing the testing, etc etc.

What we're seeing in these videos is basically unusable and will never be cleared to drive by itself on real roads.

It's just the "the last 10% takes 90% of the time" adage but applied to a situation where you can kill the occupant of the car and/or the people outside it. And that last 10% will never be good enough in a general way.

moepstar · 4 years ago
I'm really not sure if we've seen the same videos - do you really think we're 90% there?

No, I don't think this is really more than maybe 30% of the journey to FSD - and, most likely, given what has been shown this time (and the last few times), it'll never get there.

AndrewKemendo · 4 years ago
>At this point, I think we can reasonably say that it's easier to land people on the moon than teach a car to drive.

I think this is pretty well understood to be the case. Level 5 FSD is orders of magnitude harder.

xyst · 4 years ago
And people are paying an additional $10K to be part of the development process of FSD. You would think risking your life to fine tune a broken product would at least be free, but Tesla seems to think otherwise.

With that being said, even if it comes out of beta, I would likely only use it for extended highway travel in clear conditions.

ricardobeat · 4 years ago
This is a collection of clips from thousands of hours of driving. I'm sure you could put together something much worse from human drivers…
ahahahahah · 4 years ago
There's probably barely even 1000 hours on all of youtube of FSD Beta 9.0. It looks like these clips are actually from just 3 videos which in total are less than 1 hour of video.
carlivar · 4 years ago
Yep, cars are bouncing off Seattle monorail pillars constantly.
AndrewBissell · 4 years ago
You would think that just the name "Tesla Full Self Driving Beta 9.0" would be giving people some pause here.
barbazoo · 4 years ago
I thought you were making a joke, but it really does seem to be called "Full Self Driving Beta 9.0". It's hilarious - do they think adding "Beta" makes it okay to hit stuff unless the driver intervenes instantaneously? How are they even allowed to call it "FSD" (Full Self Driving) if in fact it doesn't do that at all?
protastus · 4 years ago
> I don't understand how something this broken is allowed to operate on public roads.

Legislators are slow and ignorant about technology. There's also a perception that everyone who dies in a Tesla accident is a tech bro who defeated the software safeguards (e.g., chose to ride in the back seat in full self-driving mode).

If automotive history is any guide, scrutiny and regulation only appear after a lot of people get killed. It's not obvious what the threshold will be, with Elon Musk being louder than anyone else in this debate and early adopters being comfortable experimenting with the lives of others.

toomuchtodo · 4 years ago
> I don't understand how something this broken is allowed to operate on public roads.

Also out in a rural area. Running out to pick up lunch a few minutes ago, I saw that a young man had flipped his old pickup truck onto its side in an intersection, having hit the median for some reason. I too don't understand how humans are allowed to operate on public roads. Most of them are terrible at it. About 35k people a year die in motor vehicle incidents [1], and millions more are injured [2]. Total deaths while Tesla Autopilot was active: 7 [3].

I believe the argument is the software will improve to eventually be as good or better than humans, and I have a hard time not believing that, not because the software is good but because we are very bad in aggregate.

[1] https://www.iihs.org/topics/fatality-statistics/detail/state...

[2] https://www.cdc.gov/winnablebattles/report/motor.html

[3] https://www.tesladeaths.com/

Syonyk · 4 years ago
Could have been equipment failure on the truck - a tie rod end failing or such will create some interesting behaviors.

> I believe the argument is the software will improve to eventually be as good or better than humans, and I have a hard time not believing that.

I find it easy to believe that software won't manage to deal well with all the weird things that reality can throw at a car, because we suck at software as humans in the general case. It's just that in most environments, those failures aren't a big deal, just retry the API call.

Humans can write very good software. The Space Shuttle engineering group wrote some damned fine software. Their process looked literally nothing like the #YOLO coding that makes up most of Silicon Valley, and they were dealing with a far, far more constrained environment than a typical public road.

Self driving cars are simply the most visible display of the standard SV arrogance - that humans are nothing but a couple crappy cameras and some neural network mush, and, besides, we know code - how hard can it be? That approach to solving reality fails quite regularly.

alkonaut · 4 years ago
We accept shit drivers. We don't accept companies selling technology that calls itself "Full Self Driving" (with a beta disclaimer or not) and hits concrete pillars. This isn't hard. It's not a mathematical tradeoff of "but what if it's shit, but on average it's better (causes fewer accidents) than humans?". I don't care. I accept the current level of human driving skill. People drive tired or poorly at their own risk, and that's what makes ME accept venturing into traffic with them. They have the same physical skin in the game as I have.
jazzyjackson · 4 years ago
Cars aren’t safe and robots don’t fix it.
w0m · 4 years ago
Ding ding ding.

Self-driving cars (Tesla, which is faaaar from it, among others) will kill people. But people are shitty drivers on their own; it has to start somewhere, and Tesla is the first to get anything close to this level into the hands of the general population (kind of - the beta program is still limited in release).

emodendroket · 4 years ago
> I believe the argument is the software will improve to eventually be as good or better than humans, and I have a hard time not believing that, not because the software is good but because we are very bad in aggregate.

But logically this doesn't really follow, does it? That because humans are not capable of doing something without errors, a machine is necessarily capable of doing it better? Your argument would be more compelling if Tesla Autopilot logged anything like the number of miles, in the variety of conditions, that human drivers do. Since it doesn't, it's like saying the climate of Antarctica is more hospitable than that of British Columbia because fewer people have died this year of weather-related causes in the former than in the latter.
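To make the Antarctica point concrete, here's a rough sketch of the normalization you'd need before the comparison means anything; the US figures are approximate, and the Autopilot mileage is a placeholder I made up, which is exactly the problem:

```python
# A rough sketch of why raw death counts prove little without exposure data.
# The US figures are approximate; the Autopilot mileage is a made-up
# placeholder -- the whole point is that the comparison hinges on it.
us_deaths_per_year = 35_000          # from the parent comment
us_miles_per_year = 3.2e12           # roughly, total US vehicle miles traveled
autopilot_deaths = 7                 # from the parent comment
autopilot_miles = 1e9                # ASSUMPTION: unknown, plug in your own

human_rate = us_deaths_per_year / us_miles_per_year
ap_rate = autopilot_deaths / autopilot_miles
print(f"humans:    {human_rate * 1e8:.1f} deaths per 100M miles")
print(f"autopilot: {ap_rate * 1e8:.1f} deaths per 100M miles")
# Even then the rates aren't comparable: Autopilot miles are mostly easy
# highway miles in good conditions, while human miles are everything.
```

Whatever denominator you plug in, you'd still have to control for where and when those miles were driven before calling one of them safer.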

cs702 · 4 years ago
Many of the comments here seem a bit... unfair to me, considering that these clips were handpicked.

I watched (and fast-forwarded) through a few of the original, full-length videos from which these clips were taken. The full-length videos show long drives (in some cases, hours long) almost entirely without human intervention, under conditions explicitly meant to be difficult for self-driving vehicles.

One thing I really liked seeing in the full-length videos is that FSD 9 is much better than previous efforts at requesting human intervention in advance, with plenty of time to react, when the software is confused by upcoming road situations. The handpicked clips are exceptions.

For BETA software, FSD 9 is doing remarkably well, in my view. I mean, it's clearly NOT yet ready for wide release, but it's much closer than all previous versions of Tesla FSD I've seen before, and acceptable for a closed-to-the-public Beta program.

rhinoceraptor · 4 years ago
The fact that it 'only' requires human intervention so rarely is still incredibly dangerous. You can't ask a human to have complete focus for hours on end when they're not making any inputs, and then require them to intervene at a moment's notice. That's not how humans work.

Also, the fact that they're distributing safety critical software to the public as a 'beta' is just insanity. How many more people need to die as a result of Autopilot?

cs702 · 4 years ago
> You can't ask a human to have complete focus for hours on end when they're not making any inputs, and then require them to intervene at a moment's notice. That's not how humans work.

I agree. Everyone agrees. That's why FSD Beta 9 is closed to the public. My understanding is that only a few thousand approved drivers can use it.

> Also, the fact that they're distributing safety critical software to the public as a 'beta' is just insanity. How many more people need to die as a result of Autopilot?

FSD 9 isn't being "distributed to the public." It's a closed beta. Please don't attack a straw man.

richwater · 4 years ago
It literally doesn't matter how well it does in 90% of situations when the other 10% can injure or kill people in relatively basic scenarios like the Tweets presented. I mean the car almost ran into a concrete pillar like it wasn't even there.

> For BETA software, FSD 9 is doing remarkably well

If this were a React website, that'd be great. But it's a production $40,000, multi-ton automobile.

cs702 · 4 years ago
> ...doesn't matter how well it does in 90% of situations...

Based on the full-length videos, I'd say it's more like 99.9% or even 99.99% for FSD 9.

bob33212 · 4 years ago
People will die today because a driver was drunk, distracted, suicidal, or road-raging, or had a medical problem.

Are you OK with that, or do you think we should attempt to fix it with software? If you do think we should attempt to fix it, do you understand that software engineering is an iterative process? It gets safer over time.

CJefferson · 4 years ago
In my opinion, a self-driving car which drives me smoothly for 6 hours then drives straight into a concrete pillar without warning isn't doing "remarkably well". That should be enough to get it pulled.
throwaway-8c93 · 4 years ago
FSD occasionally requesting driver to take over in genuinely difficult situations would be completely fine.

The videos in the Twitter feed are nothing like that. The car makes potentially catastrophic blunders, like driving straight into a concrete pylon, with 100% confidence.

dillondoyle · 4 years ago
I'm with you on the second point strongly.

But IMHO it's not full self driving if it requests the driver to take over even once.

If there's an insane storm or something then it's ok for FSD to know it should disable and then you have to drive 100% control. The middle ground is more like assisted driving which doesn't seem safe according to most HN comments.

rcMgD2BwE72F · 4 years ago
You know these are extracts from multiple hours of video and it's a closed beta?
cameldrv · 4 years ago
Supposedly there are a "few thousand" FSD beta testers, and only a small fraction of them are videoing their drives and uploading them to YouTube. Beta 9 has existed for 2 days. This puts a pretty high lower bound on the serious error rate.
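A hedged back-of-envelope, with every input a rough assumption (the "<1 hour of source video" figure comes from a sibling comment, and the level-5 bar is the one people cite elsewhere in this thread):

```python
# Back-of-envelope only: all numbers are assumptions, not measurements.
serious_errors_in_clips = 10        # ASSUMPTION: rough count from the compilation
hours_of_published_video = 1.0      # per a sibling comment: "less than 1 hour"
avg_speed_mph = 25                  # ASSUMPTION: mostly city driving

observed = serious_errors_in_clips / (hours_of_published_video * avg_speed_mph)
target = 1 / 10_000                 # ~1 intervention per 10k miles, per the thread
print(f"observed: ~{observed:.2f} serious interventions per mile")
print(f"needed:   ~{target:.4f} per mile -- a gap of ~{observed / target:,.0f}x")
```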
camjohnson26 · 4 years ago
The consensus is that there are far fewer than a few thousand.
GhostVII · 4 years ago
For level 5 (which Elon promised), FSD has to go tens of thousands of miles without a collision-avoiding intervention. Right now it can't even do 10 miles.

Cherry picking is completely fine because with the limited number of beta users, even a few incidents is enough to show that FSD isn't nearly ready for level 5.

And it's pretty easy to make a car that can self drive on simple roads without a tonne of traffic, so being able to do that without intervention isn't much of an accomplishment.

lp0_on_fire · 4 years ago
People who drink and drive may very well be perfect sober drivers 99.9% of the time but that doesn't excuse the .1% of the time that they're running into things.

Also, this beta isn't "closed-to-the-public". The "public" is an active and unwilling participant in it.

alkonaut · 4 years ago
If this happened all in one month of constant driving, I'd say it isn't fit even for limited closed testing in public traffic. It should be back at the closed circuit with inflatable cars. If it was cut down from just one or a few days of driving that's horrifying.
dillondoyle · 4 years ago
At least for me it's because these highlighted errors are so egregious and so obvious to humans. Don't swerve into the giant concrete pillars.

The 99% of 'good' doesn't matter if you keep driving into giant barriers.

nikhizzle · 4 years ago
Does anyone have any insight as to why regulators allow a beta like this on the road with drivers who are not trained to specifically respond to its mistakes?

IMHO this puts everybody and their children at risk so Tesla can beta test earlier, but I would love to be corrected.

SloopJon · 4 years ago
The Twitter thread is lacking in attributions, but I saw most of these some months back after a post about scary FSD behavior. I watched with morbid curiosity, and a rising level of anger.

The guy with the white ball cap repeatedly attempted a left turn across two lanes of fast-moving traffic, with a drone providing overhead views. He seemed smart, aware, and even a bit conservative in intervening. Nonetheless, I couldn't help thinking that none of the oncoming vehicles consented to the experiments racking up YouTube views. If he doesn't jump on the brakes at the right time, he potentially causes a head-on collision with a good chance of fatalities.

Yes, I do agree that beta drivers should get extra training. However, I'm not sure I agree with the premise of beta testing an aluminum rocket on wheels on public roads in the first place.

aeternum · 4 years ago
Do people sign up to be beta testers whenever a new 15 year-old with a permit gets behind the wheel?

It's even crazier that we allow that. The cars that student drivers learn in typically do not have dual control so it's quite difficult to intervene. Every highschool has a story about how a student slammed the accelerator instead of the brake.

misiti3780 · 4 years ago
How do you know the beta drivers did not get training?
nrjames · 4 years ago
I used to work with the USG. There was "Beta" software absolutely everywhere, including on some very sensitive systems, because it was not required to go through security approval until it was out of Beta. In some instances, these applications had been in place for > 10 years. That was a number of years ago, so I hope the situation has changed. In general, the USG doesn't have sophisticated infrastructure and policy to deal with software that is in development. With Tesla, my guess is that it is not that they are allowing it to happen, but that they lack the regulatory infrastructure to prevent it from happening.
verelo · 4 years ago
USG? I'm not sure what that means, i did a google search and i assume it's not "United States Gypsum" or the University System or Georgia...?
TheParkShark · 4 years ago
Everyone and their children seems a bit hyperbolic. There hasn’t been an accident involving FSD beta 9, but I’m sure someone has been killed at the hands of a drunk driver today. I am failing to find any comments from you arguing for alcohol to be removed from shelves across the country? Why aren’t you pushing your regulators for that?
esperent · 4 years ago
Drunk driving is already illegal. Claiming that we should ban all alcohol because people use it to commit illegal acts makes about as much sense as claiming we should ban kitchen knives because people occasionally use them for stabbing.

Besides that, FSD being less dangerous than drunk driving is a terrible benchmark.

studentrob · 4 years ago
It's not hard to see the connection between FSD driving towards concrete pillars and the AP accidents on highways driving towards concrete barriers. None of this is hyperbole. FSD isn't a whole new system, it's based on AP. As Elon often says, they have been going for incremental improvements rather than the all-or-nothing approach taken by Waymo and others.
darknavi · 4 years ago
> drivers who are not trained to specifically respond to its mistakes

What would this entail? Perhaps some sort of "license" which allows the user to operate a motor vehicle?

ethbr0 · 4 years ago
Safely operating a motor vehicle via a steering wheel, accelerator, and brake (and sometimes clutch and gearshift) is a completely different skillset than monitoring an automated system in realtime.

Novel skill requirements include: interpreting FSD UI, anticipating common errors, recognizing intent behind FSD actions, & remaining vigilant even during periods of autonomous success.

bigtex · 4 years ago
Giant obnoxious flashing lights and blinking signs stating "Student Self Driving tech on board"
AlotOfReading · 4 years ago
California at least has a specific license for this called the Autonomous Vehicle Operator license. It enforces some minimal amount of training beyond simply having a regular driver's license.
simion314 · 4 years ago
Maybe people who are trained with failure videos like these, which show what can go wrong, and NOT with propaganda videos that only show the good parts and lull the driver into not paying attention.
paxys · 4 years ago
What part of the driving test covers taking over control from an automated vehicle with a split second notice?
bestouff · 4 years ago
I guess any driving instructor worth their salt would have the required skills to correct the vehicle if it attempts to do something weird. After all, FSD is a still-in-training (maybe for an unusually long time) driver.
vkou · 4 years ago
A license that allows you to operate a motor vehicle with a beta self-driving feature. It's very similar to a regular motor vehicle, but has different failure modes.

A motorcycle is similar to an automobile, but has different failure modes, and needs a special license. A >5 tonne truck is very similar to an automobile, but has different failure modes, and needs a special license. An automobile that usually drives itself, but sometimes tries to crash itself has different failure modes from an automobile that does not try to drive itself.

visarga · 4 years ago
Perhaps operating a motor vehicle is different from supervising an AI doing that same task.
bobsomers · 4 years ago
Safety drivers at proper AV companies usually go through several weeks of rigorous classroom instruction and test track driving with a professional driving instructor, including lots of practice with the real AVs on closed courses with simulated system faults, takeover situations, etc. Anecdotally I've seen trained safety drivers take over from unexpected system failures at speeds near the floor of human reaction time. They are some of the best and most attentive drivers on the road.

Average folks beta testing Tesla's software for them are woefully under-prepared to safely mitigate the risks these vehicles pose. In several cases they've meaninglessly died so that Tesla could learn a lesson everyone else saw coming months in advance.

croes · 4 years ago
They fear Elon's Twitter wrath
renewiltord · 4 years ago
Because Tesla FSD accidents don’t kill people who were not party to the agreement. That seems intuitively fair to me.
snet0 · 4 years ago
I'm sure you don't actually think this, if you think about it for a second longer. "What's wrong with drunk drivers? They're only harming themselves?"
zyang · 4 years ago
It appears Tesla FSD just ignores things it doesn't understand, which is really dangerous in a production vehicle.
heavyset_go · 4 years ago
This is what the automated Uber vehicle[1] that struck and killed a pedestrian did as well. Despite picking her up via its sensors and the ML model, it was programmed to ignore those detections.

[1] https://www.nytimes.com/2018/03/19/technology/uber-driverles...

jazzyjackson · 4 years ago
Plus the Volvo’s radar-brake was disabled so they could test the vision system.
hytdstd · 4 years ago
Yes, and it's quite disturbing. I just went on a road trip with a friend, and despite passing a few bicyclists, the car (with radar sensors) did not detect any of them.
akira2501 · 4 years ago
It doesn't really seem to rule out outcomes it has already evaluated, either. Its drive line on the screen constantly twitches through illegal or impossible maneuvers while deciding what to do. It seems to lack the ability to converge on a reasonable solution.
LeoPanthera · 4 years ago
The vehicle is production but this particular software is not, it's a private beta and not available to the general public.
barbazoo · 4 years ago
I couldn't care less what version the software is as soon as the vehicle drives around in the real world. Imagine driving with "FSD" on and hitting someone because of an issue in your "Beta" software.
jazzyjackson · 4 years ago
The general public does have the honor of being part of the beta test, in that they play the role of obstacles.
mrRandomGuy · 4 years ago
Why are you getting down-voted? There's literal videos depicting what you state. Is the Musk Fanboy Brigade behind this?
darknavi · 4 years ago
You guys can downvote?!
manmal · 4 years ago
Is FSD still operating on a frame-by-frame basis? I remember it was discussed on Autonomy day that the ideal implementation would operate on video feeds, and not just the last frame, to improve accuracy.

When you look at the dashboard visualizations of the cars‘ surroundings, the model that is built up looks quirky and inconsistent. Other cars flicker into view for one frame and disappear again; lane markings come and go. I saw a video where a car in front of the Tesla indicated, and the traffic light in the visualization (wrongly) started switching from red to green and back, in sync with the indicator blinking.

How could a car behave correctly as long as its surroundings model is so flawed? As long as the dashboard viz isn’t a perfect mirror of what’s outside, this simply cannot work.

joakleaf · 4 years ago
There also seem to be general problems with objects disappearing when they are obscured by other objects and then reappearing later, when no longer obscured.

It is ridiculous that the model doesn't keep track of objects and assume they continue at their current velocity when they become obscured. That seems like a relatively simple thing to add, depending on how they represent objects. You could even determine when objects are obscured by other objects.

In 8.x videos I noticed cars shifting and rotating a lot over fractions of a second, so it seemed like they needed a Kalman filter for objects and roads.

Objects in 9.0 look more stable, but I still see lanes, curbs, and entire intersections shifting noticeably from frame to frame. So if they added time (multiple frames) to the model, it is still not working that well.
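For what it's worth, what I'm describing is just a textbook constant-velocity Kalman track. A minimal sketch (this is not Tesla's pipeline, and all names here are made up) would look something like:

```python
# A minimal constant-velocity Kalman track: predict occluded objects forward
# and smooth noisy detections. Illustration only, not any vendor's code.
import numpy as np

class ConstantVelocityTrack:
    def __init__(self, xy, dt=0.05):
        self.x = np.array([xy[0], xy[1], 0.0, 0.0])   # state: [px, py, vx, vy]
        self.P = np.eye(4) * 10.0                      # state covariance
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt  # motion model
        self.H = np.eye(2, 4)                          # we only measure position
        self.Q = np.eye(4) * 0.1                       # process noise
        self.R = np.eye(2) * 1.0                       # measurement noise

    def predict(self):
        # Called every frame: even when the object is occluded, the track
        # coasts along at its last estimated velocity instead of vanishing.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, measured_xy):
        # Standard Kalman update when the detector does see the object,
        # which damps the frame-to-frame jitter.
        y = np.asarray(measured_xy) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

Predicting every frame and only updating when a detection arrives is exactly what lets a track survive an occlusion instead of flickering out of existence.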

EForEndeavour · 4 years ago
You've nicely articulated what was bothering me about the jittery dashboard visualizations. Why on earth is everything flickering in and out of existence, and why is the car's own planned trajectory also flickering with discontinuities?? It seems like they aren't modeling the dimension of time, thus throwing away crucial information about speed and needlessly re-fitting the model to static snapshots of dynamic scenes.

It's like the ML system needs its inferences constrained by rules like "objects don't teleport" or "acceleration is never infinite."

cptskippy · 4 years ago
I would think the flickering objects in the UI are a result of objects hovering around the confidence threshold of the model. But... I have a Model 3, and the flickering happens even when you're stationary and nothing around you is moving.
moojah · 4 years ago
This really isn't beta-grade software, as it isn't feature complete, as the failure scenarios in the video clearly show. I'd call it alpha-grade, and it has been that for a while.

It's not two weeks or whatever unrealistic timeline away from being done, as Elon has claimed forever. Perhaps two years if we're lucky, but given human and driving complexity, probably way more before even the whole of the USA is reliably supported beyond L2.

w0m · 4 years ago
>This really isn't beta grade software, as it isn't feature complete as the failure scenarios in the video clearly show.

I think it depends on what they actually are trying to accomplish. This is a beta for a glorified cruise control overhaul, not a beta for the promised RoboTaxi.

Musk/Tesla tend to talk about RoboTaxi, then slip seamlessly into and out of 'but today we have low-engagement cruise control!'.

Fair bit of hucksterism.

barbazoo · 4 years ago
> I think it depends what they actually are trying to accomplish

Good point. "Full Self Driving" in my mind paints a picture beyond "a better cruise control". But maybe they meant that and just named it wrong.

H8crilA · 4 years ago
Like Donald Trump, but for nerds:

http://elonmusk.today/

FSD would be equivalent to the Mexican border wall, I guess?

lostmsu · 4 years ago
And the tax hike on cap gains.
j7ake · 4 years ago
These spectacular fails weaken the rationalist's arguments that "as long as FSD achieves lower deaths per km driven (or any other metric) than humans" then FSD should be accepted in favor of human driving.

Even if "on aggregate" FSD performs safer (by some metric) than humans, as long as FSD continues to fail in a way that would have been easily preventable had a human been at the wheel, FSD will not be accepted into society.

the8472 · 4 years ago
I think you misunderstand the argument. It is that if, hypothetically, FSD really did save human lives on average then it should be accepted as the default mode of driving. It would be a net win in human lives after all. But the "should" can also acknowledge that people irrationally won't accept this net life-saving technology because it will redistribute the deaths in ways which they're not accustomed to. So it's as much a statement about utility as a statement about the need to convince people.

Of course this is all theoretical. If we had solid evidence that it performs better than humans in some scenarios but worse in others then we could save even more lives by only allowing it to run in those cases where it does and only do shadow-mode piloting in the others (or those who opt into lab rat mode). Enabling it by default only makes sense if we do know that it performs better on average and we do not know when it does.

paxys · 4 years ago
I don't agree with the former argument either. I'm not going to accept a self driving system unless it increases my personal safety. If the system doubles my accident rate but cuts that of drunks by a factor of 10 (thus improving the national average), it isn't irrational to not want it for myself.
cogman10 · 4 years ago
Yeah, I've brought this point up in other locations.

It does not matter that any autonomous driving tech is safer than human drivers. They MUST be perfect for the general public to accept them. The only accidents they'd be allowed to get into are ones that are beyond their control.

Algorithmic accidents, no matter how rare they are, won't be tolerated by the general public. Nobody will accept a self driving car running over a cat or rear ending a bus even if regular humans do that all day long.

The expectation for self driving cars is a perfectly attentive driver making correct decisions. Because, that's what you theoretically have. The computer's mind doesn't "wander" and it can't be distracted. There's no excuse for it to drive worse than the best human driver.

j7ake · 4 years ago
Imagine if algorithmic accidents had biases. For example, let's say a car tended to crash into children (maybe they are harder to detect with cameras), more often than adults. This type of algorithmic bias would be unacceptable no matter how safe FSD were on aggregate.

So you're right, the only bar to reach is perfection (which is impossible), because algorithmic errors have biases that will likely deviate from human biases.

matz1 · 4 years ago
>They MUST be perfect for the general public to accept them

No, they don't. It's far from perfect right now, yet it's available and you can use it right now provided you have the money to buy it.

manmal · 4 years ago
I think this is already true though for highway driving. Highways are long and tiring, and the surroundings model is easy to get right, so computers have an advantage. Most manufacturers offer a usable cruise control which is safe and can probably be active 90% of the time spent on highways. I often switch it on in my 3yo Hyundai as an extra safety measure in case the car in front of me unexpectedly brakes while I‘m not looking there. Add to that a lane keeping assistant and lane change assistant, and you don‘t need to do much.

Except for when the radar doesn’t see an obstacle in front of you, eg because the car in front of you just changed lanes. That needs to be looked out for.

cptskippy · 4 years ago
> spectacular

There was nothing spectacular about those failures, I would say because the driver was attentive and caught/corrected the car. That's not to say some of these fails could not have ended in catastrophe, but to call them spectacular is quite the exaggeration.

One of those "spectacular fails" was displaying two stop signs in the UI on top of each other while properly treating it as one stop.

Using hyperbole like this only makes people ignore or dismiss your otherwise valid point.

stsmwg · 4 years ago
One of the questions I've repeatedly had regarding FSD (and Tesla's approach in particular) is the notion of memory. While a lot of these scenarios are disturbing, I've seen people wavering on lanes, exits and attempting to turn the wrong way onto one-way streets. People have memory, however. If we go through the same confusing intersection a few times, we'll learn how to deal with that specific intersection. It seems like a connected group of FSD cars could perform that learning even faster since it could report that interaction with any car rather than driver-by-driver. Are any of the FSD implementations taking this into account?
Syonyk · 4 years ago
This has been a common assertion about Tesla's "leadership" in the field - that they can learn from all the cars, push updates, and obviously not have to experience the same issue repeatedly.

It's far from clear, in practice, if they're actually doing this. If they have, it would have to be fairly recent, because the list of "Oh, yeah, Autopilot always screws up at this highway split..." is more or less endless.

GM's Supercruise relies on fairly solid maps of the areas of operation (mostly limited access highways), so it has an understanding of "what should be there" it can work off and it seems to handle the mapped areas competently.

But the problem here is that the learning requires humans taking over, and telling the automation, "No, you're wrong." And then being able to distill that into something useful for other cars - because the human who took over may not have really done the correct thing, just the "Oh FFS, this car is being stupid, no, THAT lane!" thing.

And FSD doesn't get that kind of feedback anyway. It's only with a human in the loop that you can learn from how humans handle stuff.

stsmwg · 4 years ago
Great, thanks for that info. I'm remembering the fatal crash of a Tesla on 101 where the family said the guy driving had complained about the site of the accident before. It's interesting to know that there's at least a mental list of places like this even now. Disengagements should at least prompt a review of that interaction to try and understand why the human didn't like the driving. Though at Tesla's scale that has already become something that has to be automated itself.
AtlasBarfed · 4 years ago
RE: determining what humans did was right to take over

It's a QA department. If there is a failure hot spot, then take a bunch of known "good" QA drivers through that area. Assign strong weight to their performance/route/etc.

It's interesting reading through all this, I can see a review procedure checklist:

- show me how you take hotspot information into account

- show me how your QA department helps direct the software

- show me how your software handles the following known scenarios (kids, deer, trains, weather)

- show me how you communicate uncertainty and requests for help from the driver

- show me if there is plans for a central monitoring/manual takeover service

- show me how it handles construction

Also, construction absolutely needs to evolve convergently with self driving. Cones are... ok, but some of those people leaning on shovels need to update systems with information on what is being worked on and what is cordoned off.

specialist · 4 years ago
My thoughts exactly. I've made those mistakes myself, many times.

I guess I sort of assumed that Tesla would do three things:

- Record the IRL decisions of 100k drivers.

- Running FSD in the background, compare FSD decisions with those IRL decisions. Forward all deltas to the mothership for further analysis.

- Some kind of boid, herd behavior. If all the other cars drive around the monorail column, or going one direction on a one way roadway, to follow suit.

To your point, there should probably also be some sort of geolocated decision memory. eg When at this intersection, remember that X times we ultimately did this action.
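A minimal sketch of what I mean, assuming a hypothetical shadow-mode setup; the thresholds, the grid-cell bucketing, and every name here are invented for illustration, not anything Tesla has disclosed:

```python
# Sketch of "shadow mode + geolocated memory": flag places where the planner
# and the human driver disagree, then remember those places. Hypothetical.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Frame:
    lat: float
    lon: float
    human_steer: float      # what the driver actually did (radians)
    human_accel: float      # m/s^2
    planner_steer: float    # what the background planner would have done
    planner_accel: float

def cell(lat: float, lon: float, res: float = 1e-4) -> tuple:
    # Crude ~10 m grid cell as a stand-in for a proper geohash.
    return (round(lat / res), round(lon / res))

disagreement_hotspots: Counter = Counter()

def log_delta(frame: Frame, steer_tol=0.05, accel_tol=1.0) -> bool:
    """Flag frames where planner and human disagree meaningfully,
    and remember where they happen."""
    disagrees = (abs(frame.human_steer - frame.planner_steer) > steer_tol
                 or abs(frame.human_accel - frame.planner_accel) > accel_tol)
    if disagrees:
        disagreement_hotspots[cell(frame.lat, frame.lon)] += 1
    return disagrees

def needs_extra_caution(lat: float, lon: float, threshold: int = 5) -> bool:
    # A location with many past disagreements could lower the planner's
    # confidence or force an earlier handover to the driver.
    return disagreement_hotspots[cell(lat, lon)] >= threshold
```

The interesting part is the last step: a cell with a history of disagreements is exactly the kind of geolocated memory that could tell the planner to slow down, or hand over earlier, at that particular intersection.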

CyanLite2 · 4 years ago
It seems pretty simple for an official Tesla employee to confirm that there's a giant concrete pillar at this location. Or, worst case, deactivate FSD at this location and require human control until you're outside of the geofenced area. They could do that with a simple OTA update. GM/Ford have taken this approach.
notahacker · 4 years ago
I can see big issues in biasing a decision making algorithm too much towards average driver behaviour under past road conditions though, particularly if a lot of its existing issues are not handling novelty at all well...
plasma · 4 years ago
In one of the technical videos a Tesla engineer presented, I remember the (paraphrasing) quote that the car has no memory and sees the same intersection for the first time, every time. It sounds intentional, as part of their strategy to not rely upon maps etc.
AtlasBarfed · 4 years ago
Humans have an ... ok ... driving algorithm for unfamiliar roads. It's improved a lot with maps/directions software, but it still sucks, especially the more dense you get.

Routes people drive frequently are much more optimized: knowledge of specific road conditions like potholes, undulations, sight lines, etc.

I would like to have centrally curated AI programs for routes rather than a solve-everything adhoc program like Tesla is doing.

However, the adhoc/memoryless model will still work ok on highway miles I would guess.

What I really want is extremely safe highway driving more than an automated trip to Taco Bell.

I personally think Tesla is doing ...ok. The beta 9 is marginally better than the beta 8 from the youtubes I've seen. Neither are ready for primetime, but both are impressive technical demonstrations.

If they did a full from-scratch rewrite about three or four years ago, then this is frankly pretty amazing.

Of course with Tesla you have the fanboys (he is the technogod of the future!) and the rabid haters (someone equated him with Donald Trump, please).

A basic uncertainty lookup map would probably be a good thing. How many Tesla drivers took control in this area/section? What certainty does the software report for this area/section?

It's all a black box, google's geofencing, Tesla, once-upon-a-time Uber, GM supercruise, etc.

A Twitter account listing failures is meaningless without the bigger picture of statistics and success rates. A Twitter account of human failures would be even scarier.

H8crilA · 4 years ago
That's kind of the "selling point" of running this experiment on a non-consenting public: that it will learn over time and something working will come out of it in the end.