I think the problem we have with self-driving cars is more social than technological at this point.
Consider the hypothetical: a million self-driving cars on the road that, collectively, will have 1/10th of the fatal accidents that human drivers would have[0]. But, the ones they do have are accidents a human driver would almost certainly have avoided.
Is this something we would accept?
My guess is that no, we wouldn't. The accidents avoided don't make the news, but the accidents that do occur, especially the ones that make you say "my god, how did it screw that up?", will make the news, and our perception will be that the cars are more dangerous.
Until Waymo's cars are better than most humans in every single situation, they won't be able to win over the public perception war.
[0]I'm making those numbers up. I acknowledge that. But it's a hypothetical so give me some leeway on this!
You're hitting the nail spot on with this one for me. As a blind pedestrian, I very much feel I am in danger of falling into exactly that group of potential victims you are hinting at. Right now, I have the illusory comfort of the "Vertrauensgrundsatz", which basically tells every driver obtaining a driver's licence that they need to take special care when it comes to disabled pedestrians. Sure, one might say these new self-driving systems will "just" have to follow that same rule as well, but I am very doubtful this is technically possible. So currently, I feel like the drive to put innovation on the streets and pull money out of pockets is actively endangering me in the future. Not a very bright outlook, I must say.
Self-driving cars should be far better at recognizing a human in most orientations/poses/outfits, regardless of the car's angle or the weather. The car doesn't get distracted, can see 360° around the vehicle, has faster reaction times, and is specifically timid around pedestrians. Even the best human drivers could easily hit someone depending on conditions.
There have been suggestions that pedestrians could make cities interesting for these cars, since in a place like New York every pedestrian would no longer be afraid to just walk into traffic. The cars will just wait for a fully clear moment that may never come.
Like my driving teacher told me in lessons (to admonish my safety-focused thinking): do you really think everyone you're sharing the road with has a driving license? I guess he has a point. There are people driving out there without a license, with a suspended license, and in all kinds of other situations. Which is why we drive defensively and have to be ready for drivers who don't always follow the rules.
By your use of German, I assume that you experience a far higher driving standard than people in North America and frankly most of the world, and even that is far from perfect.
The Vertrauensgrundsatz does not account for distracted, tired and inebriated drivers.
> Sure, one might say these new self driving systems will "just" have to follow that same rule as well,
Next round of CAPTCHA: click the squares with blind pedestrians
Think about it: this is how these things are being trained, by people frantically trying to click stupid images just to get to the webpage they wanted.
I wonder if we’ll all start wearing some kind of markers that these cars can more easily pick up. A blind person’s cane could be a specific hue or be otherwise easy to detect by driverless cars. Or cars could detect the UWB signal (like AirTags use) from our phones, which is very precise.
Basically, maybe there are better ways than standard vision (which for computers is still kinda basic compared to a person) to solve the problem of hitting pedestrians.
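To sketch what that might look like: a toy fusion of a camera detection with a UWB range reading. Everything here is an assumption (the beacon types, thresholds, and the whole idea of phones or canes broadcasting UWB to cars), not a real system.

```python
# Hypothetical sketch: fuse a camera detection with a UWB range reading
# to decide whether to yield to a pedestrian. All thresholds are made up.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CameraDetection:
    confidence: float   # 0.0-1.0 pedestrian classifier score
    distance_m: float   # estimated distance from depth estimation

@dataclass
class UwbPing:
    distance_m: float   # UWB time-of-flight range (typically ~10 cm accurate)
    tag_type: str       # e.g. "phone" or "cane_beacon" (assumed tag types)

def should_yield(cam: Optional[CameraDetection], uwb: Optional[UwbPing],
                 stopping_distance_m: float) -> bool:
    """Yield if either sensor reports a pedestrian inside stopping distance.
    A beacon advertised as a blind person's cane gets an extra safety margin."""
    if cam and cam.confidence > 0.5 and cam.distance_m < stopping_distance_m:
        return True
    if uwb:
        margin = 2.0 if uwb.tag_type == "cane_beacon" else 1.0
        if uwb.distance_m < stopping_distance_m * margin:
            return True
    return False

# Example: weak camera detection, but a cane beacon 18 m ahead -> yield.
print(should_yield(CameraDetection(0.3, 25.0), UwbPing(18.0, "cane_beacon"), 15.0))
```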
This is one reason I never understood Tesla's vision-based approach. In order to be accepted, self-driving cars don't need to be just somewhat better than humans most of the time. They need to be vastly better in every situation, as you mention, to the point that they'll need every sensory advantage they can get.
I got out of a ticket once because I didn't see a "no through traffic" sign against a bright sunset. No chance that same cop gives a self-driving car a pass, nor should he.
One of my biggest concerns about Tesla's vision based approach is that it appears to be entirely about cutting costs. Nothing about it says they actually think it is superior; the cameras on a Model 3/Y are mediocre. They were mediocre the day the Model 3 was first released. If you were going to rely on a vision system in a serious way, you'd at least invest in better camera tech. Hell, Subaru EyeSight has a significantly better camera setup, last I checked, and who looks to Subaru as a technical leader?
Someone else said it here on HN, and I think they're absolutely right -- Tesla is all about vertical integration, and this is preventing them from excelling at anything other than saving pennies. A good part of why the new EV competition is doing everything better is they didn't roll their own tech. They bought packaged solutions from companies that only do one thing, but do it well.
Karpathy claimed they worked for years and could not reap the benefits from multiple sensors, however hard they tried. He seemed really convinced, and he doesn't strike me as someone who says things just to justify cost reductions, the way Elon sometimes does when he gets carried away.
I also never understood why we had to use "vision" approaches limited to the same visual spectrum that humans see. Any sort of sensor on a device is already synthetic, so why limit the spectrum you attach it to? They should use light sensors, sound sensors, GPS, everything.
They are standing firm on the vision-only approach because it is the correct approach. FSD cannot be perfected unless the Tesla team puts 100% exclusive focus on perfecting vision-based models that don't have lidar as a fallback. Tesla can add lidar back for redundancy only after vision is a fully solved problem.
Continuing with your hypothetical, even though we’d be 90% safer as a collective, the safety of the individual feels compromised: the risk of an accident is non-uniform when involving humans (depending on e.g. age, experience, safety, alertness, etc.), but becomes uniform (or at least more uniform) with an algorithm in charge.
That’s a tough thing for people to buy into.
A very good observation. Based on your comment, I think we can relax the requirement stated by OP by saying:
"Until Waymo's cars reduce any individual's chance of an accident."
So for example, suppose a Waymo car is better than humans overall, but tends to do worse than humans when there's a small bump on the road. And suppose that all humans (in a given regulator's area, e.g., California) tend to encounter such bumps at roughly the same rate (per mile driven) over their lifetime. In that case, it's probably going to be acceptable, since every individual is better off.
I don't know, maybe this is not impactful / obvious enough for people to care about?
What certainly is obvious is that the safest drivers are much safer than an average driver (does anyone know of a study that estimates this ratio?). Therefore, at the very least, the threshold for Waymo should be not the average accident rate, but the accident rate for the safest drivers.
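To make that concrete with made-up numbers (the groups and rates below are pure assumptions, not from any study): a fleet that beats the *average* human crash rate can still be worse for the safest drivers.

```python
# Toy illustration: beating the average does not mean beating every group.
human_rates_per_million_miles = {
    "safest decile": 1.0,     # assumed
    "typical driver": 4.0,    # assumed
    "riskiest decile": 12.0,  # assumed
}
average_human = sum(human_rates_per_million_miles.values()) / len(human_rates_per_million_miles)
av_rate = 2.0  # hypothetical autonomous rate, uniform for every rider

print(f"average human: {average_human:.1f}, autonomous: {av_rate:.1f}")
for group, rate in human_rates_per_million_miles.items():
    change = "worse" if av_rate > rate else "better"
    print(f"{group}: {rate:.1f} -> {av_rate:.1f} ({change} off)")
```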
I think you're mistaken. There are still massive technological challenges. I have seen nearly no evidence that current self-driving car technology is even remotely close to matching the ability of a novice human driver. Sure while the "don't crash into things" algorithms may generally be fine, these systems seem to frequently deadlock in completely mundane situations. They also seem dependent on remote operator assistance when encountering non-ideal conditions, greatly limiting their maximum speed.
If anything, legislation and social acceptance have moved faster than the technology. That's the opposite of what many of us observing this space expected 10 years ago.
At this point I'm starting to have doubts about whether the full dream of self-driving cars will even be realized within my lifetime.
I read this comment after taking a Cruise in SF, which is a self-driving cab with no driver. It basically reminds me of all the comments saying that VR has no future, written by people who have never tried VR and would get their mind blown if they tried the latest iteration. Maybe you should come to SF and try one of these self-driving cars yourself :)
A bigger problem is this: say you need to prove to the public that the autonomous car is significantly safer, and you do an apples-to-apples comparison between a hypothetical Level 4/5 car and a well-designed new Level 2 electric car like a Volvo C40 or a BMW i4.
The modern Level 2 car is already today at below 1 fatality per billion vehicle miles travelled (VMT). The autonomous car then needs to be below 0.1 fatalities per billion VMT. Meaning that if you have 1 million vehicles of your make deployed, they each need to have driven 30,000-40,000 miles autonomously before you have enough statistics!
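To spell out the arithmetic behind that figure (a rough sketch; treating roughly 3 expected fatalities as the minimum for a meaningful estimate is my assumption):

```python
# Back-of-the-envelope version of the figure above (assumed numbers).
target_rate = 0.1 / 1e9        # fatalities per mile (0.1 per billion VMT)
expected_events_needed = 3     # rough threshold for a meaningful estimate
fleet_size = 1_000_000

total_miles_needed = expected_events_needed / target_rate   # 30 billion miles
miles_per_vehicle = total_miles_needed / fleet_size
print(f"{total_miles_needed:.0f} total miles, {miles_per_vehicle:,.0f} per vehicle")
```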
That means proving the safety of a Level 4/5 autonomous system is extremely expensive and slow, and requires significant public adoption before it's proven to be safe. The consequence is that, assuming proven safety is necessary before public adoption, it becomes impossible to prove safety.
Another point is that OTA upgrades for autonomy become entirely pointless, as you'll be polluting your statistics if you change the code more frequently than every ~3 years.
> The modern Level 2 car is already today at below 1 fatality per billion vehicle miles travelled (VMT)
Does this statistic count fatalities to people outside the car? I ask because (in the US, at least) car safety ratings don't take those into consideration.
To me, self-driving seems like a band-aid fix for terrible car infrastructure. Driving a car is already one of the most dangerous things the average American can do, due in part to larger, heavier vehicles, high speeds in residential areas, etc. Even if you had a "perfect" driver, that doesn't prevent someone from ramming into you.
> one of the most dangerous things the average American can do
About comparable to the risk of falling. Lower risk than suicide. Or death by opioid overdose. And of course, the most dangerous thing most Americans do, by a huge huge huge margin, is overeat and lounge on the couch.
I agree it's a social problem, but IMHO it's a rather different social problem: current cars, roads and car culture are adapted to human drivers, and AI is expected to be able to integrate into that.
What if we made cars and rules that are adapted to AI cars and ignore human drivers? E.g. ban human drivers from some roads, allow AI cars with designs that exploit AI advantages (e.g. much better reaction time) but do not require or even allow human backup (enabling us to put the passengers in a secured shell), etc. I suspect we could then reach a 1/20th rate today.
Self-driving trains are much easier to implement, yet why are there not that many systems capable of doing that?
Many newer metro lines are GoA 2 or 3, theoretically capable of running autonomously, but they always require a driver in the loop.
My partial answer is: making an extremely reliable system is hard. If someone wrote a deadly bug that only manifests in a rare corner case, it can still kill people. And it's quite hard to prove there are no such bugs.
We don’t have self-driving trains because there’s far, far less incentive.
You only need 1 or 2 drivers for a huge train carrying a lot of people/stuff. Optimizing away the driving barely reduces the cost of operating the train as a whole.
With cars, everyone drives themselves. There’s a lot more driving happening and so if you can automate that away, you create more value.
> Until Waymo's cars are better than most humans in every single situation, they won't be able to win over the public perception war.
The current situation is basically the opposite: Waymo's cars are better than humans in almost zero situations. It's hard to gain my trust when your car can barely drive in a drizzle.
It is a valid point but the financial incentives are so big that some jurisdictions will allow it. In fact they already do allow these autonomous systems on public roads. That is going to continue to expand and since the financial incentives are huge even when deaths happen the governments will continue to allow it.
And in fact some regulators fully understand the tradeoffs and will prefer autonomy for the greater good of the public. An example of this is the Boeing 737 Max: those crashes wouldn't have happened if there were no autopilot systems. But regulators are not suggesting that all autonomous systems on planes be turned off, because of the safety and financial advantages of keeping them in place even though they are obviously not perfect.
I think this is more true than not. But also underestimates the technical problem.
Airplanes are not fully autonomous, even in the instrument flight rules system which is highly standardized. You don't have non-standard or non-predictable things. In instrument conditions, only instrument planes and pilots exist. While portions of segments are automated, transitions between segments and phases of flight aren't. It's ripe for automation, yet isn't fully automated.
The automobile environment has more objects thus more density of complexity, more non-standard and non-predictable actors like non-autonomous vehicles along with bicycles, mopeds, pedestrians, etc.
Yup, exactly. "There was a 1% national accident rate before autopilot but now it is 0.5%, aren't things great?" Not really, because my personal accident risk just went up from 0.1% to 0.5%.
It is not only a social but also a legal issue. If a human kills someone in an easily avoidable accident while breaking the traffic laws, he may go to jail and/or lose his driving privileges for a while. If an artificial neural network does the same, should it lose driving privileges for a while? Should someone go to jail?
To double down further on the social side of this, public transit is largely getting there faster than point to point driving. In that many trains and such are already largely "hands off the wheels" for operation.
Relatedly, another "problem" with "self-driving" cars is that we want all of the convenience and ease of use without adjusting liability and ownership considerations. Consider: if Waymo gets to the point where they have a self-driving car that you have to have a subscription to use, do you own the car? Are you liable for any accidents it has?
To lean in on that hypothetical: I'd imagine a lot of families will use self-driving cars to send kids to school. It's effectively a bus that terminates at your house. Who is liable for a mistake if the operation of it is completely remote?
> public transit is largely getting there faster than point to point driving
Aside from some niche cases in very dense cities, is this generally true anywhere? I've visited a lot of cities with various levels of public transit, and I can't think of many where it was faster. More convenient sometimes, sure, cheaper, yep, but faster? Not often.
> Consider: if Waymo gets to the point where they have a self-driving car that you have to have a subscription to use, do you own the car? Are you liable for any accidents it has?
I'd imagine the company would take on liability, as long as humans can't drive the vehicle or they aren't driving when the accident occurred. Mercedes already got the ball rolling on this [1].
[1] https://www.roadandtrack.com/news/a39481699/what-happens-if-...
> But, the ones they do have are accidents a human driver would almost certainly have avoided.
> Is this something we would accept?
No, and that's good.
Human drivers behave mostly like humans, even the worst ones. We all have millennia of evolution fine-tuned to recognize human behavior based on the most subtle of cues. So human bad driving is recognizable and thus far more avoidable.
AI drivers are effectively an alien species who make errors that make no sense whatsoever to a human mind; for all practical purposes, they appear to behave completely randomly.
Can this be compared to seat belts? If seat belts needed to prevent injury in every single situation, we would claim that they are ineffective, because there is always a chance that a seat belt causes harm, say by keeping someone strapped in unconscious in a burning car instead of letting them be thrown clear of it. Is the goal here to be 100% reliable with 0 deaths, or to achieve better stats than 46k deaths per year in the USA? This now becomes more of a philosophical question.
This is one reason for going to vision-only automatic driving systems like Tesla is doing. The system is more likely to fail when a human would also have had a difficult time: heavy snow, blinding sun, etc. Strange failures due to radar, lidar, and other sensors will not be understood or accepted.
That would be a reasonable argument only if Tesla's image processing were as good as a human brain (it's not) and if their cameras were reasonably comparable to human eyes (they're not). To take the argument to an absurd extreme, you can't cover a car in 240p webcams from 2003 and expect good driving performance.
Moreover, other players also have cars covered in cameras. If anyone thought vision only was the best path forward because they couldn't figure out sensor fusion, they'd already have done it and saved the BOM cost.
For what it's worth, my experience has been that cameras are one of the more problematic sensor systems overall. Vendor software is garbage, any particular tuning is finicky in extreme conditions, you have to clean the damn things, camera streams take up lots of bandwidth, etc.
> will have 1/10th of the fatal accidents that human drivers would have[0].
If this becomes true then society would have an even bigger problem with organ donations - available supply would plummet. Some of the largest sources of organ donations are from car crash victims.
I always hear people cite this statistic and I’m never sure what they’re getting at with it.
Surely we’re in agreement that this is a good thing, right? That a shortage of organ donors due to the donors not dying is a good thing. Sometimes it feels like people are suggesting otherwise and I can’t fathom the logic.
My semi-serious suggestion is self-driving cars should be painted bright orange with big squishy bumpers and a maximum 20 mph speed limit. They would still be perfectly useful as taxis in big cities but it would greatly limit the damage they could do to anyone.
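For a rough sense of why a speed cap helps (simple physics; the vehicle mass and braking deceleration are just assumed values for illustration):

```python
# Kinetic energy and braking distance both scale with v^2, so halving speed
# roughly quarters both. Numbers are illustrative, not from any spec.
def mph_to_ms(mph: float) -> float:
    return mph * 0.44704

def kinetic_energy_kj(mass_kg: float, speed_mph: float) -> float:
    v = mph_to_ms(speed_mph)
    return 0.5 * mass_kg * v * v / 1000.0

def braking_distance_m(speed_mph: float, decel_ms2: float = 7.0) -> float:
    v = mph_to_ms(speed_mph)
    return v * v / (2 * decel_ms2)

for mph in (20, 40):
    print(mph, "mph:",
          round(kinetic_energy_kj(1800, mph), 1), "kJ,",
          round(braking_distance_m(mph), 1), "m to stop")
```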
You joke, but this isn't a terrible suggestion. Mercedes has the best answer, in my opinion. Speed limited, carefully controlled environments, full liability resting on the manufacturer. I really don't like how Tesla is foisting off all the beta testing on their customers, pushing that risk to them and other people on (or near) the road, and then also giving them 100% of the liability too.
The incentives are misaligned. Tesla should want to make the software better not just to attract some more customers, but because if they screw it up they're going to be on the hook.
I really don't think so. And we are not anywhere near that situation either. We have obvious crashes in cars that are nowhere near being provably "safer than human", and then we have super-confident claims of safety from the manufacturer.
Piggybacking on that sentiment, car ownership becomes less an expression of individuality when cars are driving themselves. No point in owning an expensive sports car when it's the one doing the driving. Ride-sharing and fractional ownership start to make more sense than owning the car outright.
> I think the problem we have with self-driving cars is more social than technological at this point.
> it's a hypothetical so give me some leeway on this!
IMO you should not base (and broadcast) your opinions about safety on hypothetical statistics. I don't even believe it's true that overall statistics show self-driving is safer than humans. IIRC prior reports showed that companies were selectively picking statistics about safety.
If, 10% of the time, they got into accidents humans would have avoided, we wouldn't be where we are today. I can't imagine any scenarios where these cars get in accidents that a human would have certainly avoided. You also say avoided accidents don't make the news, but I'm pretty sure footage of them avoiding accidents that humans would have had no chance of avoiding will be a major part of their marketing.
> I can't imagine any scenarios where these cars get in accidents that a human would have certainly avoided.
Then you've not been following the space. The one that immediately comes to mind is the Tesla that slammed into the side of the semi truck because it was painted blue like the sky.
If you follow the events with self-driving accidents, most of them are nonsensical crashes that no human would ever have done.
As a software engineer myself, I think they will always lose this argument. I have seen so many smart systems falter in some weird way that I would never trust a software system completely. I drive an EV with an AI-based system that automatically throttles the car, etc. But to trust my life and my family to this system (or any other)? No thank you.
This is the gist of it, but you have to ask yourself why it's just the accepted wisdom that it's okay to have the massive level of failure we have now.
> I think I'm relatively safe from cars on the sidewalk. Yet with fsd cars I'm not so sure anymore.
Not even true. I've been hit by a car (with human driver) on a sidewalk. Driveways cross sidewalks, and drivers seem particularly inattentive when crossing them. When you're walking on the sidewalk, do you ever walk somewhere that's more than one block away? If so, you're going to have to cross a street anyway.
This is my view as well. (I did self-driving related research, like platooning and taxi scheduling/allocation.) Waymo, Baidu, Didi, and others are the names that come to mind for places that produce research, produce data, and apply their technology in real-world practice.
My impression of Tesla is mostly shaped from (1) nonparticipation in the research community, (2) a very early "mission accomplished" declaration by calling their cars fully self driving, and (3) a longterm refusal to use LIDAR.
I don't consider Tesla a player in self-driving (edit: self-driving research), but I don't think Tesla does either. There's no reason for them to try to "win" the space.
From Tesla's side, it makes more sense to continue on their current tack: Applying results from existing research. I think Tesla's strategy is to be the highest bidder when it comes time for Waymo (or Didi, etc) to sell their tech.
They literally sell a package to purchasers of cars called Full Self-Driving, and Autopilot. They claim their competitive advantage is all the camera data from the miles driven. They put special boards in cars for it. They absolutely consider themselves a player.
I like the level 5 or bust approach taken by Waymo and Cruise. Anything in the middle (like what Tesla is doing) is IMO more dangerous than useful. The whole "the car is self-driving but you must also pay attention and have your hands on the wheel at all times" thing is idiotic.
Humans are notoriously bad at just paying attention and not being in charge. Those few seconds their actual attention is needed are critical.
I do appreciate that my car can do full distance control and assist if I am drifting, but it doesn't control itself, so I can never disengage. Personally I feel that this is wonderful and should be the limit. Anything past that should just be fully autonomous. Otherwise you're asking for trouble.
I suspect the below-"Level 5" driving systems will become more of an "augmented driving". I've driven newer vehicles with automatic lane centering, pedestrian detection, etc., and they don't really seem like they're even doing anything; you still feel like you're the one driving, except that it's more precise, with the occasional interruption by the car when it perceives a risk of collision.
These augmented systems will probably reduce the risk of accidents so greatly that the value proposition for Level 5 driving systems just won't be there.
> Anything in the middle (like what Tesla is doing) is IMO more dangerous than useful.
Even if a hypothetical, future Tesla FSD sometimes crashes in ways that could be prevented had the driver paid attention, it could still be statistically safer than a fully human driver (i.e., the number of FSD crashes even if left unattended < the number of crashes by humans driving).
To clarify, I'm not talking about the current state of FSD, I'm talking about a hypothetical, future Level 3.
> Anything in the middle (like what Tesla is doing) is IMO more dangerous than useful
Intuitively I would have agreed with you, except Tesla has been doing it for years and their cars are statistically safer by every metric (fatalities, incidents, etc.).
> They already have robotaxis in SF and are expanding into Arizona and Texas by the end of this year.
Saying they have taxis in SF is a bit hyperbolic. I have an invite to that program and it's
- only after 10pm
- using such an odd slice of the city that I not only can't get picked up, it doesn't GO anywhere I go.
I would love to use either of these programs both for the novelty and because I think Autonomous driving is great, but I literally can't use the program I do have access to.
What does 'win' mean here? It seems like being able to pass on the costs of fleet management, insurance, gas, parking/storage, etc to drivers (the way taxis/ride sharing apps currently do) will always be cheaper than maintaining it yourself, even if you save on the driver fees.
At the rate things are going right now, Waymo will win when Tesla throws in the towel on developing in-house and licenses Waymo's tech in order to finally deliver on full self-driving.
This would imply that Uber/Lyft drivers on average are losing money by being on the service, which is obviously not the case. Having a large fleet of driverless taxis, even if you have to maintain them yourself, will be a very profitable business. There are other potential revenue sources as well, like licensing the tech to car manufacturers.
Do self-driving cars model other cars, learn from them, or both? Say there's a new obstruction (traffic cones around a maintenance crew). When deciding what to do, e.g. go left or right, does the self-driver observe what the preceding cars did?
Do self-drivers remember prior decisions? On the daily commute, there's a speed bump, pot hole, or whatever. Does the car anticipate the remembered road feature? Like maybe "Oh, last time I saw this pot hole, I had to swerve right. So this time I'm going to tack right a little earlier."
Sorry for the noob questions. Imagining a ubiquitous self-driving future, I keep thinking of boids and uncoordinated collective action, like flocking and murmurations. Wouldn't it be cool if cars did similar stuff?
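As a toy sketch of the "remembered road feature" idea (the grid size, report format, and advice are all invented; real systems, if they do this at all, would work very differently):

```python
# Hypothetical shared "road memory": cars report hazards keyed by a coarse
# grid cell, and later cars query the cell they are about to enter.
from collections import defaultdict

GRID = 0.0005  # roughly 50 m cells in latitude/longitude terms (assumed)

def cell(lat: float, lon: float) -> tuple:
    return (round(lat / GRID), round(lon / GRID))

hazard_map = defaultdict(list)  # cell -> list of (hazard, suggested action)

def report(lat, lon, hazard, action):
    hazard_map[cell(lat, lon)].append((hazard, action))

def advise(lat, lon):
    return hazard_map.get(cell(lat, lon), [])

# One car hits a pothole and swerves right; a later car gets the hint early.
report(37.77490, -122.41940, "pothole", "bias right, slow to 20 mph")
print(advise(37.77492, -122.41941))
```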
Any visibility or opinion into AutoX? I hadn't really heard much news about them until that Electrek[1] article the other day that presents AutoX as being far ahead on disengagements relative to Waymo (who I thought was the leader in the space).
[1] https://electrek.co/2022/12/14/tesla-full-self-driving-data-...
I did not spend a few years in self-driving, so correct me if I'm wrong, but how would Waymo win when they can only work with HD maps (aka, nearly nowhere) while Tesla FSD works nearly perfectly now even on dirt roads (aka anywhere) with no map at all?
Waymo has proven driverless operations in Chandler, then Downtown Phoenix, then San Francisco. Truly driverless, no people in car. They’ve demonstrated driverless capability and the ability to expand to new regions, even if it means taking HD maps.
Tesla has not proven any reliable driverless operation, anywhere. They have removed hardware from their cars (radar, uss) and have not shown any meaningful progress in the past ~5 years nor any willingness to change from their “vision only, big data” strategy.
If things continue on the current trajectory Waymo will likely be operating in all major US cities and metros in a few years while Tesla’s self driving offering will probably be forcibly renamed by regulation and end in a class action lawsuit.
Basically, Waymo has proven N and N+1 capability, meanwhile Tesla has yet to prove N, and has lied to consumers and actually reduced their chances at achieving N due to cost cutting measures.
Collecting HD Maps is an 80/20 problem (I have a patent in a subfield of this, for better or worse lol) - you can get a ton of value from a small set of focused areas. If you can solve greater metro areas (no dirt roads?), you've got a real solution.
I also think that the mapping and routing component matters a lot less than how good your collision and realtime avoidance systems are. And in that arena, Tesla is an unmitigated disaster.
This is something that seems really important, and is definitely a significant effort, but actually is inconsequential.
Think about a section of lightly used suburban road. The amount of work that went into making it was immense. A crew of road workers using expensive machines and large amounts of material were required to make it, and are required for its maintenance. Don't forget the surveyors and engineers who made a highly detailed map and plans in the first place! (Though that map format isn't useful to self-driving cars.)
Also consider the sheer number of cars that drive that patch in a day. One car every few minutes adds up over hours, days, months, years.
So, yeah they have to drive a mapping car down the street a bunch of times to expand their coverage area. However this is insignificant compared to the effort that goes into our transportation infrastructure already.
This is a very good question. Elon is dumping on LIDAR and 3D high resolution mapping.
That may be a smokescreen. Tesla collects a lot of data from their cars. What they do not have are these supposedly superfluous high resolution maps. If Tesla's camera-sourced data proves to be insufficient, that will have been a very bad gamble, in addition to whether camera data is sufficient for real time decisions.
When they pay off, bold gambles make businessmen look smart. That's why nearly all business hagiographies are the product of survivorship bias. Just like your buddy who won in Vegas.
We will see this risk-taking play out in Starship and Starlink, too.
The cars themselves have the hardware necessary to make an HD map.
That means that Tesla could make an HD map covering 95% of miles driven in the USA within a week with their fleet of users. And next week they could make an updated version of the same map.
So, making and updating an HD map isn't an issue.
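Here's a rough sketch of what such crowd-sourced map building could look like (an assumed pipeline, not Tesla's or anyone's actual one; the tile IDs and fields are invented):

```python
# Sketch of crowd-sourced map building: each car uploads observations per
# tile; the map keeps a running estimate plus a freshness timestamp so
# stale tiles can be prioritized for re-driving.
import time
from collections import defaultdict

class TileEstimate:
    def __init__(self):
        self.lane_width_sum = 0.0
        self.count = 0
        self.last_seen = 0.0

    def update(self, lane_width_m: float):
        self.lane_width_sum += lane_width_m
        self.count += 1
        self.last_seen = time.time()

    @property
    def lane_width(self):
        return self.lane_width_sum / self.count if self.count else None

fleet_map = defaultdict(TileEstimate)

def ingest(tile_id: str, lane_width_m: float):
    fleet_map[tile_id].update(lane_width_m)

def stale_tiles(max_age_s: float):
    now = time.time()
    return [t for t, e in fleet_map.items() if now - e.last_seen > max_age_s]

ingest("sf_101_n_0042", 3.62)
ingest("sf_101_n_0042", 3.58)
print(fleet_map["sf_101_n_0042"].lane_width)  # ~3.6 m after two reports
```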
Waymo's velocity seems to have slowed dramatically since 2015, when they first did fully driverless rides on public roads and started deploying to multiple regions.
Now, 2 billion dollars and 7 years later, they are still only in a handful of small regions with limited numbers of vehicles.
That tells me there is still some fundamental issue that is hard to solve. I wonder why they aren't more transparent and tell us what that issue is that they've been battling for 7 years?
The fundamental problem which is impossible to solve is game theory.
Suppose the collision avoidance is perfect. Now put it on the roads of NYC, or New Delhi.
There are a lot of people who will just walk in front of a car going 40mph, if they know for sure it will brake hard and stop.
The problem isn't technology, it's humanity.
The solution is to change the rules of the road, have protected lanes for self-driving buses and taxis and cars, and enforcement.
Let vehicles that can take full advantage of communicating with each other and the road go fast and use the infrastructure to maximum theoretical capacity, without having to worry about dumb human drivers.
> if they know for sure it will brake hard and stop.
I think the solution here is to issue tickets to those people. You could probably ticket them already under some statute like "endangering road users" or something.
With self driving cars having always on cameras, you only need to ticket each idiot once or twice, and they'll stop doing it.
We already punish people who run around on the runway of airports - seems no different.
> The solution is to change the rules of the road, have protected lanes for self-driving buses and taxis and cars, and enforcement.
We reached this conclusion about 150 years ago and came up with rails. In addition, you get cheap electricity so reliably that modern trains don't even bother having batteries.
Yes, rail lines as they are deployed now might not be the ideal future proof solution, but something similar which allows 'cars' to go off track for the last mile but otherwise not incur wear and tear on your own tires and engine/transmission for the long haul might be a practical idea.
Did you mean offered a service to the general public? Because Google's older self-driving car drove that one blind guy to the Taco Bell drive-thru more than 10 years ago. And they had been driving Googlers back and forth from their homes and offices for years prior.
I suspect that the issue is with cars being so cautious that they just stop as people keep walking, or at best move forward in a herky-jerky way. In NYC, a car like that wouldn't get anywhere, as the pedestrians just won't stop. The pedestrians stop when they see that the driver isn't going to stop and that they're going to get hit.
So much of urban pedestrian-driver interactions depend on confirming eye contact and determining intent from body language.
I don't see how an automated car with no driver can deal with the "are they crossing or not?" question that you get every few minutes while driving in a city. Both because body language is a hard problem to get right, and because there's a lot of non-verbal communication that a driverless car doesn't have a way of participating in.
Regulation is one of the major factors slowing things down. You need more and more test cases to achieve higher reliability, but data collection at scale needs approvals, and regulators want to see that it's reliable enough to approve. This chicken-and-egg problem is not easy to solve, since at its heart it's a trust problem. Tesla was an exception because they chose to put all the responsibility on the drivers by making it technically ADAS but marketing it as "full self-driving".
It's clear which bits they haven't been focussing on... There are multiple videos on youtube of rides (some where it has gone wrong) and the user experience is terrible. The car has a robotic voice which plays a long and annoying unskippable message with every ride, and 'Rider support' sounding like they are following a strict script with no ability to be helpful or fix the problem [1]...
Imagine if every time you started your car, a robotic voice said "Welcome to your Ford Pickup XYZ model. Please ensure your seatbelts are fastened. If you are too hot, you can adjust the climate with the climate controls. If you want to lower the windows, please don't put your arms out. etc etc. Have a nice ride today in your Ford(tm) Pickup(tm).".
I realize this is essentially a PR piece, but still, it makes me feel much better about the potential future of automated driving than what Tesla is doing. If I owned TSLA right now I'd sell.
A canned test should not make you feel better. This could be the first time they actually passed the test. They might still fail with a cardboard cutout half the size.
Not to defend TSLA, but I don’t think self driving is the reason why Tesla cars sell, it is more about being arguably the best mass produced EV out there.
> more about being arguably the best mass produced EV out there.
In 2018 this would be a really good argument. What does Tesla do better now, compared to another modern purpose built EV, for example a Ford Mustang Mach E, or a Hyundai Ioniq 5, Kia EV6, etc?
I struggle to identify any particular feature I would say they are better at, much less something that would make it the best mass produced EV. I say this as a two-time Model 3 owner, having just bought the most recent one two weeks ago. I don't quite have buyers remorse yet, but it's nagging at me that I may have just made a foolish choice for the wrong reasons.
Will I ever be able to have self-driving on a personal vehicle, or is this just centrally automating the work of a taxi driver? IMHO, these are two very different things for the consumer. This is why I actually prefer the Tesla approach, or actually Comma AI. (If it can be made to work robustly…)
It would suck to be in a world where the only way to do self-driving is indistinguishable from the Uber or taxi service we already have (and likely wouldn’t even be cheaper if it’s proprietary to one or two mega-companies who can extract nearly all the productivity surplus from this as monopoly rents).
I do not think the outcome is only Uber / Lyft but with AI, but if that is the outcome I still think it would be a win. Today supply of Uber / Lyft in my area at off hours is spotty, and that makes it unreliable. I have gotten stuck walking home 2+ miles multiple times in the last year because I couldn't get a ride at any price. That's not a problem in Manhattan, but not everywhere is Manhattan. Driverless cars would be on 24/7/365 so wouldn't have that problem. The more reliable these taxi services are, the more viable it is for people to get rid of their cars.
I also expect long term self driving cars will be safer than humans, and as a person that primarily walks around instead of driving that's a benefit to me even if I'm not in the car.
I recall reading an analysis of the self driving safety stats a few years back that concluded, if you counted incidents requiring human intervention as if they were accidents, then based on total miles driven, humans far outperformed self driving, by like an order of magnitude. In other words, companies (waymo included) were sugar coating their stats for good PR, though Waymo was still at the front of the pack.
Some other comments on this thread suggest that self-driving is only perceived to be unsafe, but is statistically much more safe. Unless the stats improved significantly since then (and what specifically achieved that?), and an independent analysis can agree, I'm not trusting corporate PR.
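As a toy version of that adjustment, with invented rates (none of these are Waymo's, Tesla's, or anyone's real figures):

```python
# Illustrative only: how counting interventions as "would-be accidents"
# changes the comparison against human drivers. All numbers are made up.
human_crashes_per_million_miles = 2.0          # assumed baseline
av_reported_crashes_per_million_miles = 0.5    # what a company might report
av_interventions_per_million_miles = 40.0      # disengagements, mostly benign

pessimistic = av_reported_crashes_per_million_miles + av_interventions_per_million_miles
print("reported:", av_reported_crashes_per_million_miles,
      "| worst case (every intervention = crash):", pessimistic,
      "| human:", human_crashes_per_million_miles)
# The real figure lies somewhere in between, depending on how many
# interventions actually prevented a collision.
```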
>...if you counted incidents requiring human intervention as if they were accidents...
The true number is somewhere between this worst case and the numbers Waymo presents.
Most driver interventions that I've seen on video were not narrowly missed accidents; they're the car being confused by road construction, a double parked driver, pedestrians spilling onto a street etc.
I have also seen (for Tesla at least), videos of driver interventions that definitely would have been accidents if the driver hadn't stepped in.
I definitely agree with your point that I'd love to see more in-depth figures and the CA DMV might release those more detailed figures? I'm not sure.
I almost feel like AI-enabled vehicles need a special light on the exterior to indicate to other drivers that this thing is a robot. As I'm driving, I would probably learn to approach these vehicles differently from the average normal driver.
This sounds like an excellent idea. There is a lingering issue about drivers not paying attention and even sleeping while their vehicle is in self-driving mode, that's contrary to the requirement that the driver is actively supervising.
I don't think it's a realistic expectation that drivers are always fully attentive and able to respond in time to a crash situation when the self-driving mode is active. Such a light would give me the heads up that I should stay on my toes.
It's not necessarily a "stressor" if self-driving cars behave somewhat consistently; it's just another signal you can use when making decisions on the road.
"Will this car suddenly decide to change lanes?", "Not likely, it's self driving and there aren't exits or major changes in traffic in front of it".
There will also be quirks in any automatic system. Learning and then predicting these will make the streets safer and more comfortable. For example, perhaps self-driving cars are overly cautious around some local crosswalks. If I'm behind one of these things in the winter, I might be aware I should leave even more extra room for the sudden slowdowns that other cars would be less likely to make. If I'm smart this won't be the difference between an accident or not, but it will make for a smoother ride.
My larger question is: will the proliferation of AI cars increase or decrease net traffic flow? People seem to be driving larger and larger cars slower and slower anyway, so maybe this target is achievable, but this all worries me.
One reason to introduce self-driving cars is to hopefully reduce the number of SUVs on city roads, which are only driven by people because they’re “safer and protect me”. At best they’re an absolute nuisance to every other car and pedestrian, and at worst they’re absolute death traps that clog up roads.
> are only driven by people because they’re “safer and protect me”.
That's a lovely straw man you built. I've not met anyone who bought an SUV for that reason. 99 times out of 100 it's for the utility, especially the third row. Which, if you ask me, isn't as useful as people think when they buy it, but still, it's a big factor when you expect to be regularly driving the kids around along with their friends.
Yeah a lot of HN forgets that people have families, and the kids typically are at school while the parent is driving downtown to the job in the morning. So in the moment it looks like a waste, but alas people are quick to judge.
We need personal car size limits on city streets, stricter regulation about viewing angles and heights for personally licensed cars, and car mass taxes in general. Some of these SUVs should not be drivable with normal consumer vehicle licences. They should require a higher license level and training on driving large vehicles.
Exactly. Just like we saw with Uber & Lyft, but it will be exponentially worse. When I can easily just tell the car to go be available somewhere, or take my kid somewhere, or this package, etc, then guess what -- I'm gonna do it.
I appreciate that in this they demonstrate not just rigs where a mannequin is thrown into the path of danger, but actual humans performing regular/irregular tasks. This to me is akin to the bulletproof {vest, glass, etc.} manufacturer willing to put themselves behind their product for a demonstration. With AI systems I think this is particularly important, because with such high-dimensional data it is possible that the vehicle picks up on things like the pull cable, or on the fact that it is a mannequin and not a human (e.g. pneumonia predictions strongly correlating with medical equipment within X-rays rather than with inflammation). A kind of two-for-one confidence builder here.
"Until Waymo's cars reduce any individual's chance of an accident."
So for example, suppose a Waymo car is better than humans overall, but tends to do worse than humans when there's a small bump on the road. And suppose that all humans (in a given regulator's area, e.g., California) tend to encounter such bumps at roughly the same rate (per mile driven) over their lifetime. In that case, it's probably going to be acceptable, since every individual is better off.
I don't know, maybe this is not impactful / obvious enough for people to care about?
What certainly is obvious is that the safest drivers are much safer than an average driver (does anyone know of a study that estimates this ratio?). Therefore, at the very least, the threshold for Waymo should be not the average accident rate, but the accident rate for the safest drivers.
Exactly. I have had zero accidents in 20 years; I'm not interested in a car that will lower the overall accident rate if it increases mine.
This level of sophistication makes me think it will not "frequently deadlock".
> But, the ones they do have are accidents a human driver would almost certainly have avoided.
I suspect most human-driver accidents are also accidents that (other) human drivers almost certainly would have avoided.
That's scant consolation for all the people dying in traffic accidents each day, of course.
> An example of this is the Boeing 737 Max: those crashes wouldn't have happened if there were no autopilot systems.
Bad example. MCAS was an obvious case of criminal corporate behavior, not a tradeoff between overall safety and technical perfection.
Tesla has not proven any reliable driverless operation, anywhere. They have removed hardware from their cars (radar, uss) and have not shown any meaningful progress in the past ~5 years nor any willingness to change from their “vision only, big data” strategy.
If things continue on the current trajectory, Waymo will likely be operating in all major US cities and metros in a few years, while Tesla's self-driving offering will probably be forcibly renamed by regulation and end in a class-action lawsuit.
Basically, Waymo has proven N and N+1 capability, meanwhile Tesla has yet to prove N, and has lied to consumers and actually reduced their chances at achieving N due to cost cutting measures.
I also think that the mapping and routing component matters a lot less than how good your real-time collision-avoidance systems are. And in that arena, Tesla is an unmitigated disaster.
Think about a section of lightly used suburban road. The amount of work that went into making it was immense. A crew of road workers using expensive machines and large amounts of material was required to build it, and is required for its maintenance. Don't forget the surveyors and engineers who made a highly detailed map and plans in the first place! (Though that map format isn't useful to self-driving cars.)
Also consider the sheer number of cars that drive that patch in a day. One car every few minutes adds up over hours, days, months, years.
So, yeah, they have to drive a mapping car down the street a bunch of times to expand their coverage area. However, this is insignificant compared to the effort that already goes into our transportation infrastructure.
Tesla's system currently doesn't work at all.
It's not valid to compare Waymo's current capability unfavorably to a version of Tesla's capability that only exists in someone's head.
I would bet on Waymo working on a dirt road before Tesla does.
That may be a smokescreen. Tesla collects a lot of data from their cars. What they do not have are these supposedly superfluous high-resolution maps. If Tesla's camera-sourced data proves to be insufficient, that will have been a very bad gamble, on top of the separate question of whether camera data is sufficient for real-time decisions.
When they pay off, bold gambles make businessmen look smart. That's why nearly all business hagiographies are the product of survivorship bias. Just like your buddy who won in Vegas.
We will see this risk-taking play out in Starship and Starlink, too.
That means that Tesla could make an HD map covering 95% of miles driven in the USA within a week with their fleet of users. And next week they could make an updated version of the same map.
So, making and updating an HD map isn't an issue.
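A quick back-of-envelope check of that claim; every number below is a rough assumption rather than a measured figure:

    teslas_in_us      = 2_000_000   # assumed US fleet size, order of magnitude only
    miles_per_car_day = 30          # assumed average daily driving per car
    us_road_miles     = 4_000_000   # roughly the total mileage of US public roads

    fleet_miles_per_week = teslas_in_us * miles_per_car_day * 7
    print(f"{fleet_miles_per_week:,}")           # 420,000,000 fleet-miles per week
    print(fleet_miles_per_week / us_road_miles)  # ~105x the total road mileage

Since driving is heavily concentrated on a small fraction of roads, repeatedly covering the roads that carry 95% of miles driven within a week is at least plausible on these assumptions; the open question is whether camera-only captures are rich enough to turn those traversals into an HD map.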
Now, 2 billion dollars and 7 years later, they are still only in a handful of small regions with limited numbers of vehicles.
That tells me there is still some fundamental issue that is hard to solve. I wonder why they aren't more transparent and tell us what that issue is that they've been battling for 7 years?
It seems like the hardest 90% of the work is the last 10%.
Suppose the collision avoidance is perfect. Now put it on the roads of NYC, or New Delhi.
There are a lot of people who will just walk in front of a car going 40mph, if they know for sure it will brake hard and stop.
The problem isn't technology, it's humanity.
The solution is to change the rules of the road, have protected lanes for self-driving buses and taxis and cars, and enforcement.
Let vehicles that can take full advantage of communicating with each other and the road go fast and use the infrastructure to maximum theoretical capacity, without having to worry about dumb human drivers.
I think the solution here is to issue tickets to those people. You could probably ticket them already under some statute like "endangering road users" or something.
With self-driving cars having always-on cameras, you only need to ticket each idiot once or twice, and they'll stop doing it.
We already punish people who run around on the runway of airports - seems no different.
We reached this conclusion about 150 years ago and came up with rails. In addition, you get cheap electricity so reliably that modern trains don't even bother having batteries.
Yes, rail lines as they are deployed now might not be the ideal future-proof solution, but something similar, which allows 'cars' to go off-track for the last mile but otherwise not incur wear and tear on your own tires and engine/transmission for the long haul, might be a practical idea.
(I appreciate 2019 feels like 7 years ago)
I don't see how an automated car with no driver can deal with the "are they crossing or not?" question that you get every few minutes while driving in a city. Both because body language is a hard problem to get right, and because there's a lot of non-verbal communication that a driverless car doesn't have a way of participating in.
Imagine if every time you started your car, a robotic voice said "Welcome to your Ford Pickup XYZ model. Please ensure your seatbelts are fastened. If you are too hot, you can adjust the climate with the climate controls. If you want to lower the windows, please don't put your arms out. etc etc. Have a nice ride today in your Ford(tm) Pickup(tm).".
[1]: https://youtu.be/2ZmdxkBV5Tw?t=180
In 2018 this would be a really good argument. What does Tesla do better now, compared to another modern purpose built EV, for example a Ford Mustang Mach E, or a Hyundai Ioniq 5, Kia EV6, etc?
I struggle to identify any particular feature I would say they are better at, much less something that would make it the best mass produced EV. I say this as a two-time Model 3 owner, having just bought the most recent one two weeks ago. I don't quite have buyers remorse yet, but it's nagging at me that I may have just made a foolish choice for the wrong reasons.
They are not the obvious winner among EVs currently. They were the first to make an actual high-end EV, and that vision changed the market back then.
It would suck to be in a world where the only way to do self-driving is indistinguishable from the Uber or taxi service we already have (and likely wouldn’t even be cheaper if it’s proprietary to one or two mega-companies who can extract nearly all the productivity surplus from this as monopoly rents).
I also expect long term self driving cars will be safer than humans, and as a person that primarily walks around instead of driving that's a benefit to me even if I'm not in the car.
I may be biased since I use public transit or bike for everything.
Some other comments on this thread suggest that self-driving is only perceived to be unsafe, but is statistically much more safe. Unless the stats improved significantly since then (and what specifically achieved that?), and an independent analysis can agree, I'm not trusting corporate PR.
The true number is somewhere between this worst case and the numbers Waymo presents.
Most driver interventions that I've seen on video were not narrowly missed accidents; they're the car being confused by road construction, a double-parked car, pedestrians spilling onto a street, etc.
I have also seen (for Tesla at least), videos of driver interventions that definitely would have been accidents if the driver hadn't stepped in.
I definitely agree with your point that I'd love to see more in-depth figures and the CA DMV might release those more detailed figures? I'm not sure.
EDIT: Sure enough, the CA DMV disengagement reports list the exact cause (https://www.dmv.ca.gov/portal/vehicle-industry-services/auto...)
I don't think it's a realistic expectation that drivers are always fully attentive and able to respond in time to a crash situation when the self-driving mode is active. Such a light would give me the heads up that I should stay on my toes.
"Will this car suddenly decide to change lanes?", "Not likely, it's self driving and there aren't exits or major changes in traffic in front of it".
There will also be quirks in any automated system. Learning and then predicting these will make the streets safer and more comfortable. For example, perhaps self-driving cars are overly cautious around some local crosswalks. If I'm behind one of these things in the winter, I might be aware that I should leave even more extra room for the sudden slowdowns that other cars would be less likely to make. If I'm smart, this won't be the difference between an accident and not, but it will make for a smoother ride.
My larger question is: will the proliferation of AI cars increase or decrease net traffic flow? People seem to be driving larger and larger cars slower and slower anyway, so maybe this target is achievable, but this all worries me.
That's a lovely straw man you built. I've not met anyone who bought an SUV for that reason. 99 times out of 100 it's for the utility, especially the third row. Which, if you ask me, isn't as useful as people think when they buy it, but still, it's a big factor when you expect to be regularly driving the kids around along with their friends.
https://www.theguardian.com/cities/2019/oct/07/a-deadly-prob...
Want a minivan? Good luck.
Station wagon? You need a time machine.
I suspect the cyclist in the video is not a $500k/year ML engineer, it's a $50K/year veteran trying to stay out of the welfare line.