Serious crash rates follow a hockey-stick pattern: roughly 20% of the drivers cause 80% of the crashes. For the worst 20% of drivers, the Waymo is almost certainly better already.
Honestly, at this point I am more interested in whether they can operate their service profitably and affordably, because they are clearly nailing the technical side.
For example, see the data from a 100-driver study, table 2.11, p. 29:
https://rosap.ntl.bts.gov/view/dot/37370
Roughly the same number of drivers had 0 or 1 near-crashes as had 13-50+. One of the drivers had 56 near crashes and 4 actual crashes in less than 20K miles! So the average isn't that helpful here.
Hmmm, perhaps a more valuable representation would be where the average Waymo vehicle would place, as a percentile ranking among human drivers, in accidents per mile.
Ex: "X% of humans do better than Waymo does in accidents per mile."
That would give us an intuition for what portion of humans ought to let the machine do the work.
P.S.: On the flip-side, it would not tell us how often those people drove. For example, if the Y% of worse-drivers happen to be people who barely ever drive in the first place, then helping automate that away wouldn't be as valuable. In contrast, if they were the ones who did the most driving...
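A rough sketch of what that metric could look like, using made-up per-driver numbers (the 100-driver study above has the real distribution) and the thread's figure of 13 airbag crashes over roughly 20 million Waymo miles; purely illustrative, since crash definitions and exposure differ:

    # Hypothetical per-driver records: (crashes, miles driven). Values invented for illustration.
    drivers = [
        (0, 12_000), (0, 15_000), (1, 18_000), (0, 9_000),
        (2, 14_000), (4, 19_000),   # e.g. the outlier with 4 crashes in under 20K miles
    ]
    waymo_rate = 13 / 20_000_000    # airbag crashes per mile, per the figures quoted in the thread

    human_rates = sorted(c / m for c, m in drivers)
    share_better = sum(r < waymo_rate for r in human_rates) / len(human_rates)
    print(f"{share_better:.0%} of these drivers have a lower crash rate than Waymo")
    # Caveat: over a few thousand miles most drivers show zero crashes, so a fair
    # ranking needs exposure adjustment, which is exactly the flip-side point above.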
It may be fairer to compare them to Uber drivers and taxis, and at least on that comparison, having ridden in thousands of Ubers and taxis and a couple dozen Waymos, it is better than 100% of them.
Anecdotal of course but within my circle people are becoming Waymo first over other options almost entirely because of the better experience and perceived better driving. And parents in my circle also trust that a waymo won't mow them down in a crosswalk. Which is more than you can say for many drivers in SF.
I saw a transit enthusiast YouTube video try out Waymo from the most distant part of the network to Fisherman's Wharf in SF and it cost twice as much as an Uber while having a longer wait time for a car.
It also couldn’t operate on the highway so the transit time was nearly double.
One shouldn’t underestimate how economical real human operators are. It’s not like Uber drivers make a ton of money. Uber drivers often have zero capital expense since they are driving vehicles they already own. Waymo can’t share the business expense of their vehicles with their employees and have them drive them home and to the grocery store.
I’m sure it’ll improve but this tells me that Waymo’s price per vehicle including all the R&D expenses must be astronomical. They are burning $2 billion a year at the current rate even though they have revenue service.
Plus, they actually have a lot of human operators to correct issues and talk to police and things like that. Last number I found on that was over one person per vehicle but I’m not sure if anyone knows for sure.
> I saw a transit enthusiast YouTube video try out Waymo from the most distant part of the network to Fisherman's Wharf in SF and it cost twice as much as an Uber while having a longer wait time for a car.
That's literally an edge case. For shorter trips, I've found it to be slightly cheaper (especially factoring in the lack of tips) with maybe a slightly longer wait.
The wait times have gotten better, they're getting freeway approval shortly which will be nice, the price is still at a premium (but worth it imo). I only take Waymo in SF now.
The only time I take Uber in the bay area is to the airport (and when they approve Waymo for SFO I won't take Uber then either).
> it cost twice as much as an Uber
Surely incidental since the typical price per ride is about the same. Generally though, the relationship between the cost to operate a service profitably and the price presented to the user is very complex, so just because the price happens to be X right now doesn't tell you much. For example, something like 30% of the price of an iPhone is markup.
> while having a longer wait time for a car
Obviously incidental?
> It also couldn’t operate on the highway so the transit time was nearly double.
Obviously easily fixable?
> One shouldn’t underestimate how economical real human operators are.
There's nothing to underestimate, human drivers don't scale the way software drivers do. It doesn't matter how little humans cost, they are competing with software that can be copied for free.
> Waymo can’t share the business expense of their vehicles with their employees
They can share parking space, cleaning services, maintenance, parts for repair, etc.
> I’m sure it’ll improve but this tells me that Waymo’s price per vehicle including all the R&D expenses must be astronomical.
Obviously, they're in the development phase. None of this matters long term.
> They are burning $2 billion a year at the current rate even though they have revenue service.
"The stock market went up 2% yesterday so it will go up 2% today too and every day after that."
> Plus, they actually have a lot of human operators to correct issues and talk to police and things like that.
Said operators are shared between all vehicles and their number will go down over time as the driving software improves.
---
To sum up, every single part of what Waymo is trying to do scales. Every problem you've mentioned is either incidental or a one-off cost long term.
My experience using Waymos in SF is that they are a little less expensive than an Uber. The other advantage is that you aren't stuck with a driver who hits on you or wants to share his opinions on the best way to slaughter goats.
I mean yeah, right now they've hit the point of being quite safe, but they're not necessarily as fast as human drivers. They'll keep making incremental progress and will get there eventually, probably.
So far, every time there's been self driving car progress, someone's been like, "okay yeah, but can they do <the next thing they're working on> yet??" like some weird gotcha. Tech progress is incremental, shocking I know.
> One shouldn’t underestimate how economical real human operators are
That's such a silly statement. One shouldn’t underestimate how UNeconomical real humans are.
In the past 12,000 years, human efficiency has improved, maybe, 10x. In the past 100 years, technological efficiency has improved, maybe, 1,000,000x.
Any tiny technological improvement can be instantly replicated and scaled. Meanwhile, every individual human needs to be re-trained and re-grown. They're extremely temperamental, with expensive upkeep, very short lifespans and even shorter productive lifespans.
In fact, humans have improved so little, that every time, they scoff at the new technology and say it will never take off, and they're still doing it 12,000 years later, right now, right above this post.
Waymo rides are also potentially slower because they strictly follow speed limits. Not really problematic in downtown SF but it’ll be interesting to see how it’ll be received by riders when they expand to highway driving where most people generally expect to drive over the speed limit.
That's the correct indicator to look for: the number of Waymos on the road is still very small compared to the number of other vehicles. Alphabet wouldn't risk the cost of expanding to the current number of cities without very strong confidence that they're not going to lose their shirt doing it.
The evidence so far is that they are throttling demand by keeping the prices above that of an Uber. It's definitely still an experiment. If the experiment is successful, expect to see more cities and more vehicles in each city in expanding service areas.
There are step changes that have to be made to keep waymo expanding. The tariff situation is blocking plans to have dedicated vehicles from China. That has to get sorted out. The exact shape of the business model is still experimental.
Of course it's got to be safe. But there are dozens of dull details that all have to work between now and having a profitable business. The best indicator of a plausible success is that Waymo appears to be competent at managing these details. So far anyway.
> One of the drivers had 56 near crashes and 4 actual crashes in less than 20K miles!
There would be a strong argument for simply banning the worst 1% of drivers from driving, and maybe even compensating them with lifetime free taxi rides, on the taxpayer's dime.
Nah, just revoke their licenses and make it much harder to get one to begin with. Autonomous driving removes the economic necessity of having one. Just get a proper car that can drive you to work. No need for you to do anything. Catch up on lost sleep (a common cause of accidents is people being too tired to drive) or whatever.
Expect to pay for the privilege of driving yourself and putting others at risk. If you really want to drive yourself, you'll just have to skill up to get a license and proper training, get extra insurance for the increased liability, etc. And then if you prove to be unworthy of having a license after all, it will be taken away. Because it's a privilege and not a right to have one and others on the road will insist that you are competent to drive. And with all the autonomous and camera equipped cars, incompetent drivers will be really easy to spot and police.
It will take a while before we get there; this won't happen overnight. But that's where it's going. Most people will choose not to drive most of the time for financial reasons. Driving manually then becomes a luxury. Getting a license becomes optional, not a rite of passage that every teenager takes. Eventually, owning cars that enable manual driving will become more expensive or may not even be road legal in certain areas. Etc.
It kinda works already without outright banning them: the mandatory insurance will get more and more expensive the more accidents they have.
So they price themselves out.
Of course, they may then decide not to have insurance at all. In most countries that is illegal, and doing it in a premeditated way is criminal and something else entirely.
Not sure if insurance is mandatory in the US or not - I assume instead you just get into a gunfight with the other party instead?/s
Sorry if you're having a car crash every 6 months or less, you shouldn't have a license.
Driving a car is privilege granted to you by your state, and this state is negligent in its protection of everyone else by letting this idiot continue to drive. Sell your car, take the bus, move closer to work, I don't care.
More than 3 at-fault crashes in a year or more than 10 at-fault crashes ever and you should permanently lose your license. That seems more than generous enough.
> Sorry if you're having a car crash every 6 months or less, you shouldn't have a license.
Actual traffic enforcement does not seem to produce this result. This woman is fairly famous on Reddit for her erratic driving, and was reported in 2019 as having been involved in 31 crashes since 2000: https://www.wral.com/story/lawyer-stayumbl-driver-a-victim-o...
She is still driving (with a new license plate after 2019): https://old.reddit.com/r/bullcity/comments/1ji3y82/jesusdos_...
There is already a mechanism for this that the government doesn’t even have to be directly involved in - insurance. At some point you become prohibitively expensive to insure.
However, the government still has to do its part and actually enforce insurance requirements.
My pet hypothesis is that there is a tipping point where the feedback loop between driver safety, AI advancements, and insurance costs will doom manually driven cars faster than most people think.
Here in Norway we've got a point system[1], and I'm sure we didn't invent it.
Each point lasts for 3 years, and if you accumulate more than 8 you lose your license for 6 months.
A speeding ticket is at least two points, and running a red light or tailgating is three, for example. You get double points the first two years after getting your license.
[1]: https://www.vegvesen.no/en/driving-licences/driving-licence-...
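For the curious, a minimal sketch of that scheme using only the values quoted above (point values, the 3-year lifetime, the more-than-8-points limit, doubled points for new drivers); the real regulations have more categories and edge cases:

    from datetime import date, timedelta

    POINTS = {"speeding": 2, "red_light": 3, "tailgating": 3}
    LIMIT = 8                                    # more than 8 active points -> 6-month suspension
    POINT_LIFETIME = timedelta(days=3 * 365)
    NEW_DRIVER_PERIOD = timedelta(days=2 * 365)  # points are doubled in this window

    def active_points(violations, licence_issued, today):
        total = 0
        for kind, when in violations:
            if today - when > POINT_LIFETIME:
                continue                         # point has expired
            pts = POINTS[kind]
            if when - licence_issued < NEW_DRIVER_PERIOD:
                pts *= 2                         # double points for new drivers
            total += pts
        return total

    violations = [("speeding", date(2024, 5, 1)), ("red_light", date(2025, 2, 10))]
    pts = active_points(violations, licence_issued=date(2023, 8, 1), today=date(2025, 6, 1))
    print(pts, "suspended for 6 months" if pts > LIMIT else "still licensed")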
It's probably some old "bingo and church" driver who has a 50-50 shot of winding up in the ditch if it snows during Bingo and that "20k" is actually "8yr", the kind of thing insurance would never know about if you're not getting towing coverage through them.
> Serious crash rates follow a hockey-stick pattern: roughly 20% of the drivers cause 80% of the crashes. For the worst 20% of drivers, the Waymo is almost certainly better already.
I would wager that those 20% of drivers are also disproportionately under the influence of drugs, impaired in some way (e.g., stroke, heart attack), or experiencing sudden unexpected events such as equipment malfunction.
You forgot "being an idiot" and it's strange, because the vast majority of the accidents are caused by that. Have you never watched "idiots driving" videos on YouTube?
You'd be correct, at least as far as fatalities are concerned. 50% of all fatalities involve drugs or alcohol. Around 50% of all fatalities are single-vehicle accidents, though. 15% are motorcycles. 15% are pedestrians.
And of course around 80% involve youth, testosterone and horsepower in some combination. The rest are almost always weather or terrain related in some way. Massive pileups on the highway in the winter and upside down vehicles on waterways in the summer.
Very rarely does a fatal accident happen without several factors being present.
What about the benefit to the 80% if the 20% were obligated to use software instead of their own wetware, in a hypothetical world where this was feasible in all respects? Imagine, for instance, transitioning so that most new drivers are issued permits only for self-driving vehicles, and older drivers are obligated to switch at 65.
As someone getting on towards 65, I have to point out that insurance rates are lower for the 65-70s than for any group younger than 55, and claim rates are lower than for any of the under-65s. My relatives didn't really start crashing into stuff till they got to about 90. And then it was kind of slow motion. (For this data: https://www.abi.org.uk/products-and-issues/choosing-the-righ...)
New drivers become better drivers by driving and gaining experience. This is why some states implement a mandatory minimum practice duration before you can get a license. Mandating they don't practice would be detrimental to the driving culture as it would skew in favor of AI by preventing learning in the first place.
I think that no matter how well Waymo is doing, there is still the problem of who is responsible when a self-driving car is involved in a serious accident.
The only solution to that is probably to only let self driving cars onto the road, in an all-or-nothing solution.
I was initially skeptical about self-driving cars but I've been won over by Waymo's careful and thoughtful approach using visual cues, lidar, safety drivers and geo-fencing. That said I will never trust my life to a Tesla robotaxi that uses visual cues only and will drive into a wall painted to look like the road ahead like Wile E. Coyote. Beep beep.
I started digging into this rabbit hole and I found it fairly telling how much energy is being expended on social media over LiDAR vs no LiDAR. Much of it feels like sock puppetry led by Tesla investors and their counterparties.
I see this whole thing as a business viability narrative wherein Tesla would be even further under water if they were forced to admit that LiDAR may possess some degree of technical superiority and could provide a reliability and safety uplift. It must have taken millions of dollars in marketing budget to erase the customer experiences around the prior models of their cars that did have this technology and performed accordingly.
I use FSD every day and it has driven easily 98% of the miles on my Model 3. I would never let it drive unsupervised. I honestly have no idea how they think they're ready for robotaxis. FSD is an incredible driver assistance system. It's actually a joy to use, but it's simply not capable of reliable unsupervised performance. A big reason: it struggles exactly where you'd think it would based on a vision-only system. It needs a more robust mechanism for building its world model.
A simple example. I was coming out of a business driveway, turning left onto a two lane road. It was dark out with no nearby street lights. There was a car approaching from the left. FSD could see that a car was coming. However, from the view of a camera, it was just a ball of light. There was no reasonable way the camera could discern the distance given the brightness of the headlights. I suspected this was the case and was prepared to intervene, but left FSD on to see how it would respond. Predictably, it attempted to pull out in front of the car and risked a collision.
That kind of thing simply cannot be allowed to happen with a truly autonomous vehicle, and would never happen with lidar.
Hell, just this morning on my way to work FSD was going to run a flashing red light. It's probably 95% accurate with flashing reds, but that needs to be 100%. That being said, my understanding is the current model being trained has better temporal understanding, such that flashing lights will be more comprehensible to the system. We'll see.
Tesla sold a million Model Ys last year. So having a safety-increasing part like lidar would reduce the profit by hundreds of millions. Removal of ultrasonic sensors saved Tesla tens of millions. Ok, the Model Y is a big car and I don't aim for the tightest parking spots anymore. But basically removal of anything is very profitable for Tesla. And vice versa, adding something useful is very expensive.
Just because Tesla uses shitty 2MP sensors of 2013 vintage (at least for HW3) doesn’t mean that robotaxi levels of safety can’t be achieved with just modern cameras and radars (plural)
As someone in the industry, I find the LiDAR discussion distracting from meaningful discussions about redundancy and testing
We all see our perspectives as getting quashed. I see the opposite of you: people pushing arguments that make no sense to me, criticizing Tesla for not using lidar, an argument that seemingly deliberately glosses over the very real and valid reasons for Tesla choosing not to use it.
Mark Rober's video is misleading. First, he used autopilot, not FSD. Second, he sped up to 42mph and turned on autopilot a few seconds before impact[1], but he edited the YouTube video to make it look like he started autopilot from a standstill far away from the barrier. Third, there is an alert message on his screen. It's too small to read in the video, but it could be the "autopilot will not brake" alert that happens when you put your foot on the gas.
In the water test, Rober has the Tesla driving down the center of the road, straddling the double yellow line. Autopilot will not do this, and the internal shots of the car crop out the screen. He almost certainly manually drove the car through the water and into the dummy.
One person tried to reproduce Rober's Wile E. Coyote test using FSD. FSD v12 failed to stop, but FSD v13 detected the barrier and stopped in time.[2]
Lidar would probably improve safety, but Rober's video doesn't prove anything. He decided on an outcome before he made the video.
1. https://x.com/MarkRober/status/1901449395327094898
2. https://x.com/alsetcenter/status/1902816452773810409
To be fair, I'm sure there's a few humans that would crash into a giant wall painted to look like the road, in the middle of a straight road in the middle of nowhere. Humans crash due to less.
It's where a bunch of cycling nutters (I'm one of them) post local news stories where a driver has crashed into a building ("It wasn't wearing hi-viz!")
A wall painted to look like a road would likely cause human accidents and the painter would be very much criminally liable for them.
That said, I do think using only visual cues is a stupid self-imposed restriction. We shouldn't be making self-driving cars like humans, because humans suck horse testicles at driving.
The painted wall was just a gimmick to make the video entertaining. What’s more concerning is the performance in fog, rain and other visually challenging conditions.
In addition, humans have a lot of senses. Not just 5 - but dozens. A lot of them working in the background, subconsciously. It’s why I can feel someone staring at me, even if I never explicitly saw them.
Hardly. We drive hundreds of billions of miles every month and trillions every year. In the US alone. You're more likely to die from each of the flu, diabetes or a stroke than a car accident.
If those don't get you, you are either going to get heart disease or cancer, or, most likely, involve yourself in a fatal accident, which will most likely be a fall off a roof or a ladder.
It is truly astonishing how much Musk hypes up the robotaxi when no Tesla has ever driven a single mile autonomously with Tesla accepting liability for crashes.
My conclusion: If Tesla drivers are comfortable with vision-only FSD, that’s fine — it’s their responsibility to supervise and intervene. But when Tesla wants to deploy a fully autonomous robotaxi with no human oversight, it should be subject to higher safety requirements, including an independent redundant sensing system like LiDAR. Passengers shouldn’t be responsible for supervising their own taxi ride.
The issue with self-driving is (1) how it generalises across novel environments without "highly-available route data" and provider-chosen routes; (2) how failures are correlated across machines.
In safe driving failures are uncorrelated and safety procedures generalise. We do not yet know if, say, using self-driving very widely will lead to conditions in which "in a few incidents" more people are killed in those incidents than were ever hypothetically saved.
Here, without any confidence intervals, we're told we've saved ~70 airbag incidents in 20 mil miles. A bad update to the fleet will easily eclipse that impact.
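To put rough error bars on those figures, here is a back-of-the-envelope Poisson sketch using the 13-vs-78 airbag-crash numbers quoted in the article; a real analysis would need the exact mileage and exposure matching:

    import math

    waymo_crashes = 13        # observed airbag-deployment crashes (per the article)
    human_expected = 78       # estimated crashes for human drivers over the same miles

    # Rough 95% interval for a Poisson count: k +/- 1.96*sqrt(k)
    half_width = 1.96 * math.sqrt(waymo_crashes)
    lo, hi = waymo_crashes - half_width, waymo_crashes + half_width

    print(f"crashes avoided: ~{human_expected - waymo_crashes} "
          f"(roughly {human_expected - hi:.0f} to {human_expected - lo:.0f})")
    print(f"estimated reduction: {1 - waymo_crashes / human_expected:.0%} "
          f"(roughly {1 - hi / human_expected:.0%} to {1 - lo / human_expected:.0%})")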
> The issue with self-driving is (1) how it generalises across novel environments
That's also an issue with humans though. I'd argue that traffic usually appears to flow because most of the drivers have taken a specific route daily for ages - i.e., they are not in a novel environment.
When someone drives a route for the first time, they'll be confused, do last-minute lane changes, slow down to try to make a turn, slow down more than others because they're not 100% clear where they're supposed to go, might line up for and almost do illegal turns, might try to park in impossible places, etc.
Even when someone has driven a route a handful of times they won't know and be ready for the problem spots and where people might surprise them, they'll just know the overall direction.
(And when it is finally carved in their bones to the point where they're placing themselves perfectly in traffic according to the traffic flow and anticipating all the usual choke points and hazards, they'll get complacent.)
You've a very narrow definition of novel, which is based solely on incidental features of the environment.
For animals, a novel situation is one in which their learnt skills to adapt to the environment fail, and have to acquire new skills. In this sense, drivers are rarely in novel environments.
For statistical systems, novelty can be much more narrowly defined as simply the case where sensory data fails a similar-distribution test with historical data --- this is vastly more common, since the "statistical profile of historical cases, as measured, in data" is narrow.. whilst the "situations skills apply to" is wide.
An example definition of narrow/wide, here: the amount of situations needed to acquire safety in the class of similar environments is exponential for narrow systems, and sublinear for wide ones. ie., A person can adapt a skill in a single scenario, whereas a statistical system will require exponentially more data in the measures of that class of novel scenarios.
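To make the "similar-distribution test" idea concrete, a toy sketch of that narrow, statistical notion of novelty; the feature names and thresholds are invented, and real systems use far richer representations:

    historical = {                       # per-feature (mean, std) from logged driving data
        "pedestrian_density": (0.8, 0.5),
        "lane_marking_visibility": (0.9, 0.1),
        "ambient_light": (0.6, 0.3),
    }

    def is_novel(scene, z_threshold=4.0):
        # Flag the scene when any summary feature sits far outside the historical distribution.
        for name, value in scene.items():
            mean, std = historical[name]
            if abs(value - mean) / std > z_threshold:
                return True              # fails the similar-distribution test
        return False

    print(is_novel({"pedestrian_density": 0.7, "lane_marking_visibility": 0.85, "ambient_light": 0.5}))
    print(is_novel({"pedestrian_density": 3.5, "lane_marking_visibility": 0.2, "ambient_light": 0.1}))

The narrowness is the point: anything outside that box is "novel" to the system, however mundane it may look to a person.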
I travel and drive in a lot of new places and even the novelty of novelty wears off.
At some point you’ll see a car careen into the side of the curb across three lanes due to slick and you’ll be like ehhh I’ll just cut through with this route and move on about your day.
After driving for 20 years, about the only time I got scared in a novel situation was when I was far from cell service, next to a cliff, sliding down a mountain fast in deep mud running street tires due to an unexpected downpour in southern Utah. I didn't necessarily know what to do but I could reason it out.
I don’t really find “using a new route” difficult at all. If I miss my exit, I’m just going to keep driving and find a U-turn — no point to stress over it.
Generalizing across novel environments is optimal, but I'm not sure the bar needs to be that high to unlock a huge amount of value.
We're probably well past the point where removing all human-driven vehicles (besides bikes) from city streets and replacing them with self-driving vehicles would be a net benefit for safety, congestion, vehicle utilization, road space, and hours saved commuting, such that we could probably rip up a bunch of streets and turn them into parks or housing and still have everyone get to their destinations faster and safer.
The future's here, even if it still has room for improvement.
I'd think congestion would go up as AVs become more popular, with average occupancy rates per vehicle going down. Since some of the time the vehicle will be driving without any passengers inside. Especially with personally owned AVs. Think of sending a no-human-passenger car to pick up the dog at the vets office. Or a car circling the neighborhood when it is inconvenient to park (parking lot full, expensive, whatever).
I don't agree with this novel environment argument about routes. As a human, there are a limited number of roads that I have driven on. A taxi driver drives better than me because none of the routes are considered novel: the taxi driver has likely driven on every road in a city in his/her career. The self-driving machine has most definitely driven on every single road in the city, perhaps first as testing with human backup, then testing with no passengers, and finally passenger revenue miles.
I think you underestimate how many novelties the car will encounter on existing routes and how adept these cars are at navigating novel routes.
I imagine this route data is an extra safeguard which allows them to quantify/measure the risk to an extent and also speed up journeys/reduce the level of interventions.
I wonder if you can decrease the impact of (2) with a policy of phased rollout for updates. I.E. you never update the whole fleet simultaneously; you update a small percentage first and confirm no significant anomalies are observed before distributing the update more widely.
Ideally you'd selectively enable the updated policy on unoccupied trips on the way to pick someone up, or returning after a drop-off, such that errors (and resultant crashes) can be caught when the car is not occupied.
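Something like that canary-style rollout could be expressed very simply; this is a hypothetical sketch (the stage fractions, the intervention metric, and the threshold are all made up), not how Waymo actually ships updates:

    STAGES = [0.01, 0.05, 0.25, 1.00]            # fraction of the fleet on the new build
    MAX_INTERVENTIONS_PER_10K_MI = 2.0           # anomaly threshold for the canary metric

    def advance_rollout(stage_idx, interventions, miles):
        rate = interventions / miles * 10_000
        if rate > MAX_INTERVENTIONS_PER_10K_MI:
            return 0, "roll back to old build"   # anomaly observed: stop the rollout
        if stage_idx + 1 < len(STAGES):
            return stage_idx + 1, f"expand to {STAGES[stage_idx + 1]:.0%} of fleet"
        return stage_idx, "fully deployed"

    # In practice the canary slice could be restricted to unoccupied repositioning trips, as suggested above.
    print(advance_rollout(0, interventions=1, miles=50_000))    # healthy canary -> widen
    print(advance_rollout(1, interventions=40, miles=60_000))   # anomalous -> roll back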
One measure of robustness could be something like: the ability to resist correlation of failure states under environmental/internal shift. Danger: that under relevant time horizons the integral of injury-to-things-we-care-about is low. And then "safety", a combination: that the system resists correlating failure states in order to preserve a low expected value of injury.
The problem with machines-following-rules is that they're trivially susceptible to violations of this kind of safety. No doubt there are mitigations and strategies for minimising risk, but its not avoidable.
The danger in our risk assessment of machine systems is that we test them under non-adversarial conditions, and observe safety --- because they can quickly cause more injury than they have ever helped.
This is why we worry, of course, about "fluoride in the water" (vaccines, etc.) and other such population-wide systems... this is the same situation. A mass public health programme has the same risk profile.
You would save more lives by harshly punishing drunk or influenced driving; however, most of the lives you save would be that of the drinker or the abuser.
You would save more lives by outlawing motorcycles; however, it would just be the motorcyclists themselves.
Another thing people don't consider is that not all seats in a vehicle are equally safe. The drivers seat is the safest. Front passenger is less safe but still often twice as safe as sitting in the backseat. If you believe picking up your elderly parents and then escorting them in your backseat is safer than them driving alone you might be wrong. This is a fatality mode you easily recognize in the FARS data. Where do most people in a robotaxi sit?
Your biggest clear win would be building better pedestrian infrastructure and improving roadway lighting to reduce pedestrian deaths.
> In safe driving failures are uncorrelated and safety procedures generalise. We do not yet know if, say, using self-driving very widely will lead to conditions in which "in a few incidents" more people are killed in those incidents than were ever hypothetically saved.
I usually think about it in the other direction: every time an accident occurs, a human learns something novel (even if it be a newfound appreciation of their own mortality) that can't be directly transmitted to other humans. Our ability to take collective driving wisdom and dump it into the mind of every learner's-permit-holder is woefully inadequate.
In contrast, every time a flaw is discovered in a self-driving algorithm, the whole fleet of vehicles is one over-the-air update away from getting safer.
I can well imagine whole city areas being closed to manual drivers.
Sure, I would love to read a book while the car is driving me to visit family in the countryside, but practically I need city transportation to work and back, and to supermarkets and back, where I don't have to align to a bus schedule with 2-3 stopovers, but can plan my trip 30 min in advance and have direct pick-up and drop-off.
If that would be possible then I see value in not owning a car.
I was waiting for a Waymo in Austin during the weekend storm and the Waymo suddenly cancelled on us right after a power outage that lasted a second or two. According to local news the vehicles had stopped and were blocking traffic.
> The issue with self-driving is (1) how it generalises across novel environments without "highly-available route data" and provider-chosen routes; (2) how failures are correlated across machines.
Consider London: a series of randomly moving construction sites connected by patches of city.
Waymo, as far as I recall, relies on pretty active route mapping and data sharing -- i.e., the cars aren't "driving themselves" in the sense of discovering the environment as a self-driving system would.
Any time there is a detour, or a construction zone, or a traffic accident, or a road flooded, or whatever else, your route data is not just "worse", it is completely wrong.
Machines don't make mistakes once they are perfected on a certain route. Sure, a human driver would be better in dynamic areas, but you don't need the machine to be perfect either, you just want it to handle the 80% scenario.
> Using human crash data, Waymo estimated that human drivers on the same roads would get into 78 crashes serious enough to trigger an airbag. By comparison, Waymo’s driverless vehicles only got into 13 airbag crashes. That represents an 83 percent reduction in airbag crashes relative to typical human drivers.
> This is slightly worse than last September, when Waymo estimated an 84 percent reduction in airbag crashes over Waymo’s first 21 million miles.
nitpick: Is it really slightly worse, or is it "effectively unchanged" with such sparse numbers? At a glance, the sentence is misleading even though it might be correct on paper. Could've said: "This improvement holds from last September..."
Of course it's not worse, these numbers have huge error bars. Statistically the two statistics are not significantly different. But trying to explain that to most people with no knowledge of statistics is tough.
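A quick way to show it: a Monte Carlo sketch assuming the article's ~78 expected human crashes, with Waymo's "true" long-run count taken as 13 purely for illustration:

    import math, random

    random.seed(0)
    expected_human = 78
    true_waymo_mean = 13          # assume the underlying rate matches the observed count

    def poisson(mean):
        # Knuth's method, fine for small means
        limit, k, p = math.exp(-mean), 0, 1.0
        while True:
            k += 1
            p *= random.random()
            if p <= limit:
                return k - 1

    reductions = sorted(1 - poisson(true_waymo_mean) / expected_human for _ in range(10_000))
    print(f"median {reductions[5_000]:.0%}, central 95% range "
          f"{reductions[250]:.0%} to {reductions[9_750]:.0%}")
    # A one-point move (84% -> 83%) is tiny compared to the noise band at counts this small.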
83 vs 84 percent doesn't seem too difficult to present as "essentially the same". I also don't think this matters much—the result is impressive regardless of alleged rate of change.
Assuming you trust Waymo's account, the article details them, saying the following:
>So that’s a total of 34 crashes. I don’t want to make categorical statements about these crashes because in most cases I only have Waymo’s side of the story. But it doesn’t seem like Waymo was at fault in any of them.
Considering that there's a >1000:1 ratio of regular cars to Waymo AVs - Waymo would have to be EXTREMELY terrible at driving to move the numbers for the other group meaningfully - which would show up in Waymo's own crash data.
There's also historical data. So if you saw a spike in crashes for regular vehicles after Waymo arrives, it would be sus. But there is no such spike. Further evidence Waymo isn't causing problems for non AVs.
Of course anything is possible. But it's unlikely.
The number of miles driven seems large, but Gemini says there are thousands of crashes per day in the US, so 78 or 13 crashes is a really small sample size...
I know some really bad drivers that have almost no 'accidents', but have caused/nearly caused many. They cut off others, get confused in traffic, make wrong decisions, etc...
Waymos, by media attention at least, have a habit of confusion and other behaviour that is highly undesired (one example: going around a roundabout constantly), but that doesn't qualify as a 'crash'.
I expect that the media don't find stories of Waymos successfully moving from point A to B without incident nearly so compelling as those cases where that doesn't happen.
I experience Waymo cars pretty much every time I drive somewhere in San Francisco (somewhat frequent since I live there). Out of hundreds of encounters with cars I can only think of a single instance where I thought, "what is that thing doing?!"... And in that case it was a very busy 5-way intersection where most of the human-driven cars were themselves violating some rule trying to get around turning vehicles and such. When I need a ride, I can also say I'm only using Waymo unless I'm going somewhere like the airport where they don't go; my experience in this regard is I feel much more secure in the Waymo than with Lyft or Uber.
Just because this car doesn't crash, that doesn't mean it doesn't cause crashes (with fatalities, injuries, or just property damage), and that's inherently much harder to measure.
You can only develop an effective heuristic function if you are actually taking into account all the meaningful inputs.
Worth repeating the same comment I've left on every variant of this article for the last 10 years.
Being better than "average" is a laughably low bar for self-driving cars. Average drivers include people who drive while drunk and on drugs. It includes teenagers and those who otherwise have very little experience on the road. It includes people who are too old to be driving safely. It includes people who habitually speed and are reckless. It includes cars that are mechanically faulty or otherwise cannot be driven safely. If you compile accident statistics, the vast majority will fall into one of these categories.
For self driving to be widely adopted the bare minimum bar needs to be – is it better than the average sensible and experienced driver?
Otherwise if you replace all 80% of the good drivers with waymos and the remaining 20% stay behind the wheel, accident rates are going to go up not down.
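Whether that replacement scenario makes things better or worse depends entirely on how much better than average the automated driver is; a quick sketch with illustrative numbers (20% of drivers at 4x the average crash rate, 80% at 0.25x, two hypothetical Waymo skill levels):

    avg = 1.0                                    # normalised average crash rate per driver
    good_rate, bad_rate = 0.25 * avg, 4.0 * avg  # consistent with the 80/20 split above
    baseline = 0.8 * good_rate + 0.2 * bad_rate  # = 1.0 by construction

    for label, waymo_rate in [("slightly better than average", 0.9 * avg),
                              ("83% better than average", 0.17 * avg)]:
        replace_good = 0.8 * waymo_rate + 0.2 * bad_rate   # only the good 80% switch
        replace_bad = 0.8 * good_rate + 0.2 * waymo_rate   # only the bad 20% switch
        print(f"{label}: replace good drivers -> {replace_good:.2f}x baseline, "
              f"replace bad drivers -> {replace_bad:.2f}x baseline")

So the scenario above does get worse for a system that is only marginally better than average, but not for one that is much better; who actually switches matters as much as the headline rate.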
Waymo (at this time) is an alternative to taxis and ride hailing services. I've lived in SF for 30+ years and used all modes of transit here. Some of my most frightening moments on the road have been in taxis with drivers who are reckless, in badly maintained vehicles, sometimes smelling of booze. There are certainly other ways that taxis could have been improved, but given the way things have evolved (or devolved with taxis), I feel much safer in a Waymo.
Any comparison of Waymo's safety should be done against taxis/Uber/Lyft/etc. A comparison with the general driving public could also be interesting, or other commercial drivers, but those are not the most relevant cohorts. I don't know the numbers, but I wouldn't be surprised if taxis/Uber/Lyft are worse per mile than general drivers since they are likely under more stress, and often work for long hours. A Waymo is no less safe at 4am, but a Lyft driver who's been up all night is a lot less safe. I would also guess that they are less likely than the general (auto) driving population to own their vehicle. Depending on who owns a vehicle, how long they've been driving (years), there's going to be a lot of interesting correlations.
> Being better than "average" is a laughably low bar for self-driving cars. Average drivers include people who drive while drunk and on drugs. It includes teenagers and those who otherwise have very little experience on the road. It includes people who are too old to be driving safely. It includes people who habitually speed and are reckless... (etc)
But... that's the reality. If we replace human drivers with self-driving cars at random, or specifically the bad drivers above, then we've improved things.
We are not going to easily improve the average human driver.
>If we replace human drivers with self-driving cars at random
But that's the OPs point, we aren't. Waymo crashing less than human drivers is a tautological result because Waymo is only letting the cars drive on roads where they're confident they can drive as well as humans to begin with.
If you actually ran the (very unethical) experiment of replacing a million people at random on random streets tomorrow with waymo cars you're going to cause some carnage, they only operate in parts of four American cities.
Why wouldn't alcoholics and the elderly be early adopters of self-driving vehicles? Or what can we do to encourage them to be early adopters? You get a DUI, and you are forced to pay for FSD? Get a reduced rate on booze taxes if you "drive" an AV? Have to take a driving test every 2 years after you turn 75, unless you have an AV?
"XYZ demographic should be forced to use self driving cars" is a fantasy that the tech crowd continues to believe but will never happen. Everyone is able to drive and will continue to be able to drive. In fact you should assume that the worse someone is at driving the more likely they are to want to drive for themselves, because that's how the world usually works.
Accident statistics are not dominated by drunks or anything else.
They're dominated by normal drivers who had a momentary lapse in judgment or attention. This is why running a police state that goes hard on DUI and vehicle inspections doesn't make the roads as much safer as its proponents would lead you to believe.
Great points. My own "have to say this every time" is that Waymo only operates within the boundaries of a few cities. Most people's experience of self-driving cars is not with Waymo. It's with vastly inferior technologies, most especially Tesla's. Waymo might be great, but I get really tired of fans dismissing others' misgivings as some sort of Luddite thing when it's entirely justified by experiences people have had where they live. If people want to say that autonomous vehicles are already better, they need to stop sneering long enough to show how that works at a freeway interchange with multiple high-speed merges and lane drops back to back, at a grocery store parking lot when it's busiest, near any suburban school at pickup time. Without that data, "safer than humans" is mere cherry picking.
There’s no statistics for how much a sensible and experienced driver crashes.
Sorting people by past behavior runs into survivorship bias when looking back and people who stop being sensible going forward. I’m personally a poor driver, but I don’t drive much so my statistics still look good.
There’s _no_ statistics? Surely those statistics are precisely what all car insurance premiums are based upon. They might be proprietary but I am certain such statistics exist.
What kind of dataset do we have to determine the subset of accidents caused by sensible and experienced drivers?
I personally have doubts as to whether this dataset exists. Whenever there's an accident, and one party is determined to be at fault, would that party be automatically considered not to be a sensible driver?
If we don't have such a dataset, perhaps it would be impossible to measure self-driving vehicles against this benchmark?
This is already much better than average, enough that it's going to take over. No one cares about car accidents in this country enough to stop this, even if it was only slightly better than average. For proof: Tesla.
Self-driving being the dominant form of driving is now a done deal, thanks to Waymo (and probably Tesla, though that's a policy failure imo), it's just a question of how long it takes.
> Otherwise if you replace all 80% of the good drivers with waymos and the remaining 20% stay behind the wheel, accident rates are going to go up not down.
That's a ridiculous scenario. If anything, impaired drivers should be more likely to choose an automated driving option. But no need to even assume that. The standard that matters is replacing the average.
> Average drivers include people who drive while drunk and on drugs
I hadn't thought of it until just now, but I guess that means the average driver is a little drunk and a little high. Kinda like how the average person has less than 2 arms.
One of the more interesting things Waymo discovered early in the project is that the actual incidents of vehicle collision were under-counted by about a factor of 3. This is because NHTSA was using accident reports and insurance data for their tracking state, but only 1/3 of collisions were bad enough for either first responders or insurance to get involved; the rest were "Well, that'll buff out and I don't want my rates to go up, so..." fender-taps.
But Waymo vehicles were recording and tracking all the traffic around them, so they ended up out-of-the-starting-gate with more accurate collision numbers by running a panopticon on drivers on the road.
I have always distrusted Waymo's and Tesla's claims of being safer. There are so many ways to fudge the numbers.
1. If the self-driving software chooses to disengage 60 seconds before it detects an anomaly and then crashes while technically not in self-driving mode, is that a fault of the software or human backup driver? This is a problem especially with Tesla, which will disengage and let the human takeover.
2. When Waymo claims to have driven X million "rider only" miles, is that because the majority of miles are on a highway, which is easy to drive with cruise control? If only 1 mile of a trip is on the "hard parts" that require a human for getting in and out of tight city streets and parking lots, while 10 miles are on the highway, it is easy to rack up "rider only" miles. But those trips are not representative of true end-to-end self-driving trips.
3. Selection bias. Waymo only operates in 3-4 cities and only in chosen weather conditions. It's easy to rack up impressive safety stats when you avoid places with harsh weather, poor signage, or complicated street patterns. But that's not representative of real-world driving conditions most people encounter daily.
The NTSB should force them to release all of the raw data so we can do our own analysis. I would compare only full self-driving trips, end to end, on days with good weather, in the 3-4 cities that Waymo operates in, and then see how much better they fare.
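On point 1, the usual regulatory answer is to attribute a crash to the automation if the system was engaged at any point within a lookback window before impact; NHTSA's standing general order uses a 30-second window. A minimal sketch of that rule (the log format here is hypothetical):

    from datetime import datetime, timedelta

    LOOKBACK = timedelta(seconds=30)

    def attributed_to_automation(last_disengage, crash_time):
        # Count the crash against the system if it was still engaged at impact,
        # or had disengaged within the lookback window before the crash.
        return last_disengage is None or crash_time - last_disengage <= LOOKBACK

    crash = datetime(2025, 3, 1, 14, 0, 0)
    print(attributed_to_automation(datetime(2025, 3, 1, 13, 59, 50), crash))  # True: disengaged 10s before
    print(attributed_to_automation(datetime(2025, 3, 1, 13, 58, 0), crash))   # False: disengaged 2 min before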
Don't conflate Waymo and Tesla. Tesla FSD is by and large garbage, while Waymo is the real thing. Specifically:
1. Waymo is autonomous 100% of the time. It is not possible for a human to actually drive the car: even if you dial in support, all they can do is pick from various routes suggested by the car.
2. No, I'd guesstimate 90%+ of Waymo's mileage is city driving. Waymo in SF operates exclusively on city streets, it doesn't use the highways at all. In Phoenix, they do operate on freeways, but this only started in 2024.
3. Phoenix is driving in easy mode, but San Francisco is emphatically not. Weatherwise there are worse places, but SF drivers need to contend with fog and rain, hilly streets, street parking, a messy grid with diagonal and one-way streets, lots of mentally ill and/or drugged up people doing completely unpredictable shit in the streets, etc.
Humans remotely operate Waymos all the time. And humans routinely have to physically drive to rescue Waymos that get stuck somewhere and start blocking traffic, and famously had like 12 of them blocking a single intersection for hours.
If you think FSD is garbage then you’ve clearly never used it recently. It routinely drives me absolutely everywhere, including parking, without me touching the wheel once. Tesla’s approach to self driving is significantly more scalable and practical than waymo, and the forever repeated misleading and tired arguments saying otherwise really confuse me, since they’re simply not founded in reality
They also compare with human drivers only in places they operate and take into account driving conditions. For example, they exclude highway crashes in the human benchmarks because Waymo does not operate on highways yet.
Waymo is open about their comparison methodology and it would be helpful to read it (in the same link above) instead of assuming bad faith by default.
Tesla, on the other hand, is a completely different story.
A small fender bender is common among human drivers. A catastrophic crash (like t-boning a bus) is rare (it'd make the news, for example).
Self-driving cars, on the other hand, almost never cause fender benders. But they do t-bone buses on rare occasions, which also makes the news.
Highway is coming.
And scale will make it cheaper. It's only cheaper than Uber sometimes currently. That will change.
I've only been in a handful of Waymo rides, but in each case it's been about half the price of an Uber.
I'd immediately donate money to and vote for any politician stupid enough to say we should revoke licenses from the worst 1% of drivers.
Revoke their licenses, let them figure it out. Get a ride from friends. Take the bus. Move closer to work. You're a danger.
If they break the law and drive anyway, put them in jail.
As I understand it, they limit their range to a few cities in the American Southwest and West Coast, and don't operate in bad weather.
Defensive driving is risk mitigation.
Man Tests If Tesla Autopilot Will Crash Into Wall Painted to Look Like Road https://futurism.com/tesla-wall-autopilot
I see this whole thing is a business viability narrative wherein Tesla would be even further under water if they were forced to admit that LiDAR may possess some degree of technical superiority and could provide a reliability and safety uplift. It must have taken millions of dollars in marketing budget to erase the customer experiences around the prior models of their cars that did have this technology and performed accordingly.
A simple example. I was coming out of a business driveway, turning left onto a two lane road. It was dark out with no nearby street lights. There was a car approaching from the left. FSD could see that a car was coming. However, from the view of a camera, it was just a ball of light. There was no reasonable way the camera could discern the distance given the brightness of the headlights. I suspected this was the case and was prepared to intervene, but left FSD on to see how it would respond. Predictably, it attempted to pull out in front of the car and risked a collision.
That kind of thing simply cannot be allowed to happen with a truly autonomous vehicle, and would never happen with lidar.
Hell, just this morning on my way to work FSD was going to run a flashing red light. It's probably 95% accurate with flashing reds, but that needs to be 100%. That being said, my understanding is the current model being trained has better temporal understanding, such that flashing lights will be more comprehensible to the system. We'll see.
I suspect it would be a major undertaking to add LiDAR at this point because none of their software is written to use it.
As someone in the industry, I find the LiDAR discussion distracting from meaningful discussions about redundancy and testing.
In the water test, Rober has the Tesla driving down the center of the road, straddling the double yellow line. Autopilot will not do this, and the internal shots of the car crop out the screen. He almost certainly manually drove the car through the water and into the dummy.
One person tried to reproduce Rober's Wile E. Coyote test using FSD. FSD v12 failed to stop, but FSD v13 detected the barrier and stopped in time.[2]
Lidar would probably improve safety, but Rober's video doesn't prove anything. He decided on an outcome before he made the video.
1. https://x.com/MarkRober/status/1901449395327094898
2. https://x.com/alsetcenter/status/1902816452773810409
It's where a bunch of cycling nutters (I'm one of them) post local news stories where a driver has crashed into a building ("It wasn't wearing hi-viz!")
That said, I do think using only visual cues is a stupid self-imposed restriction. We shouldn't be making self-driving cars like humans, because humans suck horse testicles at driving.
Hardly. We drive hundreds of billions of miles every month and trillions every year. In the US alone. You're more likely to die from each of the flu, diabetes or a stroke than from a car accident.
If those don't get you, you are either going to get heart disease or cancer, or, most likely, involve yourself in a fatal accident, which will most likely be a fall off a roof or a ladder.
https://www.youtube.com/watch?v=BO1XXRwp3mc
If you can visually detect the painted wall, what makes you think that cameras on a Tesla can't be developed to do the same?
And are deliberately deceptive road features actually a common enough concern?
The issue with self-driving is (1) how it generalises across novel environments without "highly-available route data" and provider-chosen routes; (2) how failures are correlated across machines.
In safe driving, failures are uncorrelated and safety procedures generalise. We do not yet know if, say, using self-driving very widely will lead to conditions in which, in a few incidents, more people are killed than were ever hypothetically saved.
Here, without any confidence intervals, we're told we've saved ~70 airbag incidents in 20 million miles. A bad update to the fleet will easily eclipse that impact.
That's also an issue with humans though. I'd argue that traffic usually appears to flow because most of the drivers have taken a specific route daily for ages - i.e., they are not in a novel environment.
When someone drives a route for the first time, they'll be confused, make last-minute lane changes, slow down to try to make a turn, slow down more than others because they're not 100% clear where they're supposed to go, might line up for and almost make illegal turns, might try to park in impossible places, etc.
Even when someone has driven a route a handful of times, they won't know and be ready for the problem spots and where people might surprise them; they'll just know the overall direction.
(And when it is finally carved into their bones to the point where they're placing themselves perfectly in traffic according to the traffic flow and anticipating all the usual choke points and hazards, they'll get complacent.)
You've a very narrow definition of novel, which is based solely on incidental features of the environment.
For animals, a novel situation is one in which their learnt skills to adapt to the environment fail, and have to acquire new skills. In this sense, drivers are rarely in novel environments.
For statistical systems, novelty can be much more narrowly defined as simply the case where sensory data fails a similar-distribution test against historical data --- this is vastly more common, since the "statistical profile of historical cases, as measured in data" is narrow, whilst the "situations skills apply to" is wide.
An example definition of narrow/wide here: the amount of situations needed to acquire safety in a class of similar environments is exponential for narrow systems, and sublinear for wide ones. I.e., a person can adapt a skill from a single scenario, whereas a statistical system will require exponentially more data in the measures of that class of novel scenarios.
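For the intuition behind "fails a similar-distribution test", here is a minimal, purely illustrative sketch (the sensor feature, sample sizes, and threshold are all made up, not from any real system): a window of recent sensor readings is flagged as novel when a two-sample test rejects the hypothesis that it comes from the same distribution as the historical data.

    # Purely illustrative: hypothetical sensor feature and threshold.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)

    # Feature values the system has seen historically
    # (say, distance to the nearest obstacle, in meters).
    historical = rng.normal(loc=25.0, scale=5.0, size=10_000)

    def is_novel(window, reference, alpha=0.01):
        # Flag the current window as "novel" if a two-sample KS test
        # rejects the hypothesis that it matches the reference data.
        _, p = ks_2samp(window, reference)
        return p < alpha

    print(is_novel(rng.normal(25.0, 5.0, 200), historical))  # usually False
    print(is_novel(rng.normal(12.0, 8.0, 200), historical))  # True

The point of the narrow/wide distinction is that a test like this trips constantly in the real world, because "looks like the training data" is a much stricter condition than "my skills still apply".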
At some point you'll see a car careen into the side of the curb across three lanes due to slick roads and you'll be like ehhh, I'll just cut through with this route, and move on about your day.
After driving for 20 years, about the only time I got scared in a novel situation was when I was far from cell service, next to a cliff, sliding down a mountain fast in deep mud on street tires due to an unexpected downpour in southern Utah. I didn't necessarily know what to do, but I could reason it out.
I don’t really find “using a new route” difficult at all. If I miss my exit, I’m just going to keep driving and find a U-turn — no point to stress over it.
We're probably well past the point where removing all human-driven vehicles (besides bikes) from city streets and replacing them with self-driving vehicles would be a net benefit for safety, congestion, vehicle utilization, road space, and hours saved commuting. We could probably rip up a bunch of streets and turn them into parks or housing and still have everyone get to their destinations faster and safer.
The future's here, even if it still has room for improvement.
I'd think congestion would go up as AVs become more popular, with average occupancy rates per vehicle going down, since some of the time the vehicle will be driving without any passengers inside. Especially with personally owned AVs. Think of sending a no-human-passenger car to pick up the dog at the vet's office, or a car circling the neighborhood when it is inconvenient to park (parking lot full, expensive, whatever).
I imagine this route data is an extra safeguard which allows them to quantify and measure the risk to an extent, and also to speed up journeys and reduce the level of interventions.
The problem with machines-following-rules is that they're trivially susceptible to violations of this kind of safety. No doubt there are mitigations and strategies for minimising risk, but it's not avoidable.
The danger in our risk assessment of machine systems is that we test them under non-adversarial conditions and observe safety --- yet they can quickly cause more injury than they have ever prevented.
This is why we worry, of course, about "fluoride in the water" (vaccines, etc.) and other such population-wide systems... this is the same situation. A mass public health programme has the same risk profile.
You would save more lives by outlawing motorcycles; however, the lives saved would mostly be the motorcyclists' own.
Another thing people don't consider is that not all seats in a vehicle are equally safe. The driver's seat is the safest. Front passenger is less safe, but still often twice as safe as sitting in the back seat. If you believe picking up your elderly parents and then escorting them in your back seat is safer than them driving alone, you might be wrong. This is a fatality mode you can easily recognize in the FARS data. Where do most people in a robotaxi sit?
Your biggest clear win would be building better pedestrian infrastructure and improving roadway lighting to reduce pedestrian deaths.
Is there a good source for this? I was always under the impression that it was the exact opposite….
Can you provide some examples of what you mean?
In contrast, every time a flaw is discovered in a self-driving algorithm, the whole fleet of vehicles is one over-the-air update away from getting safer.
Sure, I would love to read a book while the car is driving me to visit family in the countryside, but practically I need city transportation to work and back, and to supermarkets and back, where I don't have to align with a bus schedule and make 2-3 transfers, but can instead plan my trip 30 minutes in advance and get direct pickup and drop-off.
If that would be possible then I see value in not owning a car.
https://www.msn.com/en-us/technology/tech-companies/waymo-ve...
Meaning, humans choosing to drive in more difficult conditions probably means they sometimes drive in conditions that they shouldn't.
Why is (1) an issue? Route data never gets worse.
Waymo, as far as I recall, relies on pretty active route mapping and data sharing -- i.e., the cars aren't "driving themselves" in the sense of discovering the environment as a self-driving system would.
Construction? Parade? Giant tire-crunching pothole in the middle of the freeway?
Any time there is a detour, or a construction zone, or a traffic accident, or a road flooded, or whatever else, your route data is not just "worse", it is completely wrong.
Machines don't make mistakes once they're perfected on a certain route. Sure, a human driver would be better in dynamic areas, but you don't need the machine to be perfect either; it just needs to handle the common (80%) scenarios.
> This is slightly worse than last September, when Waymo estimated an 84 percent reduction in airbag crashes over Waymo’s first 21 million miles.
nitpick: Is it really slightly worse, or is it "effectively unchanged" with such sparse numbers? At a glance, the sentence is misleading even though it might be correct on paper. Could've said: "This improvement holds from last September..."
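To put rough numbers on that (these counts are hypothetical, not Waymo's actual figures): with crash counts this small, the sampling noise on the estimated reduction is several percentage points wide, so a move from 84% to ~80% is hard to distinguish from no change at all.

    # Hypothetical counts, just to show the width of the uncertainty.
    from scipy.stats import chi2

    def poisson_ci(k, alpha=0.05):
        # Exact (Garwood) 95% confidence interval for a Poisson count k.
        lo = 0.5 * chi2.ppf(alpha / 2, 2 * k) if k > 0 else 0.0
        hi = 0.5 * chi2.ppf(1 - alpha / 2, 2 * (k + 1))
        return lo, hi

    expected = 80.0  # crashes the human benchmark would predict (made up)
    observed = 13    # crashes the AV fleet actually logged (made up)

    lo, hi = poisson_ci(observed)
    print(f"point estimate: {1 - observed / expected:.0%} reduction")
    print(f"95% CI: {1 - hi / expected:.0%} to {1 - lo / expected:.0%}")
    # Roughly 72% to 91% with these counts, so 84% vs ~80% is well inside the noise.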
>So that’s a total of 34 crashes. I don’t want to make categorical statements about these crashes because in most cases I only have Waymo’s side of the story. But it doesn’t seem like Waymo was at fault in any of them.
There's also historical data. So if you saw a spike in crashes for regular vehicles after Waymo arrives, it would be sus. But there is no such spike. Further evidence Waymo isn't causing problems for non AVs.
Of course anything is possible. But it's unlikely.
I know some really bad drivers that have almost no 'accidents', but have caused or nearly caused many. They cut off others, get confused in traffic, make wrong decisions, etc.
Waymos, by media attention at least, have a habit of confusion and other highly undesired behaviour (one example: endlessly circling a roundabout), but that doesn't qualify as a 'crash'.
I encounter Waymo cars pretty much every time I drive somewhere in San Francisco (somewhat frequent since I live there). Out of hundreds of encounters I can only think of a single instance where I thought, "what is that thing doing?!"... And in that case it was a very busy 5-way intersection where most of the human-driven cars were themselves violating some rule trying to get around turning vehicles and such. When I need a ride, I can also say I'm only using Waymo unless I'm going somewhere like the airport where they don't go; my experience in this regard is that I feel much more secure in the Waymo than with Lyft or Uber.
* a crash with a fatality
* a crash with an injury
* any crash at all
* a driverless car going around a roundabout constantly
For me, the answer is pretty clear: crashes per distance traveled remains the most important metric.
Just because this car doesn't crash, that doesn't mean it doesn't cause crashes (with fatalities, injuries, or just property damage), and that's inherently much harder to measure.
You can only develop an effective heuristic function if you are actually taking into account all the meaningful inputs.
Well, yeah? Or rather, if it's not, then I think the burden of proof is on the person making that argument.
Even taking your complaints at their maximum impact: would you rather be delayed by a thousand confused robots or run over by one certain human?
Depending on the relative rates and costs of each type of mishap it could go either way. There is a crossover point somewhere.
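A back-of-the-envelope version of that crossover, with made-up rates and costs (a sketch of the comparison, not data from anywhere):

    # All numbers hypothetical; only the comparison structure matters.
    confusions_per_m_miles = 50        # robot gets confused / blocks traffic
    crashes_per_m_miles    = 1.0       # human-caused injury crash
    cost_per_confusion = 200           # dollars of delay and annoyance
    cost_per_crash     = 500_000       # dollars of injury and damage

    robot_cost = confusions_per_m_miles * cost_per_confusion  #  10,000 per M miles
    human_cost = crashes_per_m_miles * cost_per_crash          # 500,000 per M miles
    print(robot_cost < human_cost)  # True with these numbers; flips if confusion
                                    # events become frequent or costly enough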
The fact that you're coming right out the gate with a false dichotomy and appeal to emotion on top tells me that deep down you know this.
So if we're just measuring how many crashes the robot has been involved in, we can't account for how many crashes the robot indirectly caused.
Being better than "average" is a laughably low bar for self-driving cars. Average drivers include people who drive while drunk and on drugs. It includes teenagers and those who otherwise have very little experience on the road. It includes people who are too old to be driving safely. It includes people who habitually speed and are reckless. It includes cars that are mechanically faulty or otherwise cannot be driven safely. If you compile accident statistics, the vast majority will fall into one of these categories.
For self driving to be widely adopted the bare minimum bar needs to be – is it better than the average sensible and experienced driver?
Otherwise, if you replace the 80% of good drivers with Waymos and the remaining 20% stay behind the wheel, accident rates are going to go up, not down (see the toy numbers below).
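A quick toy calculation (all rates made up) of why replacing only the good drivers with merely-average automation pushes the fleet-wide rate up:

    # Hypothetical crash rates, per million miles.
    good_share, bad_share = 0.80, 0.20
    good_rate, bad_rate = 0.5, 3.0

    human_avg = good_share * good_rate + bad_share * bad_rate   # 1.0
    # An AV that only matches the *average* human replaces the good 80%,
    # while the bad 20% keep driving themselves:
    new_avg = good_share * human_avg + bad_share * bad_rate     # 1.4

    print(human_avg, new_avg)  # fleet-wide rate rises from 1.0 to 1.4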
Any comparison of Waymo's safety should be done against taxis/Uber/Lyft/etc. A comparison with the general driving public could also be interesting, or other commercial drivers, but those are not the most relevant cohorts. I don't know the numbers, but I wouldn't be surprised if taxis/Uber/Lyft are worse per mile than general drivers since they are likely under more stress, and often work for long hours. A Waymo is no less safe at 4am, but a Lyft driver who's been up all night is a lot less safe. I would also guess that they are less likely than the general (auto) driving population to own their vehicle. Depending on who owns a vehicle, how long they've been driving (years), there's going to be a lot of interesting correlations.
But... that's the reality. If we replace human drivers with self-driving cars at random, or specifically the bad drivers above, then we've improved things.
We are not going to easily improve the average human driver.
But that's the OPs point, we aren't. Waymo crashing less than human drivers is a tautological result because Waymo is only letting the cars drive on roads where they're confident they can drive as well as humans to begin with.
If you actually ran the (very unethical) experiment of replacing a million drivers at random on random streets tomorrow with Waymo cars, you'd cause some carnage; they only operate in parts of four American cities.
They're dominated by normal drivers who had a momentary lapse in judgment or attention. This is why running a police state that goes hard on DUI and vehicle inspections doesn't make the roads as much safer as its proponents would lead you to believe.
Sorting people by past behavior runs into survivorship bias when looking back, and misses people who stop being sensible going forward. I'm personally a poor driver, but I don't drive much, so my statistics still look good.
I personally have doubts as to whether this dataset exists. Whenever there's an accident, and one party is determined to be at fault, would that party be automatically considered not to be a sensible driver?
If we don't have such a dataset, perhaps it would be impossible to measure self-driving vehicles against this benchmark?
Self-driving being the dominant form of driving is now a done deal, thanks to Waymo (and probably Tesla, though that's a policy failure imo), it's just a question of how long it takes.
My money is on a decade.
That's a ridiculous scenario. If anything, impaired drivers should be more likely to choose an automated driving option. But no need to even assume that. The standard that matters is replacing the average.
I hadn't thought of it until just now, but I guess that means the average driver is a little drunk and a little high. Kinda like how the average person has less than 2 arms.
But Waymo vehicles were recording and tracking all the traffic around them, so they ended up, right out of the starting gate, with more accurate collision numbers by running a panopticon on the drivers around them.
1. If the self-driving software detects an anomaly and chooses to disengage 60 seconds before the crash, so that the crash technically happens while not in self-driving mode, is that the fault of the software or of the human backup driver? This is a problem especially with Tesla, which will disengage and let the human take over.
2. When Waymo claims to have driven X million "rider only" miles, is that because the majority of miles are on highways, which are easy to drive with cruise control? If only 1 mile of a trip is on the end-to-end "hard parts" that require a human for getting in and out of tight city streets and parking lots, while 10 miles are on the highway, it is easy to rack up "rider only" miles. But those trips are not representative of true self-driving trips.
3. Selection bias. Waymo only operates in 3-4 cities and only in chosen weather conditions. It's easy to rack up impressive safety stats when you avoid places with harsh weather, poor signage, or complicated street patterns. But that's not representative of the real-world driving conditions most people encounter daily.
The NTSB should force them to release all of the raw data so we can do our own analysis. I would compare only full self-driving trips, end to end, on days with good weather, in the 3-4 cities where Waymo operates, and then see how much better they fare.
1. Waymo is autonomous 100% of the time. It is not possible for a human to actually drive the car: even if you dial in support, all they can do is pick from various routes suggested by the car.
2. No, I'd guesstimate 90%+ of Waymo's mileage is city driving. Waymo in SF operates exclusively on city streets, it doesn't use the highways at all. In Phoenix, they do operate on freeways, but this only started in 2024.
3. Phoenix is driving in easy mode, but San Francisco is emphatically not. Weatherwise there are worse places, but SF drivers need to contend with fog and rain, hilly streets, street parking, a messy grid with diagonal and one-way streets, lots of mentally ill and/or drugged up people doing completely unpredictable shit in the streets, etc.
If you think FSD is garbage then you've clearly never used it recently. It routinely drives me absolutely everywhere, including parking, without me touching the wheel once. Tesla's approach to self-driving is significantly more scalable and practical than Waymo's, and the forever-repeated, misleading, tired arguments saying otherwise really confuse me, since they're simply not founded in reality.
No need. Waymo releases raw data on crashes voluntarily: https://waymo.com/safety/impact/#downloads
They also compare with human drivers only in places they operate and take into account driving conditions. For example, they exclude highway crashes in the human benchmarks because Waymo does not operate on highways yet.
Waymo is open about their comparison methodology and it would be helpful to read it (in the same link above) instead of assuming bad faith by default.
Tesla, on the other hand, is a completely different story.