One upside of this is that car crashes can now be treated more like aircraft crashes. Each self-driving car has a black box in it with a ton of telemetry.
So it's no longer just "don't drink and drive", knowing that they'll probably reoffend soon anyway. Every crash, and especially every fatality, can be thoroughly investigated and should be prevented from ever happening again.
Hopefully there's enough data in the investigation so that Tesla / Waymo and all other car companies can include the circumstances of the failure in their tests.
Every lesson from aviation is earned in blood. This death wasn't necessary though. The Otto/Uber guys have been informed about their cars' difficulty sensing and stopping for pedestrians. I know this because I informed them myself when one almost ran me down in a crosswalk in SF. You can't learn anything from your lessons unless you listen. Maybe they can pause and figure out how to actually listen to reports of unsafe vehicle behavior.
this comment section should be read in front of congress the next time regulating the tech industry is on the table. these people literally think it's ok to perform experiments that kill people.
Although it's comforting that this exact situation shouldn't happen again in an Uber autonomous car... there is no mechanism to share that learning with the other car companies. There seriously fucking needs to be a consortium for exactly this purpose: sharing system failures.
Also, my problem with this is that a human death is functionally treated as finding an edge case that was missing a unit test, and as progress in testing the code... and that really bothers me somehow. We need to avoid treating deaths as progress in the pursuit of better things
> We need to avoid treating deaths as progress in the pursuit of better things
Au contraire. Go read building codes some time. There's a saying that they're "written in blood" - every bit, no matter how obvious or arbitrary seeming, was earned through some real-world failure.
The death itself isn't progress, of course. But we owe it to the person who died to learn from what happened.
>There seriously fucking needs to be a consortium for exactly this purpose: sharing system failures.
This implies they're using the same systems and underlying models. If one model hit a pedestrian because of a weakness in training data plus a sub-optimal model hyperparameter, and therefore classified a pedestrian in that specific pose as some trash on the street, how do you share that conclusion with other companies' models?
Well, for one thing, I think I have misgivings in part because Uber hasn't really demonstrated an attitude that makes me think they'll be very careful to reduce accidents. (Also, the extent to which they've bet the farm on autonomous driving presents a lot of temptation to cut corners)
It makes me uncomfortable too. I think it's because it's a real world trolley problem.
We all make similar decisions all of our lives, but nearly always at some remove. (With rare exceptions in fields like medicine and military operations.) But autonomous vehicles are a very visceral and direct implementation. The difference between the trolley problem and autonomous vehicles is in the time delay and the amount of effort and skill required in execution.
Plus, we're taking what is pretty clearly a moral decision and putting it into a system that doesn't do moral decision-making.
Thinking about different scenarios as unit tests, it shouldn't be hard for them to simulate all sorts of different scenarios and share those tests. Perhaps that would become part of a new standard for safety measures, in addition to crash tests with dummies.
In fact, I really think this will become the norm in the near future. It might even be out there already in some form.
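As a minimal sketch of what a shared scenario test could look like (the simulator API, scenario name, and thresholds here are all hypothetical, invented just to illustrate the idea of a reproducible test case any manufacturer could run against their own stack):

    # Hypothetical example: the simulator API and names below are invented
    # to illustrate the idea of a shared, reproducible scenario test.
    def test_pedestrian_crossing_mid_block(simulator, driving_stack):
        scenario = simulator.load("night_crossing_mid_block")  # fixed seed/layout
        scenario.spawn_pedestrian(offset_m=30, walking_speed_mps=1.4)
        result = simulator.run(driving_stack, scenario, duration_s=20)
        assert result.collisions == 0
        assert result.min_distance_to_pedestrian_m > 1.0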
> We need to avoid treating deaths as progress in the pursuit of better things
Then by all means let's stay at home and avoid putting humans in rockets ever again, because if you think space exploration will be done without deaths you are in for a surprise.
This is by far the most insightful comment in the entire thread.
The real news isn't "Self-Driving Car Kills Pedestrian", the real news is "Self-Driving Car Fatalities are Rare Enough to be World News". I'm one-third a class M planet away from the incident and reading about it.
With Uber's reputation, I wouldn't be surprised if they try to write an app to falsify black box telemetry in the event of a crash to put the liability on the victim. Maybe they'll call it "grey box" or "black mirror".
Does the NTSB have regulations on how black boxes are allowed to function?
That's a knee-jerk reaction that may open a can of worms. Do you need personal details of the victim as well as the driver? Say, if the victim had attempted suicide before? At the same crossroad? Or the driver had a history of depression? Would that be a violation of their privacy? Would that cause a witch hunt?
To ensure that all automotive software incorporates lessons learned from such fatalities, it would be beneficial to develop a common data set of (mostly synthetic) data replicating accident and 'near miss' scenarios.
As we understand more about the risks associated with autonomous driving, we should expand and enrich this data-set, and to ensure public safety, testing against such a dataset should be part of NHTSA / Euro NCAP testing.
I.e. NHTSA and Euro NCAP should start getting into the business of software testing.
Dr. Mary Cummings has been working on talking to NHTSA about implementing V&V (verification and validation) for autopilots/AI in unmanned vehicles for a few years now. She's also been compiling a dataset exactly like what you are talking about.
I think the idea is to build a "Traincar of Reasonability" to test future autonomous vehicles with.
They were unwilling to legally obtain a self-driving license in California because they did not want to report "disengagements" (situations in which a human driver has to intervene).
Uber would just set their self-driving cars free and eat whatever fine/punishment comes with it.
This is a strange question to ask. The regulation is not there to benefit Uber, it is to benefit the public good. Very few companies would follow regulation if it was a choice. The setup of such regulation would be for it to be criminal not to comply. And if Uber could not operate in California (or the USA) without complying, it would be in their interest to provide the requested information.
Very cynical, but if your self-driving tech is way behind your competitors', wouldn't it help to have your lousy car in an accident, so that your competitors get hit with over-regulation and you thus kill a market on which you can't compete?
Possibly there's enough business risk that if Uber doesn't, someone else will, and then they will have SDCs but Uber won't, and then Uber will go bankrupt just about instantly.
There may eventually be standard test suites that can be applied to any of the self-driving systems in simulation. This would give us a basis of comparison for safety, but also for speed and efficiency.
As well as a core set of tests that define minimum competence, these could include sensor failure, equipment failure (tire blowout, the gas pedal gets stuck, the brakes stop working) and unexpected environmental changes (ice on the road, a swerving bus).
Manufacturers could even let the public develop and run their own test cases.
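A rough sketch of how such a suite might be parameterized over injected failures; everything here, including the run_simulation callable and the failure names, is assumed rather than taken from any real program:

    import itertools

    # Hypothetical failure-injection matrix; the names and the run_simulation
    # callable are illustrative, not any real vendor's API.
    SENSOR_FAILURES = ["none", "lidar_dropout", "camera_glare", "gps_loss"]
    EQUIPMENT_FAILURES = ["none", "tire_blowout", "stuck_throttle", "brake_fade"]
    ENVIRONMENTS = ["dry", "ice", "heavy_rain", "swerving_bus"]

    def run_matrix(driving_stack, run_simulation):
        """Run every combination and return the ones that ended in a collision."""
        failed = []
        for sensor, equipment, env in itertools.product(
                SENSOR_FAILURES, EQUIPMENT_FAILURES, ENVIRONMENTS):
            result = run_simulation(driving_stack, sensor=sensor,
                                    equipment=equipment, environment=env)
            if result.collisions > 0:
                failed.append((sensor, equipment, env))
        return failed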
You ultimately have to at some stage, since any test track is a biased test by its nature.
It is more an issue of how sophisticated these vehicles should be before they're let loose on public roads. At some stage they have to be allowed onto public roads or they'd literally never make it into production.
This is what's going to happen. If you've ever seen a machine learning algorithm in action, this isn't surprising at all. Basically, they'll behave as expected some well known percentage of the time. But when they don't, the result will not be just a slight deviation from the normal algorithm, but a very unexpected one.
So we will have overall a much smaller number of deaths caused by self driving cars, but ones that do happen will be completely unexpected and scary and shitty. You can't really get away from this without putting these cars on rails.
Moreover, the human brain won't like processing these freak accidents. People die in car crashes every damn day. But we have become really accustomed to rationalizing that: "they were struck by a drunk driver", "they were texting", "they didn't see the red light", etc. These are "normal" reasons for bad accidents and we can not only rationalize them, but also rationalize how it wouldn't happen to us: "I don't drive near colleges where young kids are likely to drive drunk", "I don't text (much) while I drive", "I pay attention".
But these algorithms will not fail like that. Each accident will be unique and weird and scary. I won't be surprised if someone at some point wears a stripy outfit, and the car thinks they are a part of the road, and tries to explicitly chase them down until they are under the wheels. Or if the car suddenly decides that the road continues at a 90 degree angle off a bridge. Or that the splashes from a puddle in front is actually an oncoming car and it must swerve into the school kids crossing the perpendicular road. It'll always be tragic, unpredictable and one-off.
Very little of what goes into a current-generation self-driving car is based on machine learning [1]. The reason is exactly your point -- algorithmic approaches to self-driving are much safer and more predictable than machine learning algorithms.
Instead, LIDAR should exactly identify potential obstacles to the self-driving car on the road. The extent to which machine learning is used is to classify whether each obstacle is a pedestrian, bicyclist, another car, or something else. By doing so, the self-driving car can improve its ability to plan, e.g., if it predicts that an obstacle is a pedestrian, it can plan for the event that the pedestrian is considering crossing the road, and can reduce speed accordingly.
However, the only purpose of this reliance on the machine learning classification should be to improve the comfort of the drive (e.g., avoid abrupt braking). I believe we can reasonably expect that within reason, the self-driving car nevertheless maintains an absolute safety guarantee (i.e., it doesn't run into an obstacle). I say "within reason", because of course if a person jumps in front of a fast moving car, there is no way the car can react. I think it is highly unlikely that this is what happened in the accident -- pedestrians typically exercise reasonable precautions when crossing the road.
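To make that split concrete, here's a toy sketch in which the LIDAR-derived distance sets a hard speed limit regardless of what the classifier says, and the classification only adds a comfort margin (all names, thresholds and numbers are invented for illustration):

    import math

    def max_speed_to_stop_within(distance_m, decel_mps2=6.0, reaction_s=0.2):
        """Largest v with v*reaction_s + v^2/(2*decel) <= distance_m."""
        a, t, d = decel_mps2, reaction_s, distance_m
        return -a * t + math.sqrt((a * t) ** 2 + 2 * a * d)  # m/s

    def plan_speed(current_speed_mps, obstacle_distance_m, label, confidence):
        # Hard constraint from geometry alone (LIDAR range to the obstacle):
        max_safe = max_speed_to_stop_within(obstacle_distance_m)
        # Softer, comfort-oriented adjustment from the (fallible) classifier:
        if label == "pedestrian" and confidence > 0.5:
            max_safe = min(max_safe, 8.0)  # ~18 mph: slow early, brake gently
        return min(current_speed_mps, max_safe)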
Actually, because there's a severe shortage of LIDAR sensors (much like video cards & crypto currencies, self driving efforts have outstripped supply by a long shot), machine learning is being used quite broadly in concert with cameras to provide the model of the road ahead of the vehicle.
But that's the issue: identifying a pedestrian vs a snowman or a mailbox or a cardboard cutout is important when deciding whether to swerve left or right. It's an asymptotic problem: you'll never get 100% identification, and based on that, even the rigid algorithms will make mistakes.
LIDAR is also not perfect when the road is covered in 5 inches of snow and you can't tell where the lanes are. Or at predicting a driver that's going to swerve into your lane because they spilled coffee on their lap or had a stroke.
With erratic input, you will get erratic output. Even the best ML vision algorithm will sometimes produce shit output, which will become input to the actual driving algorithm.
> I won't be surprised if someone at some point wears a stripy outfit, and the car thinks they are a part of the road, and tries to explicitly chase them down until they are under the wheels. Or if the car suddenly decides that the road continues at a 90 degree angle off a bridge. Or that the splashes from a puddle in front is actually an oncoming car and it must swerve into the school kids crossing the perpendicular road.
Are you working on the next season of Black Mirror?
In all seriousness, my fear (and maybe not fear, maybe it's happy expectation in light of the nightmare scenarios) is that if a couple of the "weird and terrifying" accidents happen, the gov't would shut down self-driving car usage immediately.
I am definitely not. Their version of the future is too damn bleak for me.
Your fear is very much grounded in reality. US lawmakers tend to be very reactionary, except in rare cases like gun laws. So it won't take much to have restrictions imposed like this. Granted, I believe some regulation is good; after all the reason today's cars are safer than those built 20 years ago isn't because the free market decided so, but because of regulation. But self driving cars are so new and our lawmakers are by and large so ignorant, that I wouldn't trust them to create good regulation from the get go.
will never happen. at least not with any explicit depth sensors. any car with lidar has depth perception orders of magnitude better than yours and would never chase an object merely because it resembles road markings
>So we will have overall a much smaller number of deaths caused by self driving cars
Why? This is what the self-driving car industry insists on, but it has nowhere near been proven (only BS stats, under ideal conditions, no rain, no snow, selected roads, etc. -- and those as reported by the companies themselves).
I can very well imagine a driving AI that's better than the average human. But it's not a law of nature that we'll be able to write it anytime soon.
It might take decades or centuries to get out of some local maximum.
General AI research also promised the moon once, in the 60s and 70s, and it all died with little to show for it in the 80s. It was always "a few years down the line".
I'm not so certain that we're gonna get this good car AI anytime soon.
If self-driving cars 1. don't read texts whilst driving, 2. don't drink alcohol, 3. stick to the speed limit, 4. keep a 3-4s distance to the car in front, 5. don't drive whilst tired 6. don't jump stop signs / red lights it will solve a majority of crashes and deaths. [0]
The solutions to not killing people whilst driving aren't rocket science but too many humans seem to be incapable of respecting the rules.
Well, one answer is either it will be positively demonstrated to be statistically safer, or the industry won't exist. So once you start talking about what the industry is going to look like, you can assume average safety higher than manual driving.
> This is what the self-driving cars industry insists on, but has nowhere near been proven
Because machines have orders of magnitude fewer failure modes than humans, but with greater efficiency. It's why so much human labour has been automated. There's little reason to think driving will be any different.
You can insist all you like that the existing evidence is under "ideal conditions", but a) that's how humans pass their driving tests too, and b) we've gone from self-driving vehicles being a gleam in someone's eye to actual self-driving vehicles on public roads in less than 10 years. Inclement weather won't take another 10 years.
It's like you're completely ignoring the clear evidence of rapid advancement just because you think it's a hard problem, while the experts actually building these systems expect fully automated transportation fleets within 15 years.
Seems like you're jumping to conclusions here. Let's wait and see what exactly happened. I highly doubt that any of these companies just use "straight" ML. For complex applications, there's generally a combination of rules-based algorithms and statistical ML-based ones applied to solve problems. So to simplify: highly suspect predictions aren't just blindly followed.
Totally agree on your premise that we can rationalize humans killing humans - but we cannot do so with machines killing humans.
If self-driving cars really are safer in the long-run for drivers and pedestrians - maybe what people need is a better grasp on probability and statistics? And self-driving car companies need to show and publicize the data that backs this claim up to win the trust of the population.
It's a sense of control. I as a pedestrian (or driver) can understand and take precautions against human drivers. If I'm alert I can instantly see in my peripheral vision if a car is behaving oddly. That way I can seriously reduce the risk of an accident and reduce the consequences in the very most cases.
If the road was filled with self-driving cars there would be fewer accidents, but I wouldn't understand them, and with that comes distrust.
Freak accidents without explanations are not going to cut it.
Also, my gut feeling says this was a preventable accident that only happened because of many layers of poor judgement. I hope I'm wrong but that is seriously what I think of self-driving attempts in public so far. Irresponsible.
If you ask me which one is coming first, Quantum computing or "better grasp of probability and statistics" among the general public - I take the first with 99% confidence.
If a human kills a human, we have someone in the direct chain of command that we can punish. If an algorithm kills a person... who do we punish? How do we punish them in a severe enough way to encourage making things better?
Perhaps, similar to airline crashes, we should expect Uber to pay out to the family, plus a penalty fine. 1m per death? 2? What price do we put on a life?
> maybe what people need is a better grasp on probability and statistics
Definitely, though my interpretation of your statement is "self driving cars have only killed a couple people ever but human cars have killed hundreds of thousands". If that's correct, that's not going to win anyone over nor is it necessarily correct.
While the state of AZ definitely has some responsibility for allowing testing of the cars on their roads, Uber needs (imo) to be able to prove the bug that caused the accident was so much of an edge case that they couldn't easily have been able to foresee it.
Are they even testing this shit on private tracks as much as possible before releasing anything on public roads? How much are they ensuring a human driver is paying attention?
I'd be surprised if you could educate this problem away just by publishing statistics. Generally, people don't seem to integrate statistics well on an emotional level, but do make decisions based on emotional considerations.
I mean, people play the lottery. That's a guaranteed loss, statistically speaking. In fact, it's my understanding that, where I live, you're more likely to get hit by a (human-operated) car on your way to get your lottery ticket than you are to win any significant amount of money. But still people brave death for a barely-existent chance at winning money!
> These are "normal" reasons for bad accidents and we can not only rationalize them, but also rationalize how it wouldn't happen to us: "I don't drive near colleges where young kids are likely to drive drunk", "I don't text (much) while I drive", "I pay attention".
Tangent: is there a land vehicle designed for redundant control, the way planes are? I've always wondered how many accidents would have been prevented if there were classes of vehicles (e.g. large trucks) that required two drivers, where control could be transferred (either by push or pull) between the "pilot" and "copilot" of the vehicle. Like a driving-school car, but where both drivers are assumed equally fallible.
Pilots don't share control of an aircraft; the copilot may help with some tasks, but unless the captain relinquishes control of the yoke (etc.) he's flying it. So you'd still have issues where a car's "pilot" gets distracted, or makes a poor decision.
> This is what's going to happen. If you've ever seen a machine learning algorithm in action, this isn't surprising at all. Basically, they'll behave as expected some well known percentage of the time. But when they don't, the result will not be just a slight deviation from the normal algorithm, but a very unexpected one.
Do we even know yet what's happened?
It seems rather in bad taste to take someones death, not know the circumstances then wax lyrical about how it matches what you'd expect.
This is a great point. Solve it one step at a time.
But the problem is Uber's business plan is to replace drivers with autonomous vehicles ferrying passengers. i.e. take the driver cost out of the equation. Same goes for Waymo and others trying to enter/play in this game. It's always about monetization which kills/slows innovation.
Just highway-mode is not going to make a lot of money except in the trucking business and I bet they will succeed soon enough and reduce transportation costs. But passenger vehicles, not so much. May help in reducing fatigue related accidents but not a money making business for a multi-billion dollar company.
That being said, really sad for the victim in this incident.
Another quirk of people, particularly when acting via "People in Positions of Authority", is that they will need to do something to prevent a next time.
Why did this happen? What steps have we taken to make sure it will never happen again? These are both methods of analysing & fixing problems and methods of preserving decision-making authority. Sometimes this degrades into a cynical "something must be done" for the sake of doing something, but... it's not all (or even mostly) cynical. It just feels wrong going forward without correction, and we won't tolerate this from our decision makers. Even if we would, they will assume (out of habit) that we won't.
"We can't know how this happened, there is nothing to do... and this will happen again, but at a rate lower than human drivers' more or less opaque accidents"... I'm not sure how that works as an alternative to finding out what went wrong and doing something.
Your comment is easily translated into "you knew there was a glitch in the software, but you let this happen anyway." Something will need to be done.
Even if we assume that we wanted to address this for real, I fear that it will be next to impossible to actually assess whether whatever mistake caused this has actually been addressed when all the technology behind it is proprietary. I can easily see people being swayed by a well-written PR speech about how "human safety" is their "top priority" without anything substantial actually being done behind the scenes.
I think any attempts to address such issues have to come with far-ranging transparency regulations on companies, possibly including open-sourcing (most of) their code. I don't think regulatory agencies alone would have the right incentives to actually check up on this properly.
It's amazing how quickly things can happen after an accident.
In a nearby town, people have petitioned for a speed limit for a long time. Nothing happened until a 6 year old boy was killed. Within a few weeks a speed limit was in place.
> So we will have overall a much smaller number of deaths caused by self driving cars, but ones that do happen will be completely unexpected and scary and shitty. You can't really get away from this without putting these cars on rails.
One of the big questions I have about autonomous driving is if it's really a better solution to the problems it's meant to solve than more public transportation.
Do you have any experience developing autonomous driving algorithms? Because you are making a lot of broad claims about their characteristics that only someone with a fairly deep level of expertise could speculate about.
Interesting insight. While self-driving cars should reduce the number of accidents, there is going to be some subset of people who are excellent drivers for whom self-driving cars will increase their accident rates (for example, the kind of person who stays home when it's icy, but decides that their new self-driving car can cope with the conditions).
> Moreover, the human brain won't like processing these freak accidents.
I think this is really key. The ability to put the blame on something tangible, like the mistakes of another person, somehow allows for more closure than if it was a random technical failure.
I don't believe I'm delusional, my code is certainly medium-to-okay. I've put a lot of thought into this. I think autonomous cars are a very good idea, I want to work on building them and I want to own one as soon as safely possible.
Autonomous cars, as they exist right now, are not up to the task at hand.
That's why they should still have safety drivers and other safeguards in place. I don't know enough to understand their reasoning, but I was very surprised when Waymo removed safety drivers in some cases. This accident is doubly surprising, since there WAS a safety driver in the car in this case. I'll be interested to see the analysis of what happened and what failures occurred to let this happen.
Saying that future accidents will be "unexpected" and therefore scary is FUD in its purest form, fear based on uncertainty and doubt. It will be very clear exactly what happened and what the failure case was. Even as the parent stated, "it saw a person with stripes and thought they were road" - that's incredibly stupid, but very simple and explainable. It will also be explainable (and expect-able) the other failures that had to occur for that failure to cause a death.
What set of systems (multiple cameras, LIDAR, RADAR, accelerometers, maps, GPS, etc.) had to fail in what combined way for such a failure? Which one of N different individual failures could have prevented the entire failure cascade? What change needs to take place to prevent future failures of this sort - even down to equally stupid reactions to failure as "ban striped clothing"? Obviously any changes should take place in the car itself, either via software or hardware modifications, or operational changes i.e. maximum speed, minimum tolerances / safe zones, even physical modifications to configuration of redundant systems. After that should any laws or norms be changed, should roads be designed with better marking or wider lanes? Should humans have to press a button to continue driving when stopped at a crosswalk, even if they don't have to otherwise operate the car?
Lots of people have put a lot of thought into these scenarios. There is even an entire discipline around these questions and answers, functional safety. There's no one answer, but autonomy engineers are not unthinking and delusional.
We look at the alternative which is our co-workers, and people giving us our specs, and our marketing teams and think 'putting these people in charge of a large metal box travelling at 100kmh interacting with people just like them - that is a good idea'...
It is not that we think that software is particularly good, it is that we have a VERY dim view of humanity's ability to do better.
You can't prove a negative and should be careful about promising what may turn out to be false. There is potentially quite a bit of money to be made by people with the auto version of slipping on a pickle jar. When there is money to be made, talented but otherwise misguided people apply their efforts.
To put this accident in perspective, Uber self-driving cars have totaled about 2 to 3 million miles, while the fatality rate on US roads is approximately 1.18 deaths per 100 million miles [1].
This RAND study looks at the impact on lives saved and concludes that "Results Suggest That More Lives Will Be Saved the Sooner HAVs Are Deployed". Any mishap, while most unfortunate and tragic for everyone concerned, should not result in knee-jerk reactions!
It's impossible to know. The simulation needs to be very good, with thousands and thousands of all the weird situations that can occur in the real world. Without knowing how sophisticated the simulation is, and whether they are also using generative algorithms to try to break the autonomous system, you can't even ballpark it.
The people leading the development should demonstrate that they can stop for pedestrians by personally jumping out in front of them on a closed test road. If they're not able to demonstrate this, they shouldn't be putting them on public roads.
Self driving cars are still subject to the laws of physics... unless you're going to dictate that self-driving cars never go above 15mph, I wouldn't advocate jumping in front of even a "perfect" self-driving car.
Braking distance (without including any decision time) for a car at 15mph is 11 ft; at 30mph it is 45 ft. Self-driving cars won't change these limits. (Well, they may be a little better than humans at maximizing braking power through threshold braking on all 4 wheels, but it won't be dramatically different.)
So even with perfect reaction times, it will still be possible for a self-driving car to hit a human who enters its path unexpectedly.
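For reference, those figures follow from the constant-deceleration formula d = v^2/(2a); a quick check, assuming roughly 0.7 g of braking (a common dry-road figure; actual deceleration varies with car and surface):

    # Braking distance d = v^2 / (2a), ignoring reaction time.
    # ~0.7 g of deceleration is assumed; real values vary.
    G_FT_S2 = 32.2

    def braking_distance_ft(speed_mph, decel_g=0.7):
        v = speed_mph * 5280 / 3600  # mph -> ft/s
        return v ** 2 / (2 * decel_g * G_FT_S2)

    print(braking_distance_ft(15))  # ~10.7 ft
    print(braking_distance_ft(30))  # ~42.9 ft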
Once upon a time when I was learning to drive, one of the exercises my instructor used was to put me in the passenger seat while he drove, and have me try to point out every person or vehicle capable of entering the lane he was driving in, as soon as I became aware of them. Every parked vehicle along the side of a road. Every vehicle approaching or waiting to enter an intersection. Every pedestrian standing by a crosswalk or even walking along the sidewalk adjacent to the traffic lane. Every bicycle. Every vehicle traveling the opposite direction on streets without a hard median. And every time I missed one, he would point and say "what about that car over there?" or "what about that person on the sidewalk?" He made me do this until I didn't miss any.
And then he started me on watching for suspicious gaps in the parked cars along the side that could indicate a loading bay or a driveway or an alley or a hidden intersection. And so on through multiple categories of collision hazards, and then verbally indicating them to him while I was driving.
And the reason for that exercise was to drive home the point that if there's a vehicle or a person that could get into my lane, it's my job as a defensive driver to be aware of that and be ready to react. Which includes making sure I could stop or avoid in time if I needed to.
I don't know how driving is taught now, but I would hope a self-driving system could at the very least match what my human driving instructor was capable of.
Apologies for going off topic here, but I'm curious about this. I've tested every car I've ever owned and all of the recent cars with all-round disc brakes have outperformed this statistic, but I've never been able to get agreement from other people (unless I demonstrate it to them in person).
I'm talking about optimal conditions here, wet roads would change things obviously, but each of these cars was able to stop within its own car length (around 15 feet) from 30mph, simply by stamping on the brake pedal with maximum force, triggering the ABS until the car stops:
2001 Nissan Primera SE
2003 BMW 325i Touring (E46)
2007 Peugeot 307 1.6 S
2011 Ford S-Max
I can't work out how any modern car, even in the wet, could need 45 feet to stop. In case it's not obvious, this is only considering mechanical stopping distance, human reaction time (or indeed computer reaction time which is the main topic here) would extend this distance, but the usual 45 feet from 30mph statistic doesn't include reaction time either.
So, there's a performance envelope expected. Sure, someone can bungee jump off an overpass and not be avoidable. :)
But they should be willing to walk in front of it in an in-spec performance regime. There's some really good Volvo commercials along that line, with engineers standing in front of a truck.
Unlike humans who have limited vision, self-driving cars are generally able to observe all obstacles in all directions and compute, in real-time, the probability of a collision.
If a car can't observe any potential hazards that might impact it using different threat models it should drive more slowly. Blowing down a narrow street with parked cars on both sides at precisely the speed limit is not a good plan.
If self-driving cars are limited to speeds that allow them to stop within their lidar max range, is that too slow? Humans don't have the pinpoint accuracy of lidar, but our visual algorithms are very flexible and robust and also have very strong confidence signals, e.g. driving more carefully in dark rain.
Cameras are not accurate enough though, their dynamic range being terrible. Wonder how humans would fare if forced to wear goggles that approximated a lidar sensors information.
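One back-of-the-envelope way to frame that limit: speed has to satisfy reaction distance plus braking distance <= sensing range. A small sketch with assumed numbers (the 100 m of usable lidar range, 0.5 s of perception latency, and 7 m/s^2 of braking are assumptions, not any vendor's spec):

    import math

    def max_speed_for_sensing_range(range_m=100.0, latency_s=0.5, decel_mps2=7.0):
        """Largest v with v*latency + v^2/(2*decel) <= range_m."""
        a, t, d = decel_mps2, latency_s, range_m
        return -a * t + math.sqrt((a * t) ** 2 + 2 * a * d)  # m/s

    v = max_speed_for_sensing_range()
    print(round(v, 1), round(v * 3.6))  # ~34.1 m/s, ~123 km/h

Of course, in a city the binding limit usually isn't the sensor's maximum range but the distance to the nearest occlusion (a parked truck, a blind corner), which is a much smaller number, so the same formula gives a much lower speed.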
The advantage self-drivers have is in
1. minimizing distraction / optimizing perception
2. minimizing reaction time.
Theoretically self-drivers will always see everything that is relevant, unlike a human driver. And theoretically a robot-driver will always react more quickly than even a hyper-attentive human driver, who has to move meat in order to apply the brake.
But is that the actual situation we're talking about? Or are we actually talking about a situation where the person may have been jaywalking but would have had a reasonable expectation that a human driver would stop? I walk a decent distance to work every day and I don't think anyone totally adheres to the lights and crosswalks (not least because if you do you will be running into the road just as everyone races to make a right turn into the same crosswalk you're in).
Then we should at least learn their capabilities by throwing jumping dummies at them. Call it the new dummy testing. It's the least we can do. Did Travis bro do this when he set the plan in motion?
The ancient Romans would have the civil engineer stand under the bridge they'd just built while a Legion marched over it. That's why Roman structures are still around today!
>The ancient Romans would have the civil engineer stand under the bridge they'd just built while a Legion marched over it. That's why Roman structures are still around today!
A similar story comes from World War II where an alarmingly high number of parachutes were failing to open when deployed. They started picking random chutes and their packer and sent them up for a test drop. The malfunction rate dropped to near zero.
Heard a similar myth about a Danish king. He was tired of cannons blowing up, so he ordered the manufacturer to sit on top of the cannon when it was fired the first time.
The comparison is odd to me. Somehow building bridges seems more of an exact science to me than making cars drive themselves. I sure wouldn't step on a bridge if its engineer doesn't dare going under it. Shit is supposed to stand up.
"The people leading the development should demonstrate that they can stop for pedestrians by personally jumping out in front of them on a closed test road. If they're not able to demonstrate this, they shouldn't be putting them on public roads."
Although the actual logistics of your proposal might be challenging (child comments point out that some speeds/distances might be impossible to solve) your instinct is a correct one: the people designing and deploying these solutions need to have skin in the game.
I don't think truly autonomous cars are possible to deploy safely with our current level of technology but if I did ... I would want to see their family and their children driving in, and walking around, these cars before we consider wide adoption.
My opinion is that we will come to a point where self-driving cars are demonstrably, but only marginally, safer road users than humans.
From an ethical standpoint the interesting phase will only start then. It's one thing to bring a small fleet of high tech (e.g. having LIDAR) vehicles to the road. It's another to bring that technology to a saturated mass market which is primarily cost driven. Yes, I assume self-driving cars will eventually compete with human driven ones.
Will we, as a society, accept some increase in traffic fatalities in return for considerable savings that self-driving cars will bring?
Will you or me as an individual accept a slightly higher risk in exchange for tangible time savings?
> demonstrably, but only marginally, safer road users than humans.
I believe the claim is 38 multitudes better than humans, significantly better than marginally.
> accept some increase in traffic fatalities
No. And the question is more about "some" than "some increase"
> accept a slightly higher risk in exchange for tangible time savings?
Bans on texting while driving and even hands-free talking were becoming law in many states before smartphones -- and my experience is that many people readily accept this risk and the legal risk just to communicate faster. The same can be said for the risk of drunk driving -- it's a risk that thousands of Americans take all of the time.
This isn't a good argument because it implies if these AVs have successfully not killed their CEOs during a closed (i.e. controlled) test, that they are safe on public roads. But it seems like the majority of AV accidents so far involve unpredictable and uncontrolled conditions.
IOW, setting this up as some kind of quality standard gives unjustified cover ("Hey, our own CEO risked his life to prove the car was safe!") if AVs fail on the open road, because the requirements of open and closed tests are so different.
IIRC the people who programmed the first auto pilot for a major airliner were required to be on board the first test flight, so I have to think their testing methodology was pretty meticulous.
> The people leading the development of these horse-drawn carriages should demonstrate that they can stop for pedestrians by jumping in front of them on a closed test dirt-path. If they're not able to demonstrate this, they shouldn't be putting them on public carriageways
Sounds silly when compared against old tech.
Accidents happen, best we can do is try to prevent them.
This reminds me of a story I heard in college... an owner of a company that builds table saws demonstrated his saw's safety feature - a killswitch that pulls the blade down into the machine as soon as it detects flesh - by putting his hand on it while it was spinning.
very good point, I would not be surprised if Uber put those cars out with a barely working model to collect enough training data, and the human operator meant to correct such errors was tired and didn't intervene. Some basic driving test for self-driving cars or other mitigating factors needs to be added IMMEDIATELY, otherwise tons more people will probably die or be injured. Train operators need to prove they are awake and paying attention using button presses; similar things need to be required for those research vehicles if you want to allow them at all.
human driven cars are currently, as we speak, running people over on the streets, and they have human drivers who don't particularly want to run over other humans. It was inevitable that this happened, and no matter how many people self-driving cars run over, it will be worth it, since it will still be fewer than what cars are currently doing in terms of death toll.
goes to show, sdv's have to be perfect in the eyes of the public. you wouldn't seriously recommend adding a test like that to a regular driving licence.
also that testing does happen in the case of every av program i know of. closed obstacle courses with crap that pops out forcing the cars to react. look up gomentum station.
Said that a few times: how does any AI recognize whether the object in front of me is just a white plastic bag rolling in the street, or a baby that rolled out of a crib? AI cannot know and will never know. And we cannot have cars drive smoothly in traffic if every self-driven car stops before hitting an empty plastic bag.
How do we recognize whether an object in front of a car is just a plastic bag in the wind or a baby?
o At speed we're pretty ok with cars hitting things people drop on the road, examples of cars hitting wagons and babies are already plentiful
o Visual recognition & rapidly updated multi-mode sensor data, backed by hard-core neural networks and multi-'brained' ML solutions, have every reason to be way better at this job than we are given sufficient time... those models will be working with aggregate crash data of every accident real and simulated experienced by their systems and highly accurate mathematical models custom made to chose the least-poor solution to dilemmas human drivers fail routinely
o AIs have multiple vectors of possible improvement over human drivers in stressed situations. Assuming they will bluntly stop ignores how vital relative motion is to safe traffic, not to mention the car-to-car communication options available given a significant number of automated cars on the road -- arguably human drivers can never be as smooth as a fleet of automated cars, and "slamming the brakes" is how people handle uncertainty already
Presently the tech is immature, but the potential for automated transport systems lies well beyond the realms of human capabilities.
I'm surprisingly okay with self-driving cars stopping for plastic bags in the near term.
Self-driving tech is up against the marketing arm of the auto industry; they've got to be better than perfect to avoid public backlash. If they're a bit slower but far safer then I think they'll do well.
Why? It’s not a problem if Uber kills pedestrians, even in situations where it’s completely avoidable. It’s only (legally) a problem if they’re violating the rules of the road while doing so.
This opens up an interesting question going forward. We can't rely on Uber themselves to analyse the telemetry data and come to a conclusion, they're biased. So really, we need self driving car companies to turn over accident telemetry data to the police. But the police are ill equipped to process that data.
We need law enforcement to be able to keep pace with advances in technology and let's face it, they're not going to have the money to employ data analysts with the required skills. Do we need a national body for this? Is there any hope of a Republican government spending the required money to do so? (no)
Yeah, pilot here. NTSB is the right organization to handle this. The investigators over there do an amazing job of determining root cause from forensic evidence. I assume that will be the process here.
Shouldn't this actually be pretty easy? The system on the Uber should have a massive number of cameras, and lidar. Basically dashcams on speed, recording multiple angles of the accident. I would assume that everything is being recorded, for debug/testing purposes.
Aren't you basically just proposing that the NTSB analyze automobile telemetry, the same way they analyze aircraft telemetry data? Doesn't seem wildly outside the realm of possibility.
The NTSB could certainly do it. But they'd need to expand a lot in a world where self driving cars are an everyday reality, which again comes back to the question of funding.
Law enforcement already conducts traffic accident investigations where the involved drivers are biased parties of inconsistent honesty and imperfect memory that can and do both accidentally and intentionally misrepresent the facts.
I don't think self-driving cars and their sensor data, even if they rely on the operator to explain what the car “remembers”, fundamentally shift the landscape.
"The Tesla was equipped with multiple electronic systems capable of recording and transmitting vehicle performance data. NTSB investigators will continue to collect and analyze these data, and use it along with other information collected during the investigation in evaluating the crash events."
Don't we already have a huge infrastructure of police, the courts, and insurance companies in place to decide these very things everyday?
I mean, how is this different then all of the other accidents that occur every day? Yes a self-driving car is involved, but do people really think autonomous cars aren't going to be involved in fatal accidents?
Of course they are...but I've always thought that autonomous vehicles only have to be like 10% safer for them to make tons of sense to replace human drivers.
For the same reason we don't leave the investigation of plane-crashes to the attorney general and the courts.
We care about more than 'who should we punish for this' in this case.
We want to know what happened, how it happened, how we could have prevented it, how likely it is to happen again, what assumptions or overlooked details lie at the heart of this.
The required level of expertise, effort and precision here are higher than in a regular traffic accident.
Moreover, the required skill-set, knowledge base, and willingness to work in a new area here make this an exceptional case.
Finally, the outcome of this will be much more than liability in a single case. This could set the precedent for an entire field of industry. This could be the moment we find out self-driving cars are nearly a pipe-dream, or it could be the moment we kill self-driving cars at the cost of millions of preventable traffic accidents.
This investigation just might determine a lot, again, that makes it exceptional.
> We need law enforcement to be able to keep pace with advances in technology
Agree, Kumail Nanjiani (comedian) has a great rant on twitter about exactly this, ethical implications of tech-
> As a cast member on a show about tech, our job entails visiting tech companies/conferences etc. We meet ppl eager to show off new tech. Often we'll see tech that is scary. I don't mean weapons etc. I mean altering video, tech that violates privacy, stuff w obv ethical issues.
And we'll bring up our concerns to them. We are realizing that ZERO consideration seems to be given to the ethical implications of tech.
They don't even have a pat rehearsed answer. They are shocked at being asked. Which means nobody is asking those questions. "We're not making it for that reason, but the way ppl choose to use it isn't our fault. Safeguards will develop." But tech is moving so fast that there is no way humanity or laws can keep up. We don't even know how to deal with open death threats online. Only "Can we do this?" Never "Should we do this?" We've seen that same blasé attitude in how Twitter or Facebook deal w abuse/fake news. Tech has the capacity to destroy us. We see the negative effect of social media & no ethical considerations are going into dev of tech. You can't put this stuff back in the box. Once it's out there, it's out there. And there are no guardians. It's terrifying. The end.
https://twitter.com/kumailn/status/925828976882282496?lang=e...
It is scary. Big tech orgs have no incentive or motivation to even consider ethical implications; what's worse is that the American consumer has shown repeatedly that it's OK to do really shady stuff as long as it means a lower-priced product/service for the consumer. We're in a kind of dark age of tech regulation, and he's right, it is terrifying.
How about we wait for the problem to present it and actually cause harm before we throw law enforcement, government regulation, a regulatory body, etc at it?
Again... 'Uber has temporarily suspended the testing of its self-driving cars following the crash of one of them in Tempe, Ariz. The ride-hailing company had also been conducting experiments in Pittsburgh and San Francisco.'
"The Uber vehicle was reportedly driving early Monday when a woman walking outside of the crosswalk was struck.
...
Tempe Police says the vehicle was in autonomous mode at the time of the crash and a vehicle operator was also behind the wheel."
That's a very bad look; the whole point of self-driving cars is that they can react to unexpected circumstances much more quickly than a human operator, such as when someone walks out into the road. Sounds like Uber's platform may not be up to that standard yet, which makes me wonder why they're on public roads.
On the other hand, it sounds like it happened very recently; I guess we'll have to wait and see what happened.
> That's a very bad look; the whole point of self-driving cars is that they can react to unexpected circumstances much more quickly than a human operator, such as when someone walks out into the road.
Some of these accidents are unpreventable by the (autonomous) driver. If a pedestrian suddenly rushes out into the street or a cyclist swerves into your path, the deciding factor is often simply the coefficient of friction between your tyres and the road.
The autonomous vehicle and the human attendant might have made a glaring error, or they might have done everything correctly and still failed to prevent a fatality. It's far too early to say. It's undoubtedly a dent to the public image of autonomous vehicles, but hopefully the car's telemetry data will reveal whether this was a case of error, negligence or unavoidable tragedy.
>If a pedestrian suddenly rushes out into the street or a cyclist swerves into your path, the deciding factor is often simply the coefficient of friction between your tyres and the road.
This is only true for the uninitiated; never let it get to that point. I drove for Uber/Lyft/Via in New York City so I could experience and study these situations. These sorts of accidents are preventable. The following are the basics:
1.) Drive much slower in areas where a pedestrian or cyclist can suddenly cross your path.
2.) Know the danger zone. Before people "jump into traffic" or a cyclist swerves in front of you, they have to get into position; this position is the danger zone.
3.) Extra diligence in surveying the area/danger zone to predict a potential accident.
4.) Make up for the reduced speed by using highways and parkways as much as possible.
It helps that Manhattan street traffic tends to be very slow to begin with. Ideally I would like to use my knowledge to offer a service to help train autonomous vehicles to deal with these situations. It has to be simulated numerous times on a closed circuit for the machine to learn what I've learned intuitively by driving professionally in NYC.
> If a pedestrian suddenly rushes out into the street or a cyclist swerves into your path, the deciding factor is often simply the coefficient of friction between your tyres and the road.
This holds iff the control input you apply is only braking. Changing the steering angle is generally far more effective for the "pedestrian darts out from a hidden place onto the road" situation. It's far better to sharply swerve away to ensure that there's no way the pedestrian can get into your path before your car arrives there than it is to stand on the brakes and hope for the best.
Indeed, the faster you're moving, the more you should consider swerving away over braking -- take advantage of that lethal speed to clear the pedestrian's potential paths before he can get into yours.
Yes, this intentionally violates the letter of the traffic laws (and might involve colliding with a parked or moving automobile on the other side of the road) and also involves potentially unusual manoeuvring on a very short deadline; but it's far better to avoid a vehicle-pedestrian collision even if it's at the cost of possibly busting into the opposing lane / driving off the road / hitting a parked car. Decently experienced drivers can do this, I can do this (and have successfully avoided a collision with a pedestrian who ran out between parked cars on a dark and rainy night), and there's no fundamental reason that computer-controlled cars can't do this.
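A crude back-of-the-envelope comparison of the two options, assuming roughly 0.7 g is available in whichever direction you use it (real tyre grip is shared between braking and steering, so treat this as indicative only):

    import math

    G = 9.81

    def stopping_distance_m(v_mps, decel_g=0.7):
        return v_mps ** 2 / (2 * decel_g * G)

    def distance_covered_while_swerving_m(v_mps, lateral_shift_m=2.0, lat_g=0.7):
        # Time to move ~2 m sideways at constant lateral acceleration,
        # multiplied by forward speed = forward distance used by the swerve.
        t = math.sqrt(2 * lateral_shift_m / (lat_g * G))
        return v_mps * t

    v = 70 / 3.6  # 70 km/h in m/s
    print(round(stopping_distance_m(v), 1))                 # ~27.5 m to stop
    print(round(distance_covered_while_swerving_m(v), 1))   # ~14.8 m to be 2 m to the side

At 70 km/h the swerve clears a car-width of path in roughly half the distance a full stop needs, which is the point above.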
That’s technically true but the number of truly unavoidable cases is orders of magnitude lower. With human drivers, “jumped out in front of me” really means inattention 99.9% of the time and police departments historically tend to be unlikely to question such claims. (For example, here in DC there have been a number of cases where that was used to declare a cyclist at fault only to have private video footage show the opposite - which mattered a lot for medical insurance claims)
With self-driving cars this really seems to call for mandatory investigations by a third-party with access to the raw telemetry data. There’s just too much incentive for a company to say they weren’t at fault otherwise.
you seem to be giving uber a big benefit of the doubt. these autonomous cars generally go slow. tempe has flat roads with great lines of sight and clear weather. coefficient of friction? highly doubt it. the sensors should be looking at more than just straight ahead
It's not unreasonable for there to be an expectation of basically zero accidents of this nature during testing in cities. The public puts a huge amount of trust in private companies when they do this. And, pragmatically, Google, Uber, etc. all know that it would be horrible publicity for something like this to happen. One would think they'd be overly cautious and conservative to avoid even the possibility of this.
Lastly, the whole point of the human operator is to be the final safety check.
You're right that we have no idea of the cause until the data is analyzed (and the human operator interviewed). Yet, my first thought was, "Of course it'd be Uber."
If a pedestrian suddenly rushes out into the street or a cyclist swerves into your path, the deciding factor is often simply the coefficient of friction between your tyres and the road.
I mentioned in another comment that something I use to try to improve my own driving is watching videos from /r/roadcam on reddit, and trying to guess where the unexpected vehicle or pedestrian is going to come from.
Here's an example of a pedestrian suddenly appearing from between stopped cars (and coming from a traffic lane, not from a sidewalk), and a human driver spotting it and safely stopping:
Agreed that it's too early to really say one way or another. In maybe 100k miles of urban driving I've had one cyclist run into my car and a girl on her phone walk directly into the front corner; I was at a complete stop, watching, both times.
Until there's a detailed report it's really hard to say if it was preventable or not - but I think regardless the optics are bad and this is going to chill a lot of people's feelings on self driving whether or not that is an emotion backed by data.
The hope is that AVs can see 360 degrees and observe things outside of blind spots and further away. So the kid running from a yard into the road after a ball should be safer, but a person walking out from behind a parked truck wouldn't be.
I’m kind of in the Elon Musk camp here where you gotta break some eggs to make an omelette? Human-driven cars kill a lot of pedestrians today, but we can actually do something to improve the human-recognition algorithms in a self-driving car.
As long as self-driving cars represent an improvement over human drivers, I'm ok with them having a non-zero accident rate while we work out the kinks.
>> Sounds like Uber's platform may not be up to that standard yet, which makes me wonder why they're on public roads.
They're on public roads because they've decided that deploying their tech is more important than safety.
I don't see any other explanation. We know that it's pretty much impossible to prove the safety of autonomous vehicles by driving them, so Uber (and almost everyone else) have decided that, well, they don't care. They'll deploy them anyway.
How do we know that? The report by RAND corporation:
Key Findings
* Autonomous vehicles would have to be driven hundreds of millions of miles and sometimes hundreds of billions of miles to demonstrate their reliability in terms of fatalities and injuries.
* Under even aggressive testing assumptions, existing fleets would take tens and sometimes hundreds of years to drive these miles — an impossible proposition if the aim is to demonstrate their performance prior to releasing them on the roads for consumer use.
* Therefore, at least for fatalities and injuries, test-driving alone cannot provide sufficient evidence for demonstrating autonomous vehicle safety.
That report ends by saying essentially, "it may not be possible to prove the safety of self-driving cars". [1] So the value here is questionable and the same logic could apply to anything with a low frequency of occurrence. The value of air bags by this measure was not proven until they were already mandated.
[1] "even with these methods, it may not be possible
to establish the safety of autonomous vehicles prior to making them available for public use"
The main problem with self-driving cars is that they can't "read" humans' body language. A human driver can see pedestrians and cyclists (and other cars) and have a rough idea of what they're likely to do in the next few seconds, i.e. the pedestrian leaning out on the crosswalk curb is likely to step into the road soon. Even a reaction time of milliseconds can't make up for the (currently exclusive) human ability to read other humans and prepare accordingly.
They also fail to "write" human body language. Nobody else can predict what the autonomous vehicle will do.
It gets worse when a person is sitting in what appears to be the driver's seat. If the car is stopped and that person is looking toward another passenger or down at a phone, nobody will expect the vehicle to begin moving. Making eye contact with the person in that seat is meaningless, but people will infer meaning.
Humans also can't read humans' body language. A pedestrian waiting at a corner isn't waiting for the weather to change. They are waiting for passing cars to stop, as required by law. But passing cars speed by instead of stopping, unless the pedestrian does a lunge into the street -- preferably a bluff lunge, since most drivers still won't stop, preferring to race the pedestrian to the middle of the lane.
With sufficient data, I'd expect self-driving cars to be better at predicting what such leans mean. Moreover, for every one human driver who notices such a lean, there may be another human driver that doesn't even notice a pedestrian who has already started walking.
OT, but I would love to see how self-driving AIs handle something like Vietnam moped traffic and pedestrian crossings. The standard behavior for pedestrians is to walk slowly and keep walking -- stopping, even if it seems necessary to avoid a speeding driver, can be very dangerous, as generally, all of the street traffic is expecting you to go on your continuous path. It's less about reading body language than expectation of the status quo:
Could a human have reacted fast enough to stop for someone jumping out in front of them? If the person jumped out so fast that nobody could possibly have reacted in time, then it's not a stain on the technology: even with an instantaneous application of the brakes, a car still takes a while to come to a stop. If the person had jumped out ten seconds earlier and was waving her hands for help, then it's an issue.
"the whole point of self-driving cars is that they can react to unexpected circumstances much more quickly than a human"
This statement isn't really true. Do you think Uber is investing in this because it makes their passengers safer? No; with a human driver they are pretty much immune to any car crash, since the risk and the insurance hit are assumed by the driver. They are doing this to save money: they don't have to pay a driver for each trip. The safety aspect is just a perk, but Uber will be pushing this full force as long as the self-driving cars are good enough.
I'm more concerned that Uber will "lose" the video evidence showing what kind of situation it was, and we'll never be able to know if a human would have had ample time to react.
I've been saying this from the beginning: it's not enough for self-driving cars to be "better than the average driver". They need to be at least 10x better than the best drivers.
I find it crazy that so many people think it is. First off, by definition like 49% of the drivers are technically better than the "average driver".
Second, just like with "human-like translation" from machines, errors made by machines tend to be very different than errors made by humans. Perhaps self-driving cars can never cause a "drunken accident", but they could cause accidents that would almost never happen with most drivers.
Third, and perhaps the most important for fans of this "better than average driver" idea to hear, is that self-driving cars are not going to take off if they "only" kill almost as many people as humans do. If you want self-driving cars to be successful, you should be demanding that the carmakers make these systems flawless.
I have seen no evidence that any autonomous vehicle currently deployed can react faster than an alert and aware human. The commentariat tends to imagine that they can, and it's certainly plausible that they may eventually be. But I've never seen anyone official claim it, and the cars certainly don't drive as though they can quickly understand and respond to situations.
"Alert and aware human" is already a high standard, given how most humans drive in routine mode, which is well understood to be much worse than "alert and aware".
From what I've seen I wouldn't trust autonomous cars to "understand" all situations. I would trust Waymo cars to understand enough to avoid hitting anything (at a level better than a human), at the risk of being rear-ended more often. Everything I've seen from Tesla and Uber has given me significantly less confidence than that.
> I have seen no evidence that any autonomous vehicle currently deployed can react faster than an alert and aware human.
The argument I've always heard is that an autonomous systems will outperform humans mostly by being more vigilant (not getting distracted, sleepy, etc.) rather than using detectors with superhuman reaction times. Obviously, whether or not this outweighs the frequency of situations where the autonomous system gets confused when a human would not is an empirical question that will change as the tech improves.
Not a truly autonomous vehicle example but this is a case where most likely the car reacted before the driver was even aware of a problem: https://www.youtube.com/watch?v=APnN2mClkmk
I agree with the sentiment though. This has been a major selling point for this technology, but it has not been sufficiently demonstrated yet.
I live and commute in Waymo country, and see evidence of quick reactions, though I can't say for sure whether it's an alert human taking over. Mostly, the Waymo vehicles still drive conservatively.
> That's a very bad look; the whole point of self-driving cars is that they can react to unexpected circumstances much more quickly than a human operator, such as when someone walks out into the road.
No, the whole point of self-driving vehicles is that firms operating them can pay the cheaper capital and maintenance costs of the self-driving system rather than the labor cost of a driver.
I imagine these vehicles (especially test vehicles) are also recording regular video, and therefore getting a clear picture of what happened should be straightforward.
I agree completely. One thing that is important to me is that the whole self-driving field learns from every mistake. In other words, the fleet of self-driving cars should make any given mistake only once, whereas each human has to learn only from their own mistakes.
If there's one company who hasn't demonstrated it's learned from its mistakes it's Uber.
But let's extrapolate. Say one day there are 20 self driving car companies. Should they be required to share what they learn so the same mistakes aren't repeated by each company or does the competitive advantage outweigh the public benefit from this type of information sharing?
Airlines are the same way. Every time there’s been a crash they’ve learned something and changed procedures to prevent it again. It’s made flying pretty safe, unless you’re an animal flying on United...
That isn't correct. Culture is learning from other people's mistakes. The only question is whether the human can and is willing to accept the lessons (and whether the lessons are correct).
> Sounds like Uber's platform may not be up to that standard yet, which makes me wonder why they're on public roads.
Who exactly do you think is granting Google, Uber, etc. approval for trials like this on public roads? It's going to be some bureaucrat with zero ability to gauge what sort of safety standard these companies' projects have reached.
There are no standards here... what were you expecting would happen?
That's a really good point that many (myself included) aren't mentioning. Stopping distance is a universal, reaction times be damned. Would be curious to see if that played a part.
I'd also add the goal of self-driving cars is to decrease costs for ride-sharing, home-delivery companies, etc, and also to decrease congestion via coordination amongst autonomous vehicles.
It's an interesting dynamic. We want this tech to be much better than us terrible humans before we deploy it, and anything that merely matches us terrible humans is not acceptable.
> That's a very bad look; the whole point of self-driving cars is that they can react to unexpected circumstances much more quickly than a human operator, such as someone walking out into the road. Sounds like Uber's platform may not be up to that standard yet, which makes me wonder why they're on public roads.
That's an invalid conclusion to draw from this accident.
There were 34,439 fatal motor vehicle crashes in the United States in 2016 in which 37,461 deaths occurred. This resulted in 11.6 deaths per 100,000 people and 1.16 deaths per 100 million miles traveled.[1] As far as I know, the instance this article is about is the first death in an autonomous vehicle accident, meaning that all the 2016 accidents were humans driving.
Why is it that you see one death from an autonomous car and conclude that autonomous cars aren't ready to be driving, but you see 37,461 deaths from human drivers and don't conclude that humans aren't ready to be driving?
I admit that there just aren't enough autonomous cars on the road to prove conclusively that autonomous cars are safer than human-operated cars at this point. But there's absolutely no statistical evidence I know of that indicates the opposite.
EDIT: Let's be clear here: I'm not saying autonomous cars are safer than human drivers. I'm saying that one can't simply look at one death caused by an autonomous car and conclude that autonomous cars are less safe.
As of late 2017, Waymo/Google reported having driven a total of 4 million miles on roads. Given that Waymo is one of the biggest players, it's hard to see how all of autonomous cars have driven 100 million miles at this point.
Nevermind that the initial rollouts almost always involve locales in which driving conditions are relatively safe (e.g. nice weather and infrastructure). Nevermind that the vast majority of autonomous testing so far has involved the presence of a human driver, and that there have been hundreds of disengagements:
Humans may be terrible at driving, but the stats are far from being in favor of autonomous vehicles. Which is to be expected given the early stage of the tech.
He's arguing from foundations. The hypothesis is that autonomous drivers should perform better at task X than the null hypothesis (aka human drivers). So any instances where autonomous drivers do not seem to perform better are all potential counter-arguments to that hypothesis.
The fact that human drivers aren't particularly good isn't really relevant, beyond setting a correspondingly low bar within the null hypothesis.
This all ties to regulation allowing these vehicles to drive on public roads, because that regulation was permitted due to the above hypothesis and hopeful expectations that it would be true.
Obviously, I haven't seen the entirety of the data set to know fatalities per car-mile. Which would be the relevant statistic here. I also didn't see such a number in your post, which I'm assuming means you are probably not aware either. But simply providing the numbers for the null hypothesis doesn't do anything.
you have to take into account miles driven. Yes, we have 37K fatalities, but trillions of miles driven. So, it comes out something like 1.2 fatalities per 100 million miles driven for human drivers. Which means, so far, self driving cars have a worse record per 100 million miles driven. So far.
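To put rough numbers on that comparison, here is a back-of-the-envelope sketch using only figures quoted in this thread (the 2016 fatality count and rate above, and the ~4 million Waymo miles mentioned earlier). Treating Waymo's total as a stand-in for all autonomous miles is an assumption, and with a sample of one fatality the resulting rate is not statistically meaningful:

    # Back-of-the-envelope fatality rates per 100 million miles, using only
    # figures quoted in this thread. Waymo's ~4M miles is a rough stand-in
    # for total autonomous miles, which understates the true total.
    human_deaths_2016 = 37_461
    human_rate_per_100m = 1.16                 # deaths per 100 million miles (IIHS)
    human_miles = human_deaths_2016 / human_rate_per_100m * 1e8
    print(f"Implied human miles in 2016: {human_miles / 1e12:.2f} trillion")   # ~3.2 trillion

    av_deaths = 1                              # the Tempe fatality
    av_miles = 4_000_000                       # Waymo's reported total, late 2017
    av_rate_per_100m = av_deaths / (av_miles / 1e8)
    print(f"Naive AV rate: {av_rate_per_100m:.0f} per 100 million miles")      # ~25
    # One death over so few miles produces a big but statistically meaningless
    # rate, which is exactly the sample-size problem the RAND report describes.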
Actually, there was a metric of accidents per distance driven. It was often pulled out by Tesla before their first fatal accident last year (the AI didn't see the rear of a truck because of the sun or something like that).
This metric was often decried because it had poor statistical significance. It would, however, be nice to update this metric to account for this death.
Maybe now this metric would indicate that AI is more dangerous than humans; it will be interesting to see whether the perception of this flawed metric evolves in reaction...
If the solution does not solve anything, what's the point of the "solution"?
Most of the deaths on the road I hear about in my country are the result of someone doing something really reckless and stupid. "Normal" drivers do not kill themselves or others. So if a self-driving car is only as good as a dumb driver, I do not want these cars on the road.
Also, interestingly enough, there is little talk about human drivers assisted by technology rather than replaced by it. For some reason it is binary: either a human driver or self-driving. How about human drivers plus a collision avoidance system, an infrared sensing system (way too many people die simply because they walk on the road in the dark without any reflectors/lights), etc.?
> don't conclude that humans aren't ready to be driving?
I'm not sure I see anyone here making that conclusion. I think you're the only one who's brought it up.
For one, I personally sure as fuck do not think humans are ready to be driving.
> That's an invalid conclusion to draw from this accident.
The conclusion was not invalid at all. Other self-driving car companies have driven more miles than Uber has and they have done so safely. Uber has even taken its cars off the road, so even Uber agrees that their self-driving cars are not ready for the roads yet.
It is also important to take into account what Uber is like when it comes to safety and responsibility. They have knowingly hired convicted rapists for drivers, they have ridiculed and mocked and attempted to discredit their paying customers who have been raped by their employees/drivers, they have spied on journalists, they have completely disregarded human safety on numerous occasions. A company with a track record like Uber's probably should not be granted a license to experiment with technology like these self-driving cars on public roads.
They will have video for sure since they are testing, so we will see. Your statement about reaction time assumes so many things; may as well declare the AI guilty now, right?
Then by all means let's stay at home and avoid putting humans in rockets ever again, because if you think space exploration will be done without deaths you are in for a surprise.
The real news isn't "Self-Driving Car Kills Pedestrian", the real news is "Self-Driving Car Fatalities are Rare Enough to be World News". I'm one-third a class M planet away from the incident and reading about it.
Does the NTSB have regulation on how black boxes are allowed to function?
Would be better if the code and data was open for public review.
This comment by Animats in the context of self-driving trucks is quite telling. He warns precisely of this danger.
[0]: https://www.youtube.com/watch?v=LDprUza7yT4
Take a look at the cost of airplanes, even small ones.
As we understand more about the risks associated with autonomous driving, we should expand and enrich this data-set, and to ensure public safety, testing against such a dataset should be part of NHTSA / Euro NCAP testing.
I.e. NHTSA and Euro NCAP should start getting into the business of software testing.
I think the idea is to build a "Traincar of Reasonability" to test future autonomous vehicles with.
You might want to check out her research https://hal.pratt.duke.edu/research
They were unwilling to legally obtain a self-driving license in California because they did not want to report "disengagements" (situations in which a human driver has to intervene).
Uber would just set their self-driving cars free and eat whatever fine/punishment comes with it.
This is a strange question to ask. The regulation is not there to benefit Uber; it is to benefit the public good. Very few companies would follow regulation if it were a choice. The setup of such regulation would be for it to be criminal not to comply. And if Uber could not operate in California (or the USA) without complying, it would be in their interest to provide the requested information.
Uber doesn't get to pick and choose what regulations they wish to follow.
> Uber would just set their self-driving cars free and eat whatever fine/punishment comes with it.
That sounds quite negligent and a cause for heightened repercussions if anything happens.
The strange attitude you display is the _reason_ there are regulations.
It’s the only real solution to corporations misbehaving.
As well as some core set of tests that define minimum competence, these tests could include sensor failure, equipment failure (tire blowout, the gas pedal gets stuck, the brakes stop working) and unexpected environmental changes (ice on the road, a swerving bus).
Manufacturers could even let the public develop and run their own test cases.
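A sketch of what a shared, regulator-maintained scenario suite might look like; the Scenario records and the simulate() call are hypothetical stand-ins for whatever interface NHTSA / Euro NCAP would actually standardize, not any real API:

    # Hypothetical sketch of a standardized scenario suite. The scenario names
    # and the simulate() interface are invented for illustration only.
    from dataclasses import dataclass

    @dataclass
    class Scenario:
        name: str
        description: str
        max_collisions: int = 0                 # pass criterion

    SCENARIOS = [
        Scenario("ped_between_parked_cars", "pedestrian steps out from between parked cars"),
        Scenario("tire_blowout_highway", "front tire blowout at highway speed"),
        Scenario("stuck_accelerator", "accelerator input stuck at full throttle"),
        Scenario("brake_failure", "primary brake circuit fails approaching a crosswalk"),
        Scenario("ice_on_bridge", "road friction drops sharply on a bridge deck"),
        Scenario("swerving_bus", "adjacent bus swerves into the ego lane"),
    ]

    def run_certification(simulate, vehicle_model):
        """Run every scenario; the vehicle passes only if no scenario
        exceeds its allowed collision count."""
        failures = []
        for s in SCENARIOS:
            result = simulate(vehicle_model, s)          # hypothetical simulator call
            if result.collisions > s.max_collisions:
                failures.append(s.name)
        return failures

Publicly contributed test cases could then simply be additions to the SCENARIOS list.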
It is more an issue of how sophisticated these vehicles should be before they're let loose on public roads. At some stage they have to be allowed onto public roads or they'd literally never make it into production.
So we will have overall a much smaller number of deaths caused by self driving cars, but ones that do happen will be completely unexpected and scary and shitty. You can't really get away from this without putting these cars on rails.
Moreover, the human brain won't like processing these freak accidents. People die in car crashes every damn day. But we have become really accustomed to rationalizing that: "they were struck by a drunk driver", "they were texting", "they didn't see the red light", etc. These are "normal" reasons for bad accidents and we can not only rationalize them, but also rationalize how it wouldn't happen to us: "I don't drive near colleges where young kids are likely to drive drunk", "I don't text (much) while I drive", "I pay attention".
But these algorithms will not fail like that. Each accident will be unique and weird and scary. I won't be surprised if someone at some point wears a stripy outfit, and the car thinks they are a part of the road, and tries to explicitly chase them down until they are under the wheels. Or if the car suddenly decides that the road continues at a 90 degree angle off a bridge. Or that the splashes from a puddle in front is actually an oncoming car and it must swerve into the school kids crossing the perpendicular road. It'll always be tragic, unpredictable and one-off.
Instead, LIDAR should exactly identify potential obstacles to the self-driving car on the road. The extent to which machine learning is used is to classify whether each obstacle is a pedestrian, bicyclist, another car, or something else. By doing so, the self-driving car can improve its ability to plan, e.g., if it predicts that an obstacle is a pedestrian, it can plan for the event that the pedestrian is considering crossing the road, and can reduce speed accordingly.
However, the only purpose of this reliance on the machine learning classification should be to improve the comfort of the drive (e.g., avoid abrupt braking). I believe we can reasonably expect that within reason, the self-driving car nevertheless maintains an absolute safety guarantee (i.e., it doesn't run into an obstacle). I say "within reason", because of course if a person jumps in front of a fast moving car, there is no way the car can react. I think it is highly unlikely that this is what happened in the accident -- pedestrians typically exercise reasonable precautions when crossing the road.
[1] https://www.cs.cmu.edu/~zkolter/pubs/levinson-iv2011.pdf
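A minimal sketch of that split, with the classifier only allowed to make the car more cautious while the hard constraint comes from raw obstacle geometry; the deceleration, reaction time, and class names are illustrative assumptions, not anyone's real stack:

    # Sketch: classification tunes comfort, geometry enforces safety.
    # All thresholds and names are illustrative assumptions.
    import math

    def max_safe_speed(distance_m, decel=7.0, reaction_s=0.2):
        # Largest v satisfying v*reaction_s + v**2 / (2*decel) <= distance_m.
        return max(0.0, -decel * reaction_s +
                   math.sqrt((decel * reaction_s) ** 2 + 2 * decel * distance_m))

    def plan_speed(obstacles, comfort_speed):
        """obstacles: list of (distance_m, predicted_class) from LIDAR + classifier."""
        target = comfort_speed
        for distance_m, predicted_class in obstacles:
            # Comfort layer: a "pedestrian"/"cyclist"/"unknown" label only ever
            # slows us down earlier; it never relaxes anything.
            if predicted_class in ("pedestrian", "cyclist", "unknown"):
                target = min(target, 0.5 * comfort_speed)
            # Safety layer: never go faster than you can stop before the
            # obstacle, even if the classifier calls it trash on the road.
            target = min(target, max_safe_speed(distance_m))
        return target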
LIDAR is also not perfect when the road is covered in 5 inches of snow and you can't tell where the lanes are. Or at predicting a driver that's going to swerve into your lane because they spilled coffee on their lap or had a stroke.
With erratic input, you will get erratic output. Even the best ML vision algorithm will sometimes produce shit output, which will become input to the actual driving algorithm.
Are you working on the next season of Black Mirror?
In all seriousness, my fear (and maybe not fear, maybe it's happy expectation in light of the nightmare scenarios) is that if a couple of the "weird and terrifying" accidents happen, the gov't would shut down self-driving car usage immediately.
Your fear is very much grounded in reality. US lawmakers tend to be very reactionary, except in rare cases like gun laws. So it won't take much to have restrictions imposed like this. Granted, I believe some regulation is good; after all the reason today's cars are safer than those built 20 years ago isn't because the free market decided so, but because of regulation. But self driving cars are so new and our lawmakers are by and large so ignorant, that I wouldn't trust them to create good regulation from the get go.
In Black Mirror, all cars in the world will simultaneously swerve into nearby pedestrians, buildings, or other cars.
Why? This is what the self-driving car industry insists on, but it has nowhere near been proven (only BS stats, under ideal conditions, no rain, no snow, selected roads, etc. -- and those as reported by the companies themselves).
I can very well imagine an AI that drives better than the average human. But I can also imagine that our being able to write it anytime soon is not a law of nature.
It might take decades or centuries to get out of some local maxima.
General AI research also promised the moon back in the 60s and 70s, and it all died with little to show for it in the 80s. It was always "a few years down the line".
I'm not so certain that we're gonna get this good car AI anytime soon.
The solutions to not killing people whilst driving aren't rocket science but too many humans seem to be incapable of respecting the rules.
[0]: http://www.slate.com/articles/technology/future_tense/2017/1...
Because machines have orders of magnitude fewer failure modes than humans, but with greater efficiency. It's why so much human labour has been automated. There's little reason to think driving will be any different.
You can insist all you like that the existing evidence is under "ideal conditions", but a) that's how humans pass their driving tests too, and b) we've gone from self-driving vehicles being a gleam in someone's eye to actual self-driving vehicles on public roads in less than 10 years. Inclement weather won't take another 10 years.
It's like you're completely ignoring the clear evidence of rapid advancement just because you think it's a hard problem, while the experts actually building these systems expect fully automated transportation fleets within 15 years.
If self-driving cars really are safer in the long-run for drivers and pedestrians - maybe what people need is a better grasp on probability and statistics? And self-driving car companies need to show and publicize the data that backs this claim up to win the trust of the population.
If the road were filled with self-driving cars there would be fewer accidents, but I wouldn't understand them, and with that comes distrust.
Freak accidents without explanations are not going to cut it.
Also, my gut feeling says this was a preventable accident that only happened because of many layers of poor judgement. I hope I'm wrong but that is seriously what I think of self-driving attempts in public so far. Irresponsible.
Perhaps, similar to airline crashes, we should expect Uber to pay out to the family, plus a penalty fine. 1m per death? 2? What price do we put on a life?
Definitely, though my interpretation of your statement is "self driving cars have only killed a couple people ever but human cars have killed hundreds of thousands". If that's correct, that's not going to win anyone over nor is it necessarily correct.
While the state of AZ definitely has some responsibility for allowing testing of the cars on their roads, Uber needs (imo) to be able to prove the bug that caused the accident was so much of an edge case that they couldn't reasonably have foreseen it.
Are they even testing this shit on private tracks as much as possible before releasing anything on public roads? How much are they ensuring a human driver is paying attention?
Maybe because it's unexpected - the victim is not involved until they are dead?
I mean, people play the lottery. That's a guaranteed loss, statistically speaking. In fact, it's my understanding that, where I live, you're more likely to get hit by a (human-operated) car on your way to get your lottery ticket than you are to win any significant amount of money. But still people brave death for a barely-existent chance at winning money!
Tangent: is there a land vehicle designed for redundant control, the way planes are? I've always wondered how many accidents would have been prevented if there were classes of vehicles (e.g. large trucks) that required two drivers, where control could be transferred (either by push or pull) between the "pilot" and "copilot" of the vehicle. Like a driving-school car, but where both drivers are assumed equally fallible.
Do we even know yet what's happened?
It seems rather in bad taste to take someone's death, not know the circumstances, and then wax lyrical about how it matches what you'd expect.
But the problem is Uber's business plan is to replace drivers with autonomous vehicles ferrying passengers. i.e. take the driver cost out of the equation. Same goes for Waymo and others trying to enter/play in this game. It's always about monetization which kills/slows innovation.
Just highway-mode is not going to make a lot of money except in the trucking business and I bet they will succeed soon enough and reduce transportation costs. But passenger vehicles, not so much. May help in reducing fatigue related accidents but not a money making business for a multi-billion dollar company.
That being said, really sad for the victim in this incident.
Why did this happen? What steps have we taken to make sure it will never happen again? These are both methods of analysing & fixing problems and methods of preserving a decision-making authority. Sometimes this degrades into a cynical "something must be done" for the sake of doing something, but... it's not all (or even mostly) cynical. It just feels wrong going forward without correction, and we won't tolerate this from our decision makers. Even if we will, they will assume (out of habit) that we won't.
We can't know how this happened, there is nothing to do, and this will happen again, but at a rate lower than human drivers' more or less opaque accidents... I'm not sure how that works as an alternative to finding out what went wrong and doing something.
Your comment is easily translated into "you knew there was a glitch in the software, but you let this happen anyway." Something will need to be done.
I think any attempts to address such issues have to come with far-ranging transparency regulations on companies, possibly including open-sourcing (most of) their code. I don't think regulatory agencies alone would have the right incentives to actually check up on this properly.
In a nearby town, people have petitioned for a speed limit for a long time. Nothing happened until a 6 year old boy was killed. Within a few weeks a speed limit was in place.
One of the big questions I have about autonomous driving is if it's really a better solution to the problems it's meant to solve than more public transportation.
[1] https://github.com/Hyperparticle/one-pixel-attack-keras
I think this is really key. The ability to put the blame on something tangible, like the mistakes of another person, somehow allows for more closure than if it was a random technical failure.
It boggles my mind that a forum full of computer programmers can look at autonomous cars and think "this is a good idea".
They are either delusional and think their code is a gift to humanity or they haven't put much thought into it.
Autonomous cars, as they exist right now, are not up to the task at hand.
That's why they should still have safety drivers and other safeguards in place. I don't know enough to understand their reasoning, but I was very surprised when Waymo removed safety drivers in some cases. This accident is doubly surprising, since there WAS a safety driver in the car in this case. I'll be interested to see the analysis of what happened and what failures occurred to let this happen.
Saying that future accidents will be "unexpected" and therefore scary is FUD in its purest form, fear based on uncertainty and doubt. It will be very clear exactly what happened and what the failure case was. Even as the parent stated, "it saw a person with stripes and thought they were road" - that's incredibly stupid, but very simple and explainable. It will also be explainable (and expect-able) the other failures that had to occur for that failure to cause a death.
What set of systems (multiple cameras, LIDAR, RADAR, accelerometers, maps, GPS, etc.) had to fail in what combined way for such a failure? Which one of N different individual failures could have prevented the entire failure cascade? What change needs to take place to prevent future failures of this sort - even down to equally stupid reactions to failure as "ban striped clothing"? Obviously any changes should take place in the car itself, either via software or hardware modifications, or operational changes i.e. maximum speed, minimum tolerances / safe zones, even physical modifications to configuration of redundant systems. After that should any laws or norms be changed, should roads be designed with better marking or wider lanes? Should humans have to press a button to continue driving when stopped at a crosswalk, even if they don't have to otherwise operate the car?
Lots of people have put a lot of thought into these scenarios. There is even an entire discipline around these questions and answers, functional safety. There's no one answer, but autonomy engineers are not unthinking and delusional.
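As a toy illustration of why the redundancy question matters, here is the standard independent-failure calculation; the miss probabilities are made up purely for illustration, and the independence assumption is exactly what a real investigation has to test:

    # Toy redundancy estimate with made-up numbers: if each sensor channel
    # independently misses a given pedestrian with some probability, the
    # chance the whole stack misses them is the product of those probabilities.
    miss_prob = {"camera": 0.05, "lidar": 0.01, "radar": 0.10}

    combined = 1.0
    for sensor, p in miss_prob.items():
        combined *= p
    print(f"P(all channels miss) = {combined:.0e}")       # 5e-05 with these toy numbers

    # Fog, glare, or a fusion bug makes the failures correlated, and then the
    # real miss rate can be far higher than this product suggests.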
It is not that we think that software is particularly good, it is that we have a VERY dim view of humanity's ability to do better.
[1] https://www.nhtsa.gov/press-releases/usdot-releases-2016-fat...
Considering that humans would likely slow down if they see a pedestrian - even if one appeared suddenly - this is even more disconcerting.
https://www.rand.org/pubs/research_reports/RR2150.html
In fact, NHTSA statistics includes miles driven under adverse conditions (rain, snow, etc) while I'd bet that this is not the case for Uber.
[1] https://waymo.com/ontheroad/
Braking distance (without including any decision time) for a car at 15 mph is about 11 ft; at 30 mph it is about 45 ft. Self-driving cars won't change these limits. (Well, they may be a little better than humans at maximizing braking power through threshold braking on all 4 wheels, but it won't be dramatically different.)
So even with perfect reaction times, it will still be possible for a self-driving car to hit a human who enters its path unexpectedly.
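For anyone who wants to check those figures, the underlying kinematics is just d = v^2 / (2 * mu * g); here is a quick sketch assuming roughly 0.7 g of braking on dry pavement (the friction coefficient is an assumption):

    # Braking distance (no reaction time) from d = v^2 / (2 * mu * g).
    # mu = 0.7 is a typical dry-pavement assumption.
    MU, G = 0.7, 9.81
    for mph in (15, 30, 45):
        v = mph * 0.44704                       # mph -> m/s
        d_ft = v ** 2 / (2 * MU * G) * 3.281
        print(f"{mph} mph: about {d_ft:.0f} ft to stop")   # ~11, ~43, ~97 ft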
And then he started me on watching for suspicious gaps in the parked cars along the side that could indicate a loading bay or a driveway or an alley or a hidden intersection. And so on through multiple categories of collision hazards, and then verbally indicating them to him while I was driving.
And the reason for that exercise was to drive home the point that if there's a vehicle or a person that could get into my lane, it's my job as a defensive driver to be aware of that and be ready to react. Which includes making sure I could stop or avoid in time if I needed to.
I don't know how driving is taught now, but I would hope a self-driving system could at the very least match what my human driving instructor was capable of.
In fact, self-driving cars may actually improve the situation if cars actually start complying with speed limits en masse.
Apologies for going off topic here, but I'm curious about this. I've tested every car I've ever owned and all of the recent cars with all-round disc brakes have outperformed this statistic, but I've never been able to get agreement from other people (unless I demonstrate it to them in person).
I'm talking about optimal conditions here, wet roads would change things obviously, but each of these cars was able to stop within its own car length (around 15 feet) from 30mph, simply by stamping on the brake pedal with maximum force, triggering the ABS until the car stops:
2001 Nissan Primera SE
2003 BMW 325i Touring (E46)
2007 Peugeot 307 1.6 S
2011 Ford S-Max
I can't work out how any modern car, even in the wet, could need 45 feet to stop. In case it's not obvious, this is only considering mechanical stopping distance, human reaction time (or indeed computer reaction time which is the main topic here) would extend this distance, but the usual 45 feet from 30mph statistic doesn't include reaction time either.
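One way to sanity-check a measured stop like that is to convert it into the average deceleration it implies, since road tyres top out at roughly 1 g (that ceiling is an approximation); this is just the same kinematics run backwards:

    # Average deceleration implied by stopping from a given speed in a given
    # distance: a = v^2 / (2 * d), expressed in g.
    G = 9.81
    def implied_decel_g(mph, stop_ft):
        v = mph * 0.44704                       # m/s
        d = stop_ft * 0.3048                    # m
        return v ** 2 / (2 * d) / G

    print(f"{implied_decel_g(30, 45):.2f} g")   # ~0.67 g for the 45 ft figure
    print(f"{implied_decel_g(30, 15):.2f} g")   # ~2.0 g for a one-car-length stop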
But they should be willing to walk in front of it in an in-spec performance regime. There's some really good Volvo commercials along that line, with engineers standing in front of a truck.
If a car can't observe any potential hazards that might impact it using different threat models it should drive more slowly. Blowing down a narrow street with parked cars on both sides at precisely the speed limit is not a good plan.
Cameras are not accurate enough though; their dynamic range is terrible. I wonder how humans would fare if forced to wear goggles that approximated a lidar sensor's information.
Theoretically self-drivers will always see everything that is relevant, unlike a human driver. And theoretically a robot-driver will always react more quickly than even a hyper-attentive human driver, who has to move meat in order to apply the brake.
JFYI, it is most probably apocryphal: https://skeptics.stackexchange.com/questions/18558/were-roma...
still it's a nice one, I have heard the same about engineers/architects in ancient Babylon and Egypt.
"In order to gain the trust of the employees he asked his wife to enter the building while the movement was taking place."
Although the actual logistics of your proposal might be challenging (child comments point out that some speeds/distances might be impossible to solve) your instinct is a correct one: the people designing and deploying these solutions need to have skin in the game.
I don't think truly autonomous cars are possible to deploy safely with our current level of technology but if I did ... I would want to see their family and their children driving in, and walking around, these cars before we consider wide adoption.
From an ethical standpoint the interesting phase will only start then. It's one thing to bring a small fleet of high tech (e.g. having LIDAR) vehicles to the road. It's another to bring that technology to a saturated mass market which is primarily cost driven. Yes, I assume self-driving cars will eventually compete with human driven ones.
Will we, as a society, accept some increase in traffic fatalities in return for considerable savings that self-driving cars will bring?
Will you or me as an individual accept a slightly higher risk in exchange for tangible time savings?
I believe the claim is 38 multitudes better than humans, significantly better than marginally.
> accept some increase in traffic fatalities
No. And the question is more about "some" than "some increase"
> accept a slightly higher risk in exchange for tangible time savings?
Texting while driving and even hands free talking were becoming laws in many states before smart phones -- and my experience is that many people readily accept this risk and the legal risk just to communicate faster. The same can be said for the risk of drunk driving -- it's a risk that thousands of Americans take all of the time.
IOW, setting this up as some kind of quality standard gives unjustified cover ("Hey, our own CEO risked his life to prove the car was safe!") if AVs fail on the open road, because the requirements of open and closed tests are so different.
Sounds silly when compared against old tech.
Accidents happen, best we can do is try to prevent them.
https://www.youtube.com/watch?v=_47utWAoupo
also that testing does happen in the case of every av program i know of. closed obstacle courses with crap that pops out forcing the cars to react. look up gomentum station.
† i did it for you. http://gomentumstation.net/
Just like they take the repair mechanics on the first test flight after a major repair of a big airplane.
o At speed we're pretty ok with cars hitting things people drop on the road, examples of cars hitting wagons and babies are already plentiful
o Visual recognition & rapidly updated multi-mode sensor data, backed by hard-core neural networks and multi-'brained' ML solutions, have every reason to be way better at this job than we are given sufficient time... those models will be working with aggregate crash data of every accident real and simulated experienced by their systems and highly accurate mathematical models custom made to chose the least-poor solution to dilemmas human drivers fail routinely
o AIs have multiple vectors of possible improvement over human drivers in stressed situations. Assuming they will bluntly stop ignores how vital relative motion is to safe traffic, not to mention the car-to-car communication options available given a significant number of automated cars on the road -- arguably human drivers can never be as smooth as a fleet of automated cars, and "slamming the brakes" is how people handle uncertainty already
Presently the tech is immature, but the potential for automated transport systems lies well beyond the realms of human capabilities.
Self driving tech is up against the marketing arm of the auto-industry, they've got to be better than perfect to avoid public backlash. If they're a bit slower but far safer then I think they'll do well.
I doubt that's true, but even if it were, I believe the rules of the road are "always yield to pedestrians, even if you have the right-of-way".
We need law enforcement to be able to keep pace with advances in technology and let's face it, they're not going to have the money to employ data analysts with the required skills. Do we need a national body for this? Is there any hope of a Republican government spending the required money to do so? (no)
I guess we'll find out how competent they are at this, but I think they did a surprisingly good job with the previous Tesla investigation: https://dms.ntsb.gov/pubdms/search/hitlist.cfm?docketID=5998...
On the other hand, I'd wonder if increasing NTSB scope this much would drastically decrease the average quality of NTSB's work. Scaling ain't easy.
Bonus: Job creation to replace the jobs lost to automation
I don't think self-driving cars and their sensor data, even if they rely on the operator to explain what the car “remembers”, fundamentally shift the landscape.
For instance, take a look at:
https://www.ntsb.gov/investigations/AccidentReports/Pages/HW...
"The Tesla was equipped with multiple electronic systems capable of recording and transmitting vehicle performance data. NTSB investigators will continue to collect and analyze these data, and use it along with other information collected during the investigation in evaluating the crash events."
I mean, how is this different from all of the other accidents that occur every day? Yes, a self-driving car is involved, but do people really think autonomous cars aren't going to be involved in fatal accidents?
Of course they are...but I've always thought that autonomous vehicles only have to be like 10% safer for them to make tons of sense to replace human drivers.
We want to know what happened, how it happened, how we could have prevented it, how likely it is to happen again, what assumptions or overlooked details lie at the heart of this.
The required level of expertise, effort and precision here are higher than in a regular traffic accident. Moreover, the required skill-set, knowledge base, and willingness to work in a new area here make this an exceptional case.
Finally, the outcome of this will be much more than liability in a single case. This could set the precedent for an entire field of industry. This could be the moment we find out self-driving cars are nearly a pipe-dream, or it could be the moment we kill self-driving cars at the cost of millions of preventable traffic accidents. This investigation just might determine a lot, again, that makes it exceptional.
Agree. Kumail Nanjiani (comedian) has a great rant on twitter about exactly this, the ethical implications of tech:
> As a cast member on a show about tech, our job entails visiting tech companies/conferences etc. We meet ppl eager to show off new tech. Often we'll see tech that is scary. I don't mean weapons etc. I mean altering video, tech that violates privacy, stuff w obv ethical issues. And we'll bring up our concerns to them. We are realizing that ZERO consideration seems to be given to the ethical implications of tech. They don't even have a pat rehearsed answer. They are shocked at being asked. Which means nobody is asking those questions. "We're not making it for that reason but the way ppl choose to use it isn't our fault. Safeguard will develop." But tech is moving so fast. That there is no way humanity or laws can keep up. We don't even know how to deal with open death threats online. Only "Can we do this?" Never "should we do this? We've seen that same blasé attitude in how Twitter or Facebook deal w abuse/fake news. Tech has the capacity to destroy us. We see the negative effect of social media. & no ethical considerations are going into dev of tech.You can't put this stuff back in the box. Once it's out there, it's out there. And there are no guardians. It's terrifying. The end. https://twitter.com/kumailn/status/925828976882282496?lang=e...
It is scary: big tech orgs have no incentive or motivation to even consider ethical implications. What's worse is that the American consumer has shown repeatedly that you're OK to do really shady stuff as long as it means a lower-priced product/service for the consumer. We're in a kind of dark age of tech regulation, and he's right that it is terrifying.
Though that would of course depend on how insurance even looks for SDCs. Maybe big companies like Uber will self-insure.
"[Uber] said it had suspended testing of its self-driving cars in Tempe, Pittsburgh, San Francisco and Toronto"[1]
1: https://www.nytimes.com/2018/03/19/technology/uber-driverles...
It was resumed shortly after.
27 Mar 2017 (1 year ago)
https://spectrum.ieee.org/cars-that-think/transportation/sel...
"The Uber vehicle was reportedly driving early Monday when a woman walking outside of the crosswalk was struck.
...
Tempe Police says the vehicle was in autonomous mode at the time of the crash and a vehicle operator was also behind the wheel."
That's a very bad look; the whole point of self-driving cars is that they can react to unexpected circumstances much more quickly than a human operator, such as when someone walks out into the road. Sounds like Uber's platform may not be up to that standard yet, which makes me wonder why they're on public roads.
On the other hand, it sounds like it happened very recently; I guess we'll have to wait and see what happened.
Some of these accidents are unpreventable by the (autonomous) driver. If a pedestrian suddenly rushes out into the street or a cyclist swerves into your path, the deciding factor is often simply the coefficient of friction between your tyres and the road.
The autonomous vehicle and the human attendant might have made a glaring error, or they might have done everything correctly and still failed to prevent a fatality. It's far too early to say. It's undoubtedly a dent to the public image of autonomous vehicles, but hopefully the car's telemetry data will reveal whether this was a case of error, negligence or unavoidable tragedy.
This is only true for the uninitiated; never let it get to that. I drove for Uber/Lyft/Via in New York City so I could experience and study these situations. These sorts of accidents are preventable. The following are the basics:
1.) Drive much slower in areas where a pedestrian or cyclist can suddenly cross your path.
2.) Know the danger zone. Before people "jump into traffic" or a cyclist swerves in front of you, they have to get into position; this position is the danger zone.
3.) Extra diligence in surveying the area/danger zone to predict a potential accident.
4.) Make up for the reduced speed by using highways and parkways as much as possible.
It helps that Manhattan street traffic tends to be very slow to begin with. Ideally I would like to use my knowledge to offer a service to help train autonomous vehicles to deal with these situations. It has to be simulated numerous times on a closed circuit for the machine to learn what I've learned intuitively driving professionally in NYC.
This holds iff the control input you apply is only braking. Changing the steering angle is generally far more effective for the "pedestrian darts out from a hidden place onto the road" situation. It's far better to sharply swerve away to ensure that there's no way the pedestrian can get into your path before your car arrives there than it is to stand on the brakes and hope for the best.
Indeed, the faster you're moving, the more you should consider swerving away over braking -- take advantage of that lethal speed to clear the pedestrian's potential paths before he can get into yours.
Yes, this intentionally violates the letter of the traffic laws (and might involve colliding with a parked or moving automobile on the other side of the road) and also involves potentially unusual manoeuvring on a very short deadline; but it's far better to avoid a vehicle-pedestrian collision even if it's at the cost of possibly busting into the opposing lane / driving off the road / hitting a parked car. Decently experienced drivers can do this, I can do this (and have successfully avoided a collision with a pedestrian who ran out between parked cars on a dark and rainy night), and there's no fundamental reason that computer-controlled cars can't do this.
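For what it's worth, the arithmetic backs this up: the time needed to move the car a couple of metres sideways is roughly independent of speed, while stopping time and distance grow with it. A rough sketch, with all the accelerations and offsets as illustrative assumptions:

    # Rough swerve-vs-brake comparison. The lateral acceleration, braking
    # deceleration and 2 m offset are illustrative assumptions.
    import math

    LATERAL_ACCEL = 5.0      # m/s^2 of firm steering input
    BRAKE_DECEL   = 7.0      # m/s^2, hard braking on dry pavement
    OFFSET        = 2.0      # metres of sideways displacement to clear the person

    t_swerve = math.sqrt(2 * OFFSET / LATERAL_ACCEL)        # ~0.9 s at any speed
    for mph in (25, 40):
        v = mph * 0.44704
        t_stop = v / BRAKE_DECEL
        d_stop = v ** 2 / (2 * BRAKE_DECEL)
        print(f"{mph} mph: swerve clears in {t_swerve:.1f} s, "
              f"a full stop takes {t_stop:.1f} s over {d_stop:.0f} m")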
With self-driving cars this really seems to call for mandatory investigations by a third-party with access to the raw telemetry data. There’s just too much incentive for a company to say they weren’t at fault otherwise.
Lastly, the whole point of the human operator is to be the final safety check.
You're right that we have no idea of the cause until the data is analyzed (and the human operator interviewed). Yet, my first thought was, "Of course it'd be Uber."
I mentioned in another comment that something I use to try to improve my own driving is watching videos from /r/roadcam on reddit, and trying to guess where the unexpected vehicle or pedestrian is going to come from.
Here's an example of a pedestrian suddenly appearing from between stopped cars (and coming from a traffic lane, not from a sidewalk), and a human driver spotting it and safely stopping:
https://www.youtube.com/watch?v=wYvKPMaz9rI
Why can't a self-driving car do this?
Until there's a detailed report it's really hard to say if it was preventable or not - but I think regardless the optics are bad and this is going to chill a lot of people's feelings on self driving whether or not that is an emotion backed by data.
As long as self driving cars represent an improvement over human drivers, I'm ok with them having a non-zero accident rate while we work out the kinks.
They're on public roads because they've decided that deploying their tech is more important than safety.
I don't see any other explanation. We know that it's pretty much impossible to prove the safety of autonomous vehicles by driving them, so Uber (and almost everyone else) have decided that, well, they don't care. They'll deploy them anyway.
How do we know that? The report by RAND corporation:
https://www.rand.org/pubs/research_reports/RR1478.html
[1] "even with these methods, it may not be possible to establish the safety of autonomous vehicles prior to making them available for public use"
It gets worse when a person is sitting in what appears to be the driver's seat. If the car is stopped and that person is looking toward another passenger or down at a phone, nobody will expect the vehicle to begin moving. Making eye contact with the person in that seat is meaningless, but people will infer meaning.
Even on crosswalks. If I just strolled out onto a crosswalk without looking, counting on drivers who were paying no attention to stop, I'd be long dead.
https://www.youtube.com/watch?v=nKPbl3tRf_U
"the whole point of self-driving cars is that they can react to unexpected circumstances much more quickly than a human"
This statement isn't really true. Do you think Uber is investing in this because it makes their passengers safer? No; with a human driver they are pretty much immune to any car crash, since the risk and the insurance hit are assumed by the driver. They are doing this to save money: they don't have to pay a driver for each trip. The safety aspect is just a perk, but Uber will be pushing this full force as long as the self-driving cars are good enough.
I'm more concerned that Uber will "lose" the video evidence showing what kind of situation it was, and we'll never be able to know if a human would have had ample time to react.
I find it crazy that so many people think it is. First off, by definition like 49% of the drivers are technically better than the "average driver".
Second, just like with "human-like translation" from machines, errors made by machines tend to be very different than errors made by humans. Perhaps self-driving cars can never cause a "drunken accident", but they could cause accidents that would almost never happen with most drivers.
Third, and perhaps the most important for fans of this "better than average driver" idea to hear, is that self-driving cars are not going to take off if they "only" kill almost as many people as humans do. If you want self-driving cars to be successful, you should be demanding that the carmakers make these systems flawless.
From what I've seen I wouldn't trust autonomous cars to "understand" all situations. I would trust Waymo cars to understand enough to avoid hitting anything (at a level better than a human), at the risk of being rear-ended more often. Everything I've seen from Tesla and Uber has given me significantly less confidence than that.
The argument I've always heard is that an autonomous systems will outperform humans mostly by being more vigilant (not getting distracted, sleepy, etc.) rather than using detectors with superhuman reaction times. Obviously, whether or not this outweighs the frequency of situations where the autonomous system gets confused when a human would not is an empirical question that will change as the tech improves.
I agree with the sentiment though. This has been a major selling point for this technology, but it has not been sufficiently demonstrated yet.
No, the whole point of self-driving vehicles is that firms operating them can pay the cheaper capital and maintenance costs of the self-driving system rather than the labor cost of a driver.
Perhaps there will be unexpected 'issues' meaning that such data will have been lost in this case.
But let's extrapolate. Say one day there are 20 self driving car companies. Should they be required to share what they learn so the same mistakes aren't repeated by each company or does the competitive advantage outweigh the public benefit from this type of information sharing?
Who exactly do you think is granting Google, Uber, etc. approval for trials like this on public roads? It's going to be some bureaucrat with zero ability to gauge what sort of safety standard these companies' projects have reached.
There are no standards here... what were you expecting would happen?
People still need to look both ways when coming out from behind cars.
I want more details.
http://www.nytimes.com/1998/04/05/automobiles/behind-the-whe...
Uber cutting corners and playing fast and loose with legislation? Unheard of!
Here's hoping they get hit with a massive, massive wrongful death lawsuit.
That's an invalid conclusion to draw from this accident.
There were 34,439 fatal motor vehicle crashes in the United States in 2016 in which 37,461 deaths occurred. This resulted in 11.6 deaths per 100,000 people and 1.16 deaths per 100 million miles traveled.[1] As far as I know, the instance this article is about is the first death in an autonomous vehicle accident, meaning that all the 2016 accidents were humans driving.
Why is it that you see one death from an autonomous car and conclude that autonomous cars aren't ready to be driving, but you see 37,461 deaths from human drivers and don't conclude that humans aren't ready to be driving?
I admit that there just aren't enough autonomous cars on the road to prove conclusively that autonomous cars are safer than human-operated cars at this point. But there's absolutely no statistical evidence I know of that indicates the opposite.
[1] http://www.iihs.org/iihs/topics/t/general-statistics/fatalit...
EDIT: Let's be clear here: I'm not saying autonomous cars are safer than human drivers. I'm saying that one can't simply look at one death caused by an autonomous car and conclude that autonomous cars are less safe.
https://techcrunch.com/2017/11/27/waymo-racks-up-4-million-s...
Nevermind that the initial rollouts almost always involve locales in which driving conditions are relatively safe (e.g. nice weather and infrastructure). Nevermind that the vast majority of autonomous testing so far has involved the presence of a human driver, and that there have been hundreds of disengagements:
https://www.theverge.com/2018/1/31/16956902/california-dmv-s...
Humans may be terrible at driving, but the stats are far from being in favor of autonomous vehicles. Which is to be expected given the early stage of the tech.
The fact that human drivers aren't particularly good isn't really relevant, beyond setting a correspondingly low bar within the null hypothesis.
This all ties to regulation allowing these vehicles to drive on public roads, because that regulation was permitted due to the above hypothesis and hopeful expectations that it would be true.
Obviously, I haven't seen the entirety of the data set to know fatalities per car-mile. Which would be the relevant statistic here. I also didn't see such a number in your post, which I'm assuming means you are probably not aware either. But simply providing the numbers for the null hypothesis doesn't do anything.
https://www.newscientist.com/article/2095740-tesla-driver-di...
This metric was often decried because it had poor statistical significance. It would, however, be nice to update this metric to account for this death.
Maybe now this metric would indicate that AI is more dangerous than humans; it will be interesting to see whether the perception of this flawed metric evolves in reaction...
I'm not sure I see anyone here making that conclusion. I think you're the only one who's brought it up.
For one, I personally sure as fuck do not think humans are ready to be driving.
> That's an invalid conclusion to draw from this accident.
The conclusion was not invalid at all. Other self-driving car companies have driven more miles than Uber has and they have done so safely. Uber has even taken its cars off the road, so even Uber agrees that their self-driving cars are not ready for the roads yet.
It is also important to take into account what Uber is like when it comes to safety and responsibility. They have knowingly hired convicted rapists for drivers, they have ridiculed and mocked and attempted to discredit their paying customers who have been raped by their employees/drivers, they have spied on journalists, they have completely disregarded human safety on numerous occasions. A company with a track record like Uber's probably should not be granted a license to experiment with technology like these self-driving cars on public roads.
They just aren't responsible enough.
AI (short of AGI) is never guilty; its creators and operators, OTOH...