From the video, it seems that the woman didn't even look at the street (the direction the car came from). Actually, she didn't even seem to look at the street before she crossed.
I do believe that autonomous vehicles should have prevented this kind of accident (since they are 100% aware, unlike humans), but why did she act like that? Is it common there to just cross the road and expect cars to stop?
The reason I ask is cultural difference. In my experience, in some countries (e.g. Indonesia), people cross the road like they have superpowers: they use their hand to signal the car to stop while taking their time to cross. In Malaysia, even if you pay attention to the road, the drivers _seem_ to want to run you over when you cross (they speed up).
I can't speak to Arizona in particular, but in all the parts of the USA I've been to, the common wisdom has always been to look both ways before crossing the road no matter what, because regardless of who has the right of way, there's always the possibility of some idiot, drunk driver, or other hazard that ignores right of way rules.
Common wisdom isn't necessarily followed. I've seen many people cross the street blind, especially on college campuses. Worse, where I currently live (Ann Arbor), I have seen people do this when they very clearly do not have the right of way.
> From the video, it seems that the woman didn't even look at the street (the direction the car came from). Actually, she didn't even seem to look at the street before she crossed.
IME, at least in places I've driven, this is not at all uncommon for apparently-homeless people in urban and often suburban environments, at points that are not legal crosswalks and at points that are controlled crosswalks but against the controls (also, of course, at proper crosswalks without or consistent with controls, though that's less remarkable.)
> From the video, it seems that the woman didn't even look at the street (the direction the car came from). Actually, she didn't even seem to look at the street before she crossed.
You saw the pedestrian _enter_ the street in that video? I saw nothing of the sort when I watched it.
I think the point the poster is trying to make is that if the pedestrian had been looking while crossing the street, they would have had ample time to react to the oncoming car.
Car headlights can be seen from over 1000 feet away, and at the speed the Uber car was traveling, the pedestrian had about 20 seconds to see and react to the vehicle coming at her.
From the video, it's clear they were not looking for oncoming vehicles while crossing the road.
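As a rough sanity check on that 20-second figure (the 1000 ft visibility and ~35 mph speed are assumptions taken from this thread, not measured values):

```python
# Rough time-to-arrival check: headlights visible from ~1000 ft,
# car assumed to be traveling near the 35 mph limit mentioned in the thread.
distance_ft = 1000
speed_mph = 35
speed_fps = speed_mph * 5280 / 3600  # mph -> feet per second (~51.3 ft/s)
time_to_arrival_s = distance_ft / speed_fps
print(round(time_to_arrival_s, 1))  # ~19.5 s, close to the claimed 20 s
```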
In India itself, Hyderabad is like the Indonesia @dragonwriter described: the traffic keeps flowing and pedestrians walk across anywhere; the vehicles will just weave around you. Bangalore is like the Malaysia description: if a driver sees you attempt to cross, he or she will actively speed up.
> The police may not have a good way to evaluate the vastly superior dynamic range of human vision compared to the camera.
The solution is simple and even pointed out later in the article:
> Note that the streetlamps are actually not that far from her crossing point, so I think she should have been reasonably illuminated even for non-HDR cameras or the human eye, but I would need to go to the site to make a full determination of that.
Seems like the question of whether a driver would have seen the pedestrian could simply be settled by going to the site on another comparable night and seeing how visible she would have been to a human pair of eyes.
This is what the people who investigate railway accidents in the UK do.
They would go to the exact site, with an identical car, someone wearing the woman's clothing (or identical clothing), and an identical bicycle, and reenact the scene, measuring light levels and so on as the car approaches.
To all who say it's not the fault of the driver (be it human or computer): please remember that for every collision there are many, many near misses, avoided only because the driver was paying attention and took the necessary action to avoid a collision regardless of whether he had the right of way.
The right of way is not permission to plow into other road users.
What if your kid suddenly jumped into the street? Would you be OK with the driver exercising his right of way?
If AIs are allowed to drive, they are expected to avoid collisions (especially with slow-moving objects like pedestrians). This was supposed to include devices like lidars, which were supposed to perform better than any human driver could hope to.
The added information and reaction time were supposed to offset the relative dumbness of the driver.
Now we are learning that the car seems to be operating on only a visual-spectrum camera, doesn't seem to be reacting in the simplest of situations, and the driver isn't even looking at the road.
The car was supposed to start braking IMMEDIATELY once the threat was recognized. I don't see hard braking in the video. I see a human driver taking her eyes off what seems to be a phone to recognize the situation, and NO DECELERATION THAT WOULD HAVE BEEN IMMEDIATELY EVIDENT.
Sorry for the off-topic question: I see a lot of people writing 'breaking' when it should be 'braking'. Is 'breaking' an acceptable replacement for 'braking'? I am under the impression that it is not. However, English is not my first language.
> Waymo's cars, and a few others, use long-range LIDARs able to see 200m or more... There is a dark irony that this longer-range LIDAR is what the Waymo vs. Uber lawsuit was about
Prior to the accident, I'd have given high probability that Uber performed surgery on their LIDAR systems after the lawsuit, aiming to eliminate anything with legal implications. If the above comment is true and Uber is not using any long-range LIDAR, could someone with better knowledge please help stop Uber's testing of an unsafe system?
>One lesson from this accident might well be to map "illegal, but likely crossings"
Good write-up but I don’t understand this part. I would not be comfortable with these cars making moves based on calculations of the legality of observed behavior.
EDIT: Never mind! I thought about it some more and realized they mean mapping the geographical locations where pedestrians are likely to cross the road. Much more humane than I previously thought! But I remain uneasy about any system with room to benefit from selective inquiry like this.
I didn't read that as a suggestion to take the legality of a crossing into account. In fact, the suggestion is to do the opposite: instead of only mapping legal crossings, map all likely crossings without regard to their legality.
The technology should be good enough that every single meter of road is considered a "likely crossing". There shouldn't be an iota of difference between a supposed "likely crossing" and any other section of road. If the sensors and decision engines aren't good enough to be driving under normal conditions, then perhaps they shouldn't be on the road (yet).
I think it's absurd these vehicles are on the road today. The process is being rushed for the financial gain of corporations. No matter how far up the list safety of these cars is, it's not priority #1. The greed of companies vying to be the first to stake a majority share of this new market is the only real priority.
Neither humans nor robots can treat everywhere like a crosswalk. We human drivers see pedestrians standing on the edge of the road all the time, often getting ready to cross. If we slowed down every time we saw that, roads would flow poorly. On the other hand, we do slow down when we see people poised to cross at an actual crosswalk, and the law demands we do that.
Right now, live humans are better at reading the body language of pedestrians than machines are, and this is an area of research.
My suggestion is that mapping places where people are likely to cross allows you to dial up your caution in those locations: not all the way up to what the law demands at legal crosswalks, but not leaving it at the most basic level either. Of course, once a ped enters the roadway, you dial the caution up fully.
On 45mph roads, peds know not to expect cars to keep them safe: you wait for a long gap in the traffic and you run if you have to. On low-speed roads, we cross even with cars coming, expecting drivers to see us and react.
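A minimal sketch of that "dial up the caution" idea. This is entirely hypothetical: the function name, the value range, and the numbers are illustrative, not from any real driving stack.

```python
# Hypothetical sketch: scale a driving-caution factor by the mapped
# likelihood that pedestrians cross at the current location.
def caution_level(crossing_likelihood: float, ped_in_roadway: bool) -> float:
    """Return a caution factor in [0.2, 1.0] (values are illustrative)."""
    if ped_in_roadway:
        return 1.0  # once someone is actually in the road, maximum caution
    base = 0.2  # baseline vigilance everywhere, never zero
    p = min(max(crossing_likelihood, 0.0), 1.0)
    return base + (1.0 - base) * p

# An unmarked but frequently used crossing gets elevated caution;
# a random stretch of road keeps only the baseline.
print(round(caution_level(0.8, False), 2))  # 0.84
print(caution_level(0.0, False))            # 0.2
```

The point of the shape: caution never drops to zero anywhere, rises smoothly with mapped crossing likelihood, and saturates the moment a pedestrian is actually detected in the roadway.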
The meat of the article is about how LIDAR, radar, or any number of systems could have detected the pedestrian/obstacle on the road and should have stopped the car.
If those worked, considering likely crossing locations in addition to certain crossing locations like zebra crossings should only improve safety, not be required for it.
It doesn't mean that it will take legality into account; it means they should have a database of likely illegal crossings as well (they already have one of legal crossings - a map).
How do you propose building a database like that? Any patch of road which is not a legal crossing is inherently a *potential* illegal crossing. I think illegal crossings are non-deterministic and hence unpredictable.
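One hedged answer: you wouldn't enumerate every possible illegal crossing; you'd aggregate where crossings are actually observed. A toy sketch of that idea, where the grid size, coordinates, and data are all made up for illustration:

```python
from collections import Counter

# Toy sketch: bucket observed pedestrian-crossing events (e.g. from fleet
# logs) into coarse grid cells; frequently hit cells become "likely crossings".
def grid_cell(lat, lon, cell_deg=0.0001):  # ~11 m cells; resolution is arbitrary
    return (round(lat / cell_deg), round(lon / cell_deg))

# Entirely made-up observations: two near the same spot, one elsewhere.
observed = [(40.00010, -75.00020), (40.00011, -75.00021), (40.00050, -75.00060)]
counts = Counter(grid_cell(lat, lon) for lat, lon in observed)
hotspots = [cell for cell, n in counts.items() if n >= 2]
print(len(hotspots))  # 1
```

So the database doesn't need to be deterministic: it's a frequency map that concentrates caution where people demonstrably cross, which is exactly what the parent suggestion describes.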
Good summary of the technical evaluation of the equipment. I think the other thread https://news.ycombinator.com/item?id=16643056 delved pretty deep into various scenarios too. It would be best for Uber to release the raw sensor data from the event, and it might be worthwhile for the community to see exactly what happened.
This blog post, like much of the HN discussion, is full of baseless speculation. There's really nothing here except "LIDAR should've seen it." Duh.
The idea that Uber should release more data, which won't be analyzed properly but will only fuel even more speculation, doesn't make much sense.
The NTSB is investigating. They are extremely thorough. Most importantly, once all the data has been carefully analyzed and the failure mode has been understood, the NTSB will be able to devise guidelines and protocols to prevent it from happening again.
Until then I don't see how all this speculation helps except to fulfill some sort of misguided anti-Uber fantasy.
> It would be best for Uber to release the raw sensor data from the event.
I agree with that statement only if we define "best" as "for the common good." The best thing in Uber's own interest is very likely to lock that shit down and hold onto it until a judge makes them give it up. Somebody died here, and it very well may have been Uber's fault. Uber could be on the hook for a tremendous amount of punitive damages.
They'll probably fight that as much as they can, because it's not in _their_ best interest. The raw data will likely make what happened look easy to have prevented, particularly if they were capturing LIDAR at the time; even assuming no negligence, at best it will make Uber look incompetent.
People won't accept the nature of how image recognition tends to fail - i.e., seemingly randomly and catastrophically. Assuming a driverless future, we'll definitely see more of these random tragedies.
I moved to the US a few years ago and have already lost count of how many cyclists wearing dark clothing and no lights on poorly lit roads I've nearly hit at night.
I had to take a cycling proficiency test in the UK; here, no one seems to have a clue - they don't even always ride on the right side of the road.
This woman neglected any due care and her death, while tragic, is entirely her own fault.
Why does the entire Internet feel the need to apportion blame in this case?
There are four entities who could have and should have relatively straightforwardly avoided this death.
1. The woman shouldn't have crossed the street there and then.
2. The safety driver shouldn't have been looking at her phone.
3. Uber's automation should have caused the vehicle to brake much sooner.
4. That street should have been designed much safer. The design of a lit crosswalk on the median encourages people to cross there, so much stronger discouragement is required. Furthermore, a 35mph limit in an area with pedestrians is going to regularly cause pedestrian fatalities. That's a trade-off most people seem willing to make, but if you make that trade-off you have to own it. If the speed limit was 20mph that woman would be alive today.
As far as I can see, all 4 entities are 100% responsible for the death of the pedestrian.
None of those 4 entities passed the "reasonable person" test with their actions; therefore all 4 are fully responsible.
Sure, you can argue all you want about whether one entity's misbehaviour is more egregious than the others'. It doesn't matter; all 4 engaged in behaviour that regularly kills people at a rate much higher than acceptable.
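To put rough numbers on point 4, here's a back-of-the-envelope stopping-distance sketch. The 1.5 s perception-reaction time and 7 m/s² braking deceleration are textbook-style assumptions, not data from this case:

```python
# Back-of-the-envelope stopping distance: reaction distance + braking distance.
# Assumed: 1.5 s perception-reaction time, 7 m/s^2 deceleration (dry pavement).
def stopping_distance_m(speed_mph, t_react=1.5, decel=7.0):
    v = speed_mph * 0.44704  # mph -> m/s
    return v * t_react + v * v / (2 * decel)

print(round(stopping_distance_m(35), 1))  # ~41 m
print(round(stopping_distance_m(20), 1))  # ~19 m
```

Under these assumptions, a car at 20 mph stops in less than half the distance of one at 35 mph, and even when a full stop isn't possible, the impact speed drops off sharply, which is the mechanism behind the "alive at 20 mph" claim.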
You can continue to cast blame however you like for the frankly abysmal state of cyclist safety in this country; meanwhile, cyclists die with alarming regularity no matter how many of those factors are in play.
> To be clear, while the car had the right-of-way and the victim was clearly unwise to cross there, especially without checking regularly in the direction of traffic, this is a situation where any properly operating robocar should have avoided the accident anyway.
I'm not assuming you haven't read it, but I think this is the best answer I can give you.
For example, pages 23-24 of this report position a tram at various distances to check the visibility of its headlights: https://assets.publishing.service.gov.uk/media/547c8fbfed915...
This is much, much more than is done for a typical road accident, but it would seem entirely proportionate for a robot-car accident investigation at this stage.
https://www.reddit.com/r/videos/comments/86756p/police_relea...
Basically all top root comments are saying that it's the pedestrian's fault! Some math genius even proved how impossible it is to stop.
Especially compared to the quite decent Ars Technica article: https://arstechnica.com/tech-policy/2018/03/video-uber-drive...
Shame on you, Uber.
https://news.ycombinator.com/item?id=16646415
People tend to cross ignoring the traffic laws.
Either way is a fail.
Can you list all the "baseless speculations" in the discussions/blog post?