Interesting. Watch at 1/3 speed or so to see it in real time. (Self-driving car videos tend to be published sped up, so you don't see the mistakes.)
The key part of this is, how well does it box everything in the environment? That's the first level of data reduction and the one that determines whether the vehicle hits things. It's doing OK. It's not perfect; it often misses short objects, such as dogs, backpacks on the sidewalk, and once a small child in a group about to cross a street. Fireplugs seem to be misclassified as people frequently. Fixed obstacles are represented as many rectangular blocks, which is fine, and it doesn't seem to be missing important ones. No potholes seen; not clear how well it profiles the pavement. This part of the system is mostly LIDAR and geometry, with a bit of classifier. Again, this is the part of the system essential to not hitting stuff.
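To make "boxing" concrete for readers who haven't seen this stage: below is a toy sketch of the idea, clustering ground-plane LIDAR returns by proximity and wrapping each cluster in an axis-aligned box. A real stack uses far more sophisticated segmentation; every name and parameter here is made up for illustration.

```python
import numpy as np

def box_obstacles(points: np.ndarray, cluster_radius: float = 0.5):
    """Toy version of the 'boxing' pass: group ground-plane LIDAR
    returns by proximity, then wrap each cluster in an axis-aligned
    bounding box. points: (N, 2) array of x/y coordinates in meters."""
    unvisited = set(range(len(points)))
    boxes = []
    while unvisited:
        frontier = [unvisited.pop()]          # seed a new cluster
        cluster = list(frontier)
        while frontier:
            i = frontier.pop()
            near = [j for j in unvisited
                    if np.linalg.norm(points[j] - points[i]) < cluster_radius]
            unvisited.difference_update(near)
            cluster.extend(near)
            frontier.extend(near)
        pts = points[cluster]
        boxes.append((pts.min(axis=0), pts.max(axis=0)))  # (min_xy, max_xy)
    return boxes
```

One plausible reason short objects get dropped: clusters below some minimum number of returns are typically discarded as noise, and a dog or a backpack produces very few returns.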
This is a reasonable approach. Looks like Google's video from 2017. It's way better than the "dump the video into a neural net and get out steering commands" approach, or the "lane following plus anti-rear-ending, and pretend it's self driving" approach, or the 2D view plane boxing seen from some of the early systems.
Predicting what other road users are going to do is the next step. Once you have the world boxed, you're working with a manageable amount of data. A lot of what happens is still determined by geometry. Can a bike fit in that space? Can the car that's backing up get into the parking space without being obstructed by our vehicle? Those are geometry questions.
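A minimal illustration of how far plain geometry gets you, reusing hypothetical axis-aligned boxes from the previous stage (a real planner sweeps full trajectories, not a single lateral gap):

```python
def fits_through_gap(left_box, right_box, actor_width: float,
                     margin: float = 0.3) -> bool:
    """Boxes as ((min_x, min_y), (max_x, max_y)) in meters, x lateral.
    True if an actor of actor_width can pass between them with a
    safety margin on each side."""
    gap = right_box[0][0] - left_box[1][0]   # right's min_x - left's max_x
    return gap >= actor_width + 2 * margin

# e.g. a 0.8 m-wide bike needs >= 1.4 m of clear lateral space here
```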
Only after that does guessing about human intent really become an issue.
It really really bothers me that these folks are using a live city with real, non-volunteer test subjects of all ages (little kids and old folks use public streets) as a test bed for their massive car-shaped robots.
It's bad enough that people are driving cars all over the place; car collisions have killed more Americans than all the wars we've fought put together.
I'm one of those people who say, "Self-driving cars can't happen soon enough." But I don't think that justifies e.g. killing Elaine Herzberg.
Ask yourself this: why start with cars? Why not make a self-driving golf cart? Make it out of Nerf (soft foam) and program it to never go so fast that it can't brake in time to prevent a collision.
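That braking rule is one line of kinematics: reaction travel plus braking distance must fit inside the sensing range. A sketch with assumed numbers for a foam cart:

```python
import math

def max_safe_speed(sensor_range_m: float,
                   decel_mps2: float = 3.0,   # assumed gentle braking
                   reaction_s: float = 0.2) -> float:
    """Largest speed v such that v*t_react + v^2/(2a) <= sensor range,
    i.e. the cart can always stop within what it can see.
    Solves v^2 + 2*a*t*v - 2*a*d = 0 for the positive root."""
    a, t, d = decel_mps2, reaction_s, sensor_range_m
    return -a * t + math.sqrt((a * t) ** 2 + 2 * a * d)

# e.g. with 15 m of clear sensing: ~8.9 m/s (~32 km/h)
```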
Testing these heavy, fast, buggy robots in crowds of people is extremely irresponsible.
There is a different perspective that you could use (and I’m not necessarily advocating for it; hear me out):
Human-driven cars are dangerous to the tune of ~36,000 deaths per year. Every year without full self-driving, we pay some large percentage of that number in lives. Self-driving cars won't make it out of the lab without real driving on real roads in real scenarios. Taking appropriate precautions (a human safety driver, maybe two) and testing in the real world might save more lives overall than keeping the vehicles in a more lab-like setting for longer and missing some of the complexity of the real thing.
I think you're missing the narrative that the self-driving industry is pushing here. They've "solved the problem," and their fleets driving around "autonomously" are meant to demonstrate this to the public. A golf cart is obviously unsuitable for that purpose.
I think this narrative has run out of steam at this point, by the way. Waymo's valuation has gone from $175B to $105B to $30B since 2018. Zoox specifically is now laying off engineers.
You can't learn to operate in environments you don't train in. It would be great if we had a solution to the out-of-distribution inference/reward problem, but I don't think it really exists.
I'm firmly in the "Perfect for freight, questionable value for consumers" camp WRT autonomous cars. I also think it's irresponsible to do this, but the reality is, they are doing all the socially "appropriate" things, like getting approval from the city.
It's one of those things where I wonder if I'm not being too much of a curmudgeon. I'm sure the case could be made that these things will reduce overall traffic deaths long before they become perfect drivers, eh?
> It's bad enough that people are driving cars all over the place; car collisions have killed more Americans than all the wars we've fought put together.
Do you want this to stop? Then we're going to have to let these people test their self-driving cars in a real environment. The more we delay this, the more people die in car accidents.
> Why not make a self-driving golf cart?

Those were already made years ago, at the start of SDC innovation. A few companies are way beyond the worst human drivers now; there's already a massive amount of motor-vehicle death caused by intoxication that we should worry about, not fantasy robodeaths that we can count on one hand.
They're not a fantasy, they've occurred. The reason we can count them on one hand is because few cars currently drive autonomously and there is a fail-safe human at the wheel who (most of the time) is paying attention to the road.
Even if autonomous cars are better than human drivers they will still inevitably strike and kill pedestrians and vehicle occupants; they are not a magical solution to vehicle collisions.
My understanding is that the software in the car detected Herzberg and could have stopped the car in time to avoid the collision, but that the subroutine had been disabled due to too many false positives. The "safety driver" (in quotes because she turned out to be both unsafe and not actually driving at the time) is also at fault.
Certainly, Elaine Herzberg wouldn't have been killed by that car if it wasn't there, eh?
What is the general view on Zoox's progress relative to other non-Waymo players, such as Argo, Aurora, and Cruise? There is the widely reported disengagements-per-mile figure, but most robotics people know it is just smoke and mirrors meant to make the regulators go away (disclosure: I studied/researched robotics in grad school).
The general consensus among my AV friends (who work at a bunch of different companies) is that their AV driving stack is really good, but obviously not perfect.
I have no idea about their business model and how COVID affects that, though.
Each company gets to decide for itself what qualifies as a disengagement and each event’s severity, and the formula used is a VERY closely guarded secret.
Yawn. Good lane markings, no rain/snow or other bad weather, perfect road surfaces.
Just like all other self-driving demos. I'd like to see a demo like this on snow covered roads, with no lane markings visible. I think that would tell a lot more about the system's ability to deal with an imperfect world.
Well, universality is not necessarily a useful end goal. Lyft is a successful company that doesn't even operate in Canada. A solution that works only in coastal California may well be sufficient.
Lots of things come to the Bay Area and Los Angeles before anywhere else. Partly that's because coastal California is an innovation hotbed. Partly because it's a single large rich market. Since one of these that succeeds entirely in the safe parts of California would be an incredible game-changer on its own (door-to-door small-group spikable public transit!), it's still amazingly exciting.
And while lots of Americans view many things as unchangeable, that's not the case in many other places. In China, if you talk to public planners about how autonomous vehicles will handle detours, they'll just say, "Oh, we'll use transmitters to tell you. We can sign the transmitters so you know they're trustworthy." Everything about the universe is mutable.
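That "sign the transmitters" idea maps directly onto ordinary public-key signatures. A minimal sketch of the verification side, using Ed25519; the library choice and message format are mine, purely illustrative, and a real system would also need replay protection (timestamps or nonces):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_detour_beacon(authority_pubkey_bytes: bytes,
                         message: bytes, signature: bytes) -> bool:
    """Accept a roadside detour broadcast only if it was signed by
    the road authority's key, which the car ships with (pins)."""
    key = Ed25519PublicKey.from_public_bytes(authority_pubkey_bytes)
    try:
        key.verify(signature, message)
        return True
    except InvalidSignature:
        return False
```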
Yep, no ice road truckers will be autonomous in the next year, and that's okay.
Road infrastructure is going to change, by necessity. It seems like self-driving technology is as good as it can be, given current circumstances. There's no way to get self-driving cars to airplane safety numbers without on/near road devices/reflectors/computer-readable signage/etc, edge compute, better pedestrian understanding of what the cars are seeing and are capable of reacting to, and probably much more. It's time to give it the infrastructural boost it needs to become an everyday reality. We need to put sensors in the road when they're re-paved, transmitters in signs with solar chargers when they're replaced, LIDAR reflectors on the road sides and in medians, start offering clothing/accessories with transmitters or reflectors that clearly identify people as pedestrians...
Is the reason all of this makes more sense than just building tracks and trains simply that there's an evolutionary path to get there, with incremental releases along the way?
Because every time I hear this kind of thing I keep finding myself asking why/whether mass transit systems aren’t just the same end state?
In a parking lot the car would be starting from a position it can stay in, so requiring a person to intervene is one option. Also, busy parking lots don't stay busy forever, and heavy rain doesn't last forever either.
A look at the local weather could show where a storm is and give an estimate of when automated driving can resume, requiring a person otherwise. I think there are pragmatic answers to extreme situations.
I was once driving on a road I could not see at all. It was at night, in a blizzard on the road from Denver to Vail. It didn't take long until I was following the two red lights of the bus in front of me. As a human, I knew I could drive safely where the bus had been driving seconds ago.
A self-driving car would have... tell me.
Because "something better than humans can do" is the whole selling point of self-driving cars.
And plenty of us humans can and do drive reasonably-safely in snowy/icy conditions. It takes practice, like anything else driving-related, but it's something that most drivers north of the Mason-Dixon Line likely have quite a bit of practice with and have to handle a significant fraction of the year. It's not unreasonable to hold self-driving cars to the same standard.
> Yawn. Good lane markings, no rain/snow or other bad weather, perfect road surfaces.
Ok... What if I were to tell you that there is a solution to this?
The solution is simply "don't drive in those conditions".

A self-driving car can't get in a wreck caused by snowy roads if it simply doesn't drive in the snow.

Self-driving during perfect conditions is still extremely valuable, because it turns out that a whole lot of driving happens in perfect conditions.

So you would do things like prevent the taxis from running if there is any chance of rain at all. I am sure there are lots of places where rain is not an issue, and rain could be predicted ahead of time. Not everywhere, but still in many places.
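As a sketch, that gating policy is a few lines; the forecast feed shown is a hypothetical stand-in for whatever an operator would actually use:

```python
def may_dispatch(forecast: dict, rain_threshold: float = 0.05) -> bool:
    """forecast is a hypothetical feed entry, e.g.
    {"rain_prob": 0.02, "snow_prob": 0.0, "visibility_km": 10.0}.
    Keep the fleet grounded unless conditions are near-certainly clear."""
    return (forecast["rain_prob"] < rain_threshold
            and forecast["snow_prob"] == 0.0
            and forecast["visibility_km"] > 5.0)
```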
That's the good old Pareto principle for you: the last few percent are going to take a lot more effort than the first 95%.
More to the point, this falls into the category of safety-critical systems, with the added wrinkle of potentially being used daily by millions of people. Unlike many domains where software is applied, 80% of the way there doesn't cut it, nor does 95% or 99% or even 99.9%.
(Leaving aside the fact that, for all of us not actively engaged in autonomous vehicle R&D, we likely have absolutely no idea how close we are to success here, or even what all the relevant goalposts would be.)
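To make the exposure math concrete, here is the back-of-envelope with an assumed fleet size:

```python
# Hypothetical fleet: 1 million trips/day, each an independent
# chance for the stack to mishandle something critical.
trips_per_day = 1_000_000
for reliability in (0.80, 0.95, 0.99, 0.999):
    failures = trips_per_day * (1 - reliability)
    print(f"{reliability:.1%} per-trip success -> {failures:,.0f} bad trips/day")
# Even 99.9% still leaves ~1,000 incident-grade trips every single day.
```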
Possibly for driving in cities and highways on clear days, but we are nowhere close to having autonomous vehicles even match human drivers in 100% of possible/likely driving circumstances and road/weather conditions. That last few percent is the highest hurdle.
All in all I'm quite impressed with the demonstration. It was way more thorough than previous videos I've seen. The main things the car is failing at from what I see are the hard things: Object permanence and ad-hoc reasoning. So no surprises.
Regarding object permanence: I was impressed overall with their detection. Still, you could see kids walking close to parents blink in and out of awareness of the car. Now I'm not saying humans are very good at tracking a multitude of actors. So at some point the machines will be "good enough". But that point seems way off when significant objects like kids can just disappear from awareness when they pass behind a stroller.
And about the ad-hoc reasoning: They have the whole city mapped out! Including traffic lights and turn restrictions. I'm not even clear whether they try to detect the signs at all. I'd assume that they have an operations center that hot-patches the map with everything cropping up during the day. So the cars would send in unexpected changes to the road and they would classify those changes and patch the map. Meaning the car is tethered to that feed and not autonomous in the strictest sense. Sure, such a center would be marginal cost given a large enough fleet. Still it's a subscription you'd need for your own robocar.
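For illustration only (none of this is from Zoox), the speculated hot-patch feed might have roughly this shape:

```python
from dataclasses import dataclass

@dataclass
class MapPatch:
    """One ops-center-reviewed change to the prior map."""
    patch_id: int
    element_id: str    # e.g. "lane_1432" or "signal_88"
    change: str        # e.g. "closed", "new_stop_sign", "shifted_geometry"
    payload: dict      # geometry / metadata for the change
    expires_s: float   # construction zones etc. age out

class OnboardMap:
    def __init__(self, base_map: dict):
        self.elements = dict(base_map)   # element_id -> attributes
        self.applied: set[int] = set()

    def apply(self, patch: MapPatch) -> None:
        """Idempotently overlay a patch; the car drives on the stale
        prior until the feed delivers it (hence the 'tether')."""
        if patch.patch_id in self.applied:
            return
        self.elements.setdefault(patch.element_id, {}).update(
            {"change": patch.change, **patch.payload})
        self.applied.add(patch.patch_id)
```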
They mention a lot of things they are prepared for. And I can't help but think "oh, they're really good" when they say "detect backed up lanes" or "creep into intersections". But that always leaves the question of what happens when they're not prepared for something. When the rules don't fit. Can the car go over a curb if the situation warrants it? Does it back out of a blocked-off section? Is it even able to weigh whether backing out is an option at this point?

So I'd like to see a "what we're currently stuck at" video. But I understand one can't very well attract investors with such a video.
I agree with a significant amount of your point, but with regard to object permanence, I would guess that they have prediction algorithms that don't rely only on current-time perception, so if something blips out of sight for a second the system will still infer/predict its existence (for a time; obviously if something stays hidden for long, the system will stop overriding what it currently perceives).
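That is standard practice in multi-object tracking: a track "coasts" on its motion model while unmatched and is only deleted after several missed frames. A minimal alpha-beta version, with made-up gains and thresholds:

```python
class CoastingTrack:
    """Alpha-beta tracker that keeps an object alive through short
    occlusions by coasting on its constant-velocity prediction."""
    def __init__(self, x: float, y: float, max_coast_frames: int = 15,
                 alpha: float = 0.85, beta: float = 0.3):
        self.x, self.y, self.vx, self.vy = x, y, 0.0, 0.0
        self.alpha, self.beta = alpha, beta
        self.frames_unmatched = 0
        self.max_coast = max_coast_frames

    def predict(self, dt: float) -> None:
        """Run every frame, with or without a detection."""
        self.x += self.vx * dt
        self.y += self.vy * dt

    def correct(self, mx: float, my: float, dt: float) -> None:
        """Matched detection: fold the residual into position/velocity."""
        rx, ry = mx - self.x, my - self.y
        self.x += self.alpha * rx
        self.y += self.alpha * ry
        self.vx += self.beta * rx / dt
        self.vy += self.beta * ry / dt
        self.frames_unmatched = 0

    def miss(self) -> None:
        """No detection this frame: keep coasting, but eventually give up."""
        self.frames_unmatched += 1

    @property
    def alive(self) -> bool:
        return self.frames_unmatched <= self.max_coast
```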
I'd be very interested to know how that works. But I don't think they have it.
The boxes they draw are very wobbly, and dimensions expand and contract directly with sensor input. Maybe they only show fused output (in itself an achievement) and there is a later step they don't show. That would be weird, though, because if they wanted to brag about their model they would definitely show it if it was any good.
> Handling yellow lights properly, involves us having to predict how long they will remain yellow for
No. That isn't how yellow lights work in the US. If the light turns yellow and you have enough space/time to make a safe stop you do it. There's no need to predict the remaining time on yellow phase. We don't need robot cars bending these rules.
Not sure why you're being downvoted, but I think this is a classic example of why self-driving is so hard. They're not bending the rules, just copying what humans do. We also predict how long a light will be yellow for, but do it naturally (if you just saw it turn from green, or it was yellow as soon as it was in your line of sight).
In Delaware on Route 1, if you follow this advice you are likely to get rear-ended. They have traffic lights on a 50 mph route that stay yellow for a long time.
I often find myself slowing down to a stop then awkwardly realizing I’m stopped with multiple seconds of yellow remaining and drivers honking behind me.
No. That's what the law says but not how you drive.
Suppose you're 4 seconds from a yellow light, traveling at high speed. You can slam on your brakes and make a very abrupt stop, or you can cruise through that light and continue on your way.

If the light is about to turn red, you should probably slam on your brakes, because you risk being T-boned in the intersection.

If you have time to get through the yellow light, before the cross light turns green, you should keep going, because slamming on your brakes is mildly dangerous.

The law isn't nuanced enough to capture this, with good reason: you don't want a call about the safest action, made in good faith, to be illegal.
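Both halves of that reasoning reduce to two kinematic checks, and the clear-the-intersection check is exactly where the remaining-yellow estimate comes in. A sketch with illustrative numbers, not anyone's actual tuning:

```python
def yellow_light_action(v: float, d_stop_line: float, d_far_side: float,
                        yellow_left_est: float,
                        reaction_s: float = 1.0,
                        comfort_decel: float = 3.0) -> str:
    """v: speed (m/s); d_stop_line: distance to the stop line (m);
    d_far_side: distance to clear the far side of the intersection (m);
    yellow_left_est: estimated remaining yellow (s)."""
    stopping_dist = v * reaction_s + v * v / (2 * comfort_decel)
    can_stop_comfortably = d_stop_line >= stopping_dist
    can_clear_in_time = (d_far_side / v <= yellow_left_est) if v > 0 else False
    if can_stop_comfortably:
        return "stop"          # the letter-of-the-law choice
    if can_clear_in_time:
        return "proceed"       # stopping would be abrupt; clear instead
    return "brake_hard"        # dilemma zone: least-bad option
```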
This is really cool, but the environment is also really simple, and I think we're definitely at least 15+ years out before self-driving cars can handle somewhat challenging situations as well as humans.
Just try to put one of these vehicles in a situation with varying road width, no markings, and snow with no sticks to mark the edges, so you really have to pay attention to where the road actually is. What would this do if you meet a car on such a road? Try to figure out who should go back, and maybe back up to the last place where it's wide enough? Do random tests to check for grip every now and then? It also needs to know whether the road is salted, understand if the salt is working, and so on and on and on...
"we're definetly at least 15+ years out". Similar statements were made about Go the year it was solved. AV is a vastly harder problem and requires new techniques to get there, but AI can progress can happen any time.
> Self-driving car videos tend to be published sped up, so you don't see the mistakes.

The note in the top-right says it's 2x.
> testing in the real world might save more lives overall

I can live with it. Human drivers annoy me so much that throwing the dice on autonomous cars is not a big stressor to me.
I forgot how annoying it was.
> Certainly, Elaine Herzberg wouldn't have been killed by that car if it wasn't there, eh?

Don't test killer robots on the public.
> most robotics people know it is just smoke and mirrors meant to make the regulators go away

Are you saying that the numbers are inaccurately reported, or accurately reported but just don't tell the whole story?
> door-to-door small-group spikable public transit

What does spikable mean here?
> I'd like to see a demo like this on snow covered roads, with no lane markings visible.

But humans can't drive well in those situations either. Why are you asking for something better than humans can do?
Ask Canadians, Swedes or any other people living in a location with long winters.
Did you see that five-lane intersection going over a tram lane? I myself had no idea where I would have driven there.

I am out of my depth in terms of the topic we are discussing, so I might be quite wrong.
> I often find myself slowing down to a stop then awkwardly realizing I'm stopped with multiple seconds of yellow remaining and drivers honking behind me.
Maybe my brakes (or reflexes) are just too good?