> Therefore, all safety-critical decisions must be made by the onboard computer alone.
Why require that all safety-critical decisions be made on board, rather than the seemingly simpler assumption that only some or most decisions need to be made on board, since a remote operator or backend service could be available much of the time? It doesn't seem unreasonable to me to have a single operator remotely monitoring multiple vehicles that are autonomous under ideal conditions (driving along a straight road in good weather) and taking over when necessary. Say you would only use such a system on major routes with solid satellite visibility, not on last-mile routes hauling heavy equipment down a dirt road in the boonies, or something like that. Maybe this wouldn't work, but it's not obviously ridiculous to me, so I wonder why he starts out by saying the truck must be fully autonomous with no human ever in the loop.
There are a lot of differences between passenger vehicles and trucks. The physical dynamics of articulated vehicles, the mission profile, and social dynamics come to mind. How does a robotruck place cones or flares while it awaits rescue?
Personally, I expect autonomous trucking to be a force-multiplier for humans who were formerly drivers. Such trucks will have sleeper cabs and the human will be there to maintain the vehicle and handle the long tail of tasks (filling tires, cleaning, refueling, repairs, rigging, whatever). You'll get 24-hour operation out of a single human employee because they'll be able to sleep and do other things most of the time. Maybe they'll work a second job as a remote call-center operator.
For lidar, range is also limited by power limits and basic physics, which can't be overcome by throwing more money, power, or device size at the problem. Some dependencies (on semiconductor manufacturing tech or better signal processing) might be solvable with more money.
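A rough sketch of why more transmit power buys so little, assuming the simplified lidar range equation for a diffuse (Lambertian) target where the return signal falls off with the square of range; the function name, reflectivity, aperture, and detector-sensitivity numbers here are illustrative placeholders, not figures from the article:

```python
import math

def max_range_m(tx_power_w: float, min_detectable_w: float = 1e-9,
                reflectivity: float = 0.1, aperture_m2: float = 1e-3) -> float:
    """Simplified lidar range equation for a diffuse target:
    received power ~ tx_power * reflectivity * aperture / (pi * R^2),
    so maximum detection range scales only with sqrt(tx_power)."""
    return math.sqrt(tx_power_w * reflectivity * aperture_m2 /
                     (math.pi * min_detectable_w))

base = max_range_m(tx_power_w=1.0)
doubled = max_range_m(tx_power_w=2.0)
print(f"{base:.0f} m -> {doubled:.0f} m")  # doubling power buys only ~41% more range
```

Under these toy assumptions, doubling emitted power (which is capped in practice anyway) extends range by a factor of sqrt(2), which is why the limit can't simply be bought away.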
Also, I feel like there's a lot of talking past one another in these conversations: one person will say "Let's see an autonomous truck shipping hazmat to Pittsburgh in February with freeway lanes shut down" and another will reply "that's a rare instance." But I really don't feel like society will accept anything other than trucks and vehicles that can operate under all conditions, with greater safety than the safest human driver. We tolerate human failures, but using them as the benchmark for autonomous systems would be perceived as unethical, because autonomous systems are deliberately designed and any failure would be seen as an intentional oversight or error, and no one at Waymo or Tesla or wherever is ever going to be charged with vehicular manslaughter for an autonomous vehicle error. We'd demand a much higher standard because these companies don't really have any skin in the game, except for financial penalties, which we now understand don't deter anything. My observations are only moderately related, but I'm anticipating the same well-trod talking points coming up and want to address them.
That’s not the argument being presented though. For example Waymo claims to exceed human performance by a large margin: https://waymo.com/blog/2023/12/waymo-significantly-outperfor...
(Again, one may disagree about the methodology or the conclusions of the study. Just want to point out it’s not the argument being presented.)
Driving a car in inner Copenhagen is a stressful situation due to the insane number of cyclists you have to watch out for; see https://youtu.be/FaySp9i2zMA?t=113
Many examples of high pedestrian density in https://youtu.be/P6sw4EKegp4
This analysis seems really suspect to me. Any clarification would be appreciated.
Also recommend checking out the citation. It is an accepted value used in American highway design.
As an additional safeguard, you can have your trucks go into a 'safety' mode when the connection becomes spotty or when the remote operators get tied up with too many other trucks.
'Safety mode' could mean slowing the trucks down or even stopping some of them, and more generally letting the autonomous systems err on the side of caution more often. A rough sketch of such a policy is below.
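A minimal sketch of what that degraded-mode policy might look like; the mode names, latency threshold, and trucks-per-operator numbers are all hypothetical placeholders, not anything proposed in the thread or the article:

```python
from enum import Enum

class Mode(Enum):
    NORMAL = "normal"
    REDUCED_SPEED = "reduced_speed"
    CONTROLLED_STOP = "controlled_stop"

def choose_mode(link_latency_ms: float, trucks_per_operator: float) -> Mode:
    """Pick a conservative operating mode when remote supervision degrades.
    Thresholds are illustrative, not calibrated values."""
    if link_latency_ms > 2000 or trucks_per_operator > 20:
        return Mode.CONTROLLED_STOP   # effectively no oversight: pull over when safe
    if link_latency_ms > 500 or trucks_per_operator > 10:
        return Mode.REDUCED_SPEED     # spotty link / busy operators: slow down
    return Mode.NORMAL

print(choose_mode(link_latency_ms=800, trucks_per_operator=6))  # Mode.REDUCED_SPEED
```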
Regarding the minimal risk condition / fallback behavior, a central point of the article was that slowing or stopping are almost always unacceptable on freeways because of the speeds involved.