I am really worried by the fact that I am the unwilling tester in the Great Driverless Car Experiment.
Tradition has it that when you load-test a new bridge, you put the architect underneath. I feel like that, except I didn't design these driverless cars; somebody else did. As an experienced software engineer, my trust in the software in these cars is pretty low. And yet they are being tested on me, because I could be the one getting killed.
I think we should set a much higher bar for allowing those cars on the streets, rather than "it kinda works, so let's roll with it".
My wife's 96-year-old grandfather still has a driver's license because he passes a ridiculous vision test with one of his eyes. He can't read a newspaper or see a television well enough to enjoy sports (previously his favorite thing). He says he's memorized the vision test at his doctor's office, and when asked, the doctor refuses to file a report (or whatever procedural step is required to revoke his driving privileges). He has totaled a car roughly every six months for the past five years. Luckily for other drivers, it's always something like hitting the concrete base of a light pole in a large parking lot. Even so, he and his wife have been hospitalized each time, because 1) age and 2) it sometimes happens at over 35 MPH.
All to say, the core assumption that the roads are safe is completely false. I'd gladly share the road with beta AI versus other drivers like this. Being an occupant in the beta AI is completely optional.
A huge percentage of elderly drivers shouldn't be on the road. We (the US) seem, as a society, to have decided to just live with that rather than trying to make it easier to live here without a car.
If there's one thing that seems to be much more forgiving in the US, it's car insurance. How have his premium and excess not become unmanageably expensive? How has he not been denied insurance altogether?
UK car insurance feels more like US health insurance. High premiums, and heaven forbid if you get into an accident.
That teenage driver goes through a training and certification process, and we have ways of stopping them from driving if they fuck up too badly. They also have liability for their actions.
Most of those are lacking or at least inconsistent, for AI driving.
I think elderly drivers are probably more dangerous than teenagers. Lately, whenever I've seen a really reckless driver, it's been someone very elderly who looked totally overwhelmed by the experience of driving. I've seen old people struggling to keep up with traffic, driving in bike lanes, or driving on the side of the road thinking it was a second lane... There should be more frequent driver exams (and car inspections) in the US.
Aside: I'm happy to see that teenage drivers today have a lot more required training than I did when I got my license. At least in this US state, and likely most of them.
I'm also really happy about the continued advances in auto safety technology.
A teenager is worried about: getting killed, killing someone else, hurting someone, wrecking their parents' car, losing their permit. I don't trust engineers working on the models that drive these cars to teach a computer the difference between a human and an open road. A teenager can discern these with no effort. I trust a teenager to care about not hitting me and try their best, and to choose a ditch over hitting someone when the decision time comes.
Depends on the country, but here teenagers are drilled so much during their learner phase, and on cars with manual transmissions, that they are probably the safest drivers on the road.
You didn't design the aircraft that you fly on, which are incredibly computerized. Also the drugs that your doctor prescribes. Etc.
You put your faith in myriad bureaucracies and trust networks that you have no hope of understanding. Driverless cars are just one more added to the list.
The difference is that airplane software standards are by far the most rigorous in the world, so rigorous that most software developers are shocked at the level of rigor employed, whereas driverless cars have exactly zero software standards with respect to their safety or fitness for purpose. Comparing the two processes is an outrageously fundamental category error; the standards involved are something like six orders of magnitude apart.
As an example, just look at the 737 MAX, an airplane viewed as a literal deathtrap and a complete indictment of airplane software development, which had 2 crashes in 400,000 flights. At an average flight distance of ~500 miles, the 737 MAX, an airplane literally multiple orders of magnitude worse than any other and a complete failure of the approval process, still only resulted in about 1 fatality per 100 million person-miles. That is safer than the driving average of 1.1 fatalities per 100 million person-miles. What is viewed as by far the most atrocious outcome in the airplane sector, a system literally hundreds or thousands of times more dangerous than the average, is above average in the car sector; that is how wide the gap is.
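For what it's worth, that arithmetic roughly checks out. A quick sketch, assuming an average of ~170 people aboard each flight (the occupancy is my assumption, not given above; the 346 deaths across the two crashes are public record):

    # Sanity check of the fatality-rate figure from the numbers above.
    flights = 400_000
    avg_flight_miles = 500
    avg_occupants = 170          # assumed average persons aboard
    deaths = 346                 # Lion Air 610 (189) + Ethiopian 302 (157)

    person_miles = flights * avg_flight_miles * avg_occupants
    rate = deaths / person_miles * 100_000_000
    print(f"~{rate:.2f} fatalities per 100M person-miles")  # ~1.02, vs 1.1 for driving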
And this is ignoring driverless car software, which is being developed to even lower standards than regular car software and which has no actual independent quality requirements. Moreover, far more lives are at risk with driverless cars, so they should be developed to standards probably around 1000x better than airplane software, which puts them around 0.0001% of the way to an acceptable level of quality even if the software were already as reliable as regular car software.
Neither aircraft nor drug development norms look anything like the norms of software development. I actually find a substantial difference in my trust level between a traditional car company that's building self-driving capabilities (e.g. Mercedes) and a company that fashions itself as a software company that happens to build cars (e.g. Tesla), for exactly the reason of the norms and behaviors that each org would encourage.
Furthermore, at this stage the driverless cars are unlikely to be creating demand; they are replacing human drivers. Most of us have no hand in verifying the safety of other drivers on the road. I happen to believe it is bonkers that we generally test drivers once at the age of 16, and somehow that earns a person driving privileges for at least 50 years before being retested, if not for their entire life.
Presumably, you choose whether to ride that airplane or take those drugs. You probably don't get to choose when someone else's driverless car runs you over.
> Driverless cars are just one more added to the list.
As others have pointed out, aircraft certification (while it has its problems) is very rigorous. Driverless car certification, up to date, is basically nothing.
But the other aspect is that aircraft autopilots are solving a far easier problem. Developing a car autopilot for busy streets is many orders of magnitude more difficult than an airplane (or boat) autopilot.
> Driverless cars are just one more added to the list
It's a qualitatively different change than any of the above (driven by a messianic view that AI is here and now and about to change everything), plus an instance of the https://en.wikipedia.org/wiki/Boiling_frog apologue.
This is one of the problems with technology that Ted Kaczynski wrote about. You can't opt out.
> Even the walker’s freedom is now greatly restricted. In the city he continually has to stop to wait for traffic lights that are designed mainly to serve auto traffic. In the country, motor traffic makes it dangerous and unpleasant to walk along the highway. (Note this important point that we have just illustrated with the case of motorized transport: When a new item of technology is introduced as an option that an individual can accept or not as he chooses, it does not necessarily REMAIN optional. In many cases the new technology changes society in such a way that people eventually find themselves FORCED to use it.)
Your problem is the same as the walker's. The walker has opted out of the automobile, but still has to breathe its pollution and risk being hit by it.
Maybe now is the time to start raising the bar on human and upcoming automated drivers.
Both could have to pass a driving test that includes challenging, difficult situations. Day, night, rain, snow, fog, heavy traffic, missing lane lines, simulated pedestrians (including small children) leaping out between parked cars, and more.
For further difficulty, both kinds of drivers could have to pass this within an x% margin of the fastest safe speed at which the maneuvers can be completed; no cheating by slowing down to traffic-disrupting speeds.
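Spelled out as arithmetic, the criterion is simple (a sketch; the 10% margin below is purely illustrative, since the proposal leaves x open):

    # Pass/fail for one maneuver: complete it at no less than
    # (100 - x)% of the fastest speed that is still safe.
    def passes_maneuver(candidate_speed_mph: float,
                        fastest_safe_speed_mph: float,
                        margin_pct: float = 10.0) -> bool:
        floor = fastest_safe_speed_mph * (1 - margin_pct / 100)
        return candidate_speed_mph >= floor

    # A maneuver whose fastest safe speed is 30 MPH: 27.5 passes at a
    # 10% margin; a traffic-disrupting 20 MPH crawl does not.
    print(passes_maneuver(27.5, 30.0))  # True
    print(passes_maneuver(20.0, 30.0))  # False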
The reason to hold humans and self-driving cars to different standards is that humans have already "class qualified" under... well... every condition that we drive in, with on the order of a billion driving-years of total experience documenting the limitations of human driving, far in excess of 10 million driving-years added per year, and a considerable amount of study across many decades.
Though there are systematic flaws in human drivers, they're reasonably well known given the level of experience we have; there are many mitigations in place in both our cars and our road designs, and many of these flaws are intuitively modeled by the other humans on the road, as well as by pedestrians, allowing for real-time compensation.
The driverless cars, on the other hand, we have radically less experience with as a class, and there are many reasons to believe they will behave less self-consistently than human drivers do as a class, varying from vendor to vendor (e.g. due to sensor modalities) or even from version to version. We know of some systematic faults in their operation which cause them to behave in ways that are highly surprising to humans, and we should expect to find many more as we gain experience.
To the extent that the properties of self-driving cars have been formally studied at all, it's been primarily by their creators, who are obviously self-interested. Some of the practices used in early supervised self-driving also actively undermined understanding of their safety (e.g. executing a handover to the human once the car was already in a nonrecoverable unsafe state and then attributing the accident to human driving, since at the time of the collision the human was back in charge, if only by a hundred milliseconds; or companies committing perjury to use the DMCA to force down unfavorable videos created by others).
This isn't an argument that we might not benefit from more rigorous driver testing or even from testing ideas that originated from self-driving car testing... but parity between driverless car tests and human testing shouldn't be a goal, particularly not when so many orders of magnitude separate our human driving experience from driverless experience.
I suspect that a challenging driver assessment applied to both AVs and humans would be the most rapid driver of AV adoption possible... Just remember: most drivers think they're above average.
Curiously, autonomous car technology also makes it feasible for us to rate human drivers -- it's like having a professional watching over your shoulder all the time. IIRC, Tesla used something similar to determine the rollout of new FSD capabilities.
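A sketch of what that kind of rating could look like, in an events-per-mile style (the event types, weights, and thresholds here are all invented for illustration; this is not Tesla's actual Safety Score formula):

    # Hypothetical telemetry-based driver score: penalize risky events
    # per mile driven, clamped to a 0-100 scale.
    def driver_score(miles: float, hard_brakes: int,
                     tailgating_minutes: float, near_misses: int) -> float:
        weighted_events = (hard_brakes * 1.0
                           + tailgating_minutes * 0.5
                           + near_misses * 5.0)
        penalty = 100.0 * weighted_events / max(miles, 1.0)
        return max(0.0, 100.0 - penalty)

    # A fairly clean 500 miles scores ~98.
    print(driver_score(miles=500, hard_brakes=3,
                       tailgating_minutes=12, near_misses=0))  # 98.2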
Personally, I think anything reliably above average human driving skill should start to see more uptake of this new technology. Driverless cars have killed and will kill people. Sometimes it will be due to programming errors. But on the whole, they are, in many cases, on par with or better than human drivers. I think it's good to be cautious, and for the industry to be cautious. But I don't want to see people freaking out over a single accident because it involved an AI when there are thousands of non-AI accidents every day. Be consistent in judging them: not against some mythical 100% success rate, but against the criterion of whether they are better/safer than most drivers.
They've been driving for a while now. Can't you use the data from their existing rides to form an opinion? All you're going on right now is fear. Those cars are already better than the majority of drivers.
I agree with your basic idea, but I don't think there's really any way around this - at some point these systems will have to be tested in a real world environment.
I guess the question is: how much higher should the bar be set? And if we set it substantially higher then how much longer will it take to improve the performance and safety of these systems?
This is the most challenging transition with tech, both in terms of the outcomes and in the difficulty of defining just what this trend is.
Cybernetics is* the ~insertion of a governance layer into our social contract, one we never agreed to relinquish our agency to... It just sort of happened, and we deal with the outcomes. The more disconcerting aspect to me is the latter. Suddenly QR codes are everywhere, good luck getting around without a smartphone, and if that driverless car sideswipes you and doesn't stop, call someone, I guess?
If this area/trend interests you, there is a lot of reading available from the weird (CCRU, early Nick Land) to the fairly mechanical/academic (MIT, Norbert Wiener), to the openly behavioral governance-themed (Stanford Persuasive Technology Lab, B.J. Fogg), to adversarial-to-tech (The Cybernetic Hypothesis).
* a so-so definition of the space, but it's the best I can do to sum it up.
> I am really worried by the fact that I am the unwilling tester in the Great Driverless Car Experiment.
You’re basically a non-consenting tester for every piece of innovation that society ever has or ever will come up with, in one way or another. It’s an unavoidable feature of living on the same planet as 8 billion other people. Perhaps the safety of this technology does need to improve, or perhaps not. But there’s no level of improvement that would ever satisfy the outrage you’ve expressed at something new existing in the same place as you.
> You’re basically a non-consenting tester for every piece of innovation that society ever has or ever will come up with, in one way or another. It’s an unavoidable feature of living on the same planet as 8 billion other people.
Why do you say it's unavoidable? Regulations could certainly keep these tests off of public roads.
You could say the same for regular cars when they first arrived on the road. Saying there should be a higher bar is useless unless you are able to say exactly what that bar should be.
People did say exactly the same for regular cars. The result was a massive advertising campaign to establish the concept of jaywalking, and to make the street the sole domain of the car.
I can easily imagine a similar campaign to label pedestrians as irresponsible for walking outside without an ultrasonic emitter, or a lidar reflector, or whatever integrates with the self-driving system. Whatever shifts the responsibility away from the car and onto the deceased.
Let's start with setting the bar at "a driverless car operating only at night should have its lights on". And since Cruise so effortlessly limboed right under that one, we should probably demand a root-cause analysis and stop their operation immediately until it's done. God forbid they start disabling the brakes like Uber did, "because they cause disengagements".
What? No, not at all. The bridge has to be able to handle the max load, otherwise it's not safe. If the load test causes any issues with the bridge, that is a massive red flag.
>I think we should set a much higher bar for allowing those cars on the streets, rather than "it kinda works, so let's roll with it".
Indeed. The minimum before any self driving car can be put on the road is that the developers, managers, executives, and major investors and their immediate families have to spend 8 hours running back and forth across a track on which 40 self driving cars per mile of track are running laps at a minimum of 50 MPH. Additionally there need to be things like rains of nails on the road, sudden floods of water, packs of dogs, tumbling tumbleweeds, and other hazards. If at any point the average speed drops below 50 MPH, or if the cars hit anyone or hit each other, the car can't go on public roads.
These robots are massive and fast; they are killers. Yet as long as they look like cars, we let them run loose on public streets. It's kinda nuts. (Arguably car traffic as it's set up now is kinda nuts too, but that's a different argument.)
The obvious thing to do is start with something like a golf cart that is limited to, say, 5mph top speed, and work your way up.
When the device is a giant robot arm or a laser cutter, there are huge amounts of regulation-- you may have to get permission from your municipality to operate it even within the confines of your own facility.
Make it look like a car, however, ...
Cars and roads are kind of nuts, but they're established lunacy at least.
Drunk drivers are killers. People that text and drive are killers. People that get kicked by their kid and turn around for a second to yell at them are killers.
I'll take an unfatigable, undistractable computer that literally has eyes in the back of its head, and on the sides of its head, and the front, and can bounce radar under cars, etc., over the pitiful example of a "safe (human) driver" we have now.
Plus, the "beta" testers treat it like some kind of fun game. They're like, "whoops, almost swerved into traffic; whoops, almost ran into a pedestrian crossing the street."
> I am really worried by the fact that I am the unwilling tester in the Great Driverless Car Experiment.
Welcome to America, where you are an unwitting tester of many private ventures that have been signed off on by the American government on your behalf.
Remember that complaining about these tests and demanding more control means that you are a communist. Unless of course you don't mind waiting around to be harmed by such tests, and are either able to sue, or told that you had no reasonable expectation of safety.
This is not exactly irrelevant, but there is some indeterminate weighting factor that you would have to apply for the bad taste of algorithms killing people vs people killing people.
Not that I'm an impartial judge... At the end of the autonomous car rainbow waits more parking, more driving, and fences to keep people out of streets.
As someone used to riding motorcycles in the US, I wonder why headlights are not always on for all vehicles at this point. LED bulbs last a very long time, and there is little reason not to have them on during the day. I know, it’s much more fun to anthropomorphize the car in this case, but the simple solution is to just hard-wire them.
It's been required in some northern European countries for a long time. The practical reason there is that dawn and dusk can last many hours during much of the year at higher latitudes, creating a weird situation where it's not really clear when exactly you should have the lights on or off, in exactly the kind of conditions where a lot of accidents can happen if you don't have them on.
The simple rule of "always have your lights on" fixes that and it slightly improves visibility even in broad daylight. The only downside would be the increased power usage and the light bulb wearing out sooner. A minor issue a few decades ago when this became normal and a very minor one now that we have LEDs that last a long time without using a lot of power.
My rental bike (a Swapfiets) has the LEDs permanently on; there is no off switch. If you ride it, the lights are on. That's indeed how it should be. I can't think of any good reason to voluntarily reduce my visibility while cycling through potentially lethal traffic.
I've gotten so used to lights being on always that I classify cars with lights off as parked. I won't focus on them. I'm very surprised when they move.
It should really be hardwired, because people forget.
Headlights always being on was the biggest thing I noticed when I visited Iceland. It is so much easier to see vehicles even during a sunny day, they simply stand out much more. As a result, I always turn my lights on right after I get into a vehicle.
Mechanics need to be able to turn them off when they work on them or they’ll get blinded. Every car I’ve driven in the last 10 years seems to have an auto mode though.
I’d be surprised if the Chevys these are based on don’t have an auto mode. It could be that a fuse blew here.
I bet it is something boring, like a fuse blew, or maybe they still have the auto/manual lights selector even though the car is autonomous, and somebody misconfigured it.
You'd think 'are the lights on' would be a self-check it would perform, though. GM has to know that headlights go out occasionally, right?
It could also be that some bumbling meat-man clumsily knocked the switch out of the "Auto" position. The Bolt has a truly idiotic headlight switch which always indicates "Auto" even when it's been disabled by turning the switch counterclockwise.
In Canada it's programmed to reset to Auto every time the car is started, but not the U.S. model.
The same cars sold in Canada have had the tail and head lights turn on when it gets dark since September 2021. If the mechanic is scared for their eyes, they can just work in a well-lit area and the lights will not turn on.
It's required in Canada, but in the US, we tend to prefer our freedoms. There are also credible scenarios, say when someone is being stalked, when you would want to keep them off for security reasons; or, if you are parked in front of a house, or pulling into your driveway, perhaps you just don't want them to rake across the front windows and wake your sleeping children up. Many people won't buy a GM vehicle for precisely this reason.
When I had a GM car with daytime lights, they would switch off if you applied the parking brake. So a trick is to apply the parking brake enough that the lights go out (happens at the same time the parking brake indicator illuminates on the dash), but not enough to really grab the wheels. Then you can pull into a driveway with the lights off.
I'm also not sure why this is a thing. With all of the various sensors in cars, there are safeguards to prevent me from typing an address into the GPS while moving, so why can't there be something that turns your headlights on while driving at night above a certain speed?
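The interlock being asked for is only a few lines of logic. A minimal sketch, with invented signal names and thresholds (a real body controller is nowhere near this tidy):

    # Force headlights on whenever the car is moving in low ambient light.
    def headlights_required(speed_mph: float, ambient_lux: float) -> bool:
        MOVING_THRESHOLD_MPH = 15   # assumed "actually driving" speed
        DUSK_THRESHOLD_LUX = 400    # assumed "getting dark" light level
        return (speed_mph > MOVING_THRESHOLD_MPH
                and ambient_lux < DUSK_THRESHOLD_LUX)

    # Highway at night with the switch left off: the interlock overrides it.
    print(headlights_required(speed_mph=65, ambient_lux=1))       # True
    print(headlights_required(speed_mph=65, ambient_lux=10_000))  # False (daylight)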
On Saturday, I drove from NJ to MA and saw 3 vehicles on the highway at night with only daytime running lamps, and one with no lights on whatsoever.
I haven't driven a modern car in a decade or more that didn't have light sensors in the dashboard to know when to automatically turn on the lights. They even turn on when the wipers are engaged. Every time you go through a tunnel you'll see headlights pop on automatically. My 2009 BMW is the oldest car I have with auto lights, though it doesn't have wiper sensors.
Many EU countries (especially those in the North) mandate them to be ON at all times when the car is moving. The cars sold in those regions have the light switch on Auto by default, which you can manually override in both directions.
What I find annoying is the people whose cars DO have headlights that are automatically "on", yet all the OTHER lights on the car are off.
It's common to come across one of these cars at night from behind - no visible tail lights, but when you pull up next to them, their headlights are on, yet dim (daytime running lights?).
Headlights do burn out. I'd rather not pay $1,000-$2,000 to replace mine, since they are LEDs built into the headlight assembly. I'd much rather have auto headlights be mandated. Even that can be a pain in the ass if you have HID bulbs and travel in areas where they turn on and off often, reducing their life.
I remain confused on how these cars are allowed on the road. Did they pass a driver's license check? Who's responsible? Who gets the ticket? In this instance, you can see that the car performed a dangerous maneuver by running from the cops, with its lights still off, and crossing an intersection, which is then spun by the company as finding a "safer" spot to pull over at. Even though, it found a spot essentially identical to where it was. It also took off pretty fast.
Yes. That would be the AV permit that the cars operate under. Also as they are programmed to obey all road laws, it's exceedingly unlikely they will cause any moving infractions.
> Who's responsible? Who gets the ticket?
Cruise. Their name is on the AV permit, it's their insurance.
> spot essentially identical to where it was.
Before it was in a lane of traffic next to a parklet, after it was pulled over partially into what looks to be a parking spot. The SFPD officer also was no longer next to the vehicle.
> how these cars are allowed on the road
At the end of the day the argument is that these cars are safer than the average driver. Having spent a lot of time in SF, I've seen a lot of these cruise vehicles moving about. I remember watching them in the early days in the Dogpatch, when an errant cone would end up with them being stuck until manual action was taken. Slowly they got better and smarter.
Yes, it's a hard transitional time. Mistakes may happen, but the same is true for humans. And not all AV technology is the same. Cruise has limited itself a great deal so it doesn't try to take on the entire world at the same time. Just the streets of SF, which are hard enough.
How many Cruise traffic accidents have you read about? How many people injured or killed?
Isn't OP about a moving infraction it caused, despite its programming?
Arguing the probabilities of an outcome is an engineering-centric approach to selling these cars to the public, and it doesn't land well, IMO, judging from others in this thread.
It makes a false equivalence between how we value human agency's role in outcomes and something other-than-human. Most people seem to prefer the risk outcomes of "something I decided" over those of a faceless control system.
Where do I apply for a permit to reassign culpability for my driving to a corporation?
> At the end of the day the argument is that these cars are safer than the average driver.
The last data I saw said otherwise, and that data was primarily collected in ideal conditions. That has been a few years, but these companies are not forthcoming with their data.
Thought experiment: imagine a particular self-driving algo were to do something that would warrant a human driver getting their driving licence suspended. Would that mean all the cars running that same algo would be simultaneously suspended, or just that one car?
Waymo's marketing would imply they think the whole fleet should be suspended in this case. They call their system the driver, and seem to anthropomorphise it.
But more practically: what could a self-driving car do to get its licence suspended? AFAIK that's a pretty rare punishment outside of two cases: drunk driving and street racing.
That is a very intriguing scenario indeed. Due to the non-deterministic nature of many self-driving systems, I'd reckon a broader investigation would have to follow.
Should that investigation reveal a reproducible behaviour, a recall or general suspension of vehicles running that particular software version would seem like a reasonable response. Otherwise suspending just that one car could be an option.
It's uncharted territory both in terms of the legal system and w.r.t accountability so I guess no one has an answer to that yet.
That's easy and obvious: The whole fleet is "grounded" (or at least the self-driving feature disabled) until the bug has been identified and fixed. If you ask me, only a single car misbehaving is actually much more alarming than if the bug manifests in all cars of that type.
In case no one is specifically identified for the infraction, doesn't it come back to the car owner?
Not in the US, but that's how it would work here for illegal parking, or if the driver fled from the cops and the car is never found, for instance. Even if the owner doesn't hold a driving license, that's just where the buck stops. The owner is of course free to try to shove it somewhere else, but that's on them to make it work.
Parking citations attach to the car (as does civil collision liability generally), but moving violations can only be levied against the driver. Which is how some red light camera fines are avoided, if the driver isn’t clearly identifiable by the camera.
What worries me a little bit is how this can shift blame further onto other (more vulnerable) road users once such vehicles become more common, further solidifying the notion that roads are for motor vehicles and for motor vehicles alone.
I'd say actually less so. As a pedestrian I'd be much more confident stepping in front of a self-driving car, than stepping in front of a human-driven car.
So much so, that I've read about people's fears that self-driving cars will be subject to bullying by pedestrians (and other drivers).
Insurance doesn't pay for crimes. You cannot buy "speeding insurance" to pay your traffic tickets, any more than you can hire someone else to go to jail for you.
I find the explanation offered by Cruise amusing: the car supposedly "figured out" that it is in a dangerous spot to be pulled over and "tried to move to a better one" (extremely complex behavior), while the actual reason for the car being pulled over was not having its headlights on (seemingly a relatively simple bug).
Perhaps the lights are not even inside the loop of the Cruise hardware and software. They may rely on the automatic lights that GM provides from the factory. Do they even have visible-light cameras in their system?
The EU has a regulation requiring new cars to be equipped with DRL capability; that's all. Laws requiring the use of lights in the daytime exist in some countries but are completely unrelated to the EU.
No. As you have cited, there is a law saying vehicles must be equipped with these lights, but it's up to national regulations to decide when they must be used.
Just my anecdote: I saw a Cruise car trying to turn left from the middle lane on Franklin.
Came to a complete stop, bringing traffic to a halt, and waited for the cars on its left to clear out, including anyone who went around, then cautiously moved into the left lane and turned.
Some kind of rude, selfish programming I’d be yelling at a person for.
I wonder if Cruise or other AVs have a programmed threshold to "give up"? A human driver in the middle lane might decide to keep driving on and take a different route rather than hold those people up.
I have lots of anecdotes of the opposite. Many, many times I've been driving around SF and noted "Hey! That driver had an opportunity to be an ass and instead chose to be a polite, reasonable driver. Ohhhh, it's a Cruise car. That explains it." 9/10 times it's a robot car.
The intention is for the car to be a very boring, unremarkable driver.
This sort of behavior has been noted before and most incidents were thought to have been fixed by a merge ~6 months ago, but there can always be missed scenarios/interactions. Was this more recent?
Perhaps all human drivers are incompetent? In the USA there are 5 million crashes that involve insurance every year. Insurance companies say that the average driver will get into a crash (however minor) once every ~18 years.
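Those two figures are at least mutually plausible. A rough cross-check, assuming ~230 million licensed US drivers and an average of two drivers involved per crash (both assumptions mine, not from the insurers):

    # 5M insurance-involved crashes/year, ~2 drivers per crash.
    crashes_per_year = 5_000_000
    drivers_per_crash = 2.0            # assumed
    licensed_drivers = 230_000_000     # assumed, roughly the US total

    involvements = crashes_per_year * drivers_per_crash
    years_between = licensed_drivers / involvements
    print(f"one crash per driver every ~{years_between:.0f} years")
    # ~23 years, in the same ballpark as the insurers' ~18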
Rereading my comment, yeah, that's harsh. Not trying to promote road rage, though it is kind of fun to be loud from the safety of your car when no one can hear you.
The scalable solution would be to give law enforcement a direct override/pull-over button for any driverless car. I expect this to happen if driverless cars reach large volumes. Maybe I haven't thought this through enough, but I can't imagine a circumstance where you would want a driverless car to be able to ignore stopping directions from law enforcement.
I would assume they have to have it programmed in them to pull over.
They already need to, in case of ambulances or fire trucks behind them. That's a day-one feature: you see emergency flashing lights, you pull over until they pass you.
If they pull over behind you, then that's just a case of lights behind you: pull over, and the car remains stopped.
Flashing lights to signal to pull over is one of the most unbreakable of all driving rules, you can't release a self driving car that can't handle that case.
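That rule is small enough to sketch as a toy state machine (the states and sensor predicates below are invented for illustration, not taken from any real AV stack):

    # Yield and stop for flashing lights; stay stopped while the emergency
    # vehicle stays behind you, resume once it has moved on.
    def next_action(state: str, flashing_lights_visible: bool,
                    lights_stopped_behind_us: bool) -> str:
        if state == "driving" and flashing_lights_visible:
            return "pull_over_and_stop"
        if state == "stopped" and lights_stopped_behind_us:
            return "remain_stopped"     # we are being pulled over
        if state == "stopped" and not flashing_lights_visible:
            return "resume_driving"     # the emergency vehicle has passed
        if state == "stopped":
            return "remain_stopped"     # still passing; stay put
        return "keep_driving"

    print(next_action("driving", True, False))   # pull_over_and_stop
    print(next_action("stopped", True, True))    # remain_stopped
    print(next_action("stopped", False, False))  # resume_driving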
Same way humans do? It needs to recognize flashing lights and emergency vehicles to obey the law and get out of their way, and from there you add code to fully pull over and remain stopped as long as there’s a cop car with flashing lights staying behind you.
I assume it's more complex than that, but I can't see how a driverless car can reliably distinguish between being pulled over and an emergency vehicle trying to get past on a narrow road.
Remote override for authorities to stop the car, instruct it to pull over, etc. Alternatively, passengers could always have access to a "pull over when safe" button.
Apparently there is ZERO learning or forethought about risks that can be anticipated or suggested from other technologies!
The majority of fatal crashes involve drugs and alcohol and not people with "untested skills in challenging situations."
Pedestrians and motorcyclists account for a little less than 1/3 of all fatalities.
Young men are 8x more likely to die in a car accident than their female counterparts.
This "user-hostile" mode of thinking about driving serves no one.
Stay inside in front of FAANG. You'll be safe there (yet still an unwilling tester).
Do you have any more info on this? It's a fascinating fact.
Watch more horror movies and you'll think of some.
https://www.youtube.com/watch?v=yZlYCSmtD_4
Couldn't you just turn off the car?
Older cars? Of course not.
It's been that way for over 30 years in Canada (since 1989 I think)...
I'd be curious to know how many more replacement headlights they see per car per year.
https://www.hondapartsnow.com/parts-list/2021-honda-accord--...
LEDs are a high-end feature even in new cars, never mind all the existing cars on the road.
Suspending just that one car:
1. Easy to implement
2. Doesn't punish fleet scale (compare: suspending the whole fleet means a fleet that's twice as large would be suspended twice as often)
3. Punishes the fleet owner in proportion to the frequency of the error
4. Is exactly correct when the cause is really a hardware problem specific to the car
Details depend on location / jurisdiction. Some Googling brought up e.g. https://www.dmv.ca.gov/portal/vehicle-industry-services/auto...
No
> Who's responsible?
Cruise and their insurance
I am sorry, I can't do that Dave.
So why rely on the automatic headlights at all? Why not just set the headlights on manually?
Old cars drive with low beams; new cars come with daytime running lights (DRLs).
https://en.wikipedia.org/wiki/Daytime_running_lamp#European_...
See https://trip.studentnews.eu/s/4086/77033-Car-travel-in-Europ... for a list (incomplete; Denmark also requires daytime lights).
Sounds like an inexperienced or incompetent driver who lacks confidence. Basically, the kind of driver that shouldn't be on the road.
And yes, that seems thoroughly impractical at scale.