One thing Waymo did really well was roll out their service extremely slowly. They knew they needed to build a whole lot of trust to get people to accept them. Can Tesla do the same thing? Can they accumulate a track record of being nearly perfect for several years before trying to scale up?
This decision was made at the height of the pandemic supply chain shortages, but then was never reversed when they could get sensors again. FSD will never work with pure vision and it's folly that Tesla / Musk insists that it will.
I’m willing to believe that machine vision would eventually become good enough to match or exceed human visual perception given the same inputs.
But humans in a car have a massive advantage over little cameras that no one seems to discuss much: we have two sensors (eyeballs) mounted on a servo (our head) that can move around and is looking through a truly monstrous aperture (the windshield). That aperture is equipped with fancy cleaning devices (wipers and cleaning fluid spray), and the car’s operator is motivated to clean the windshield and maintain the windshield, wipers, and spray system to be able to see.
A Tesla car has little tiny camera lenses that are every bit as exposed as the windshield but don’t have all the compensating machinery.
Go stick a pair of nice cameras on a three-axis servo mount with a range of motion of a whole foot (or a camera array and no servo), stick that two feet behind the windshield, train it well (use that massive parallax!) and I’d believe the result would be competitive in performance but definitely not cost. Also the car would lose an entire seat.
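To put rough numbers on the parallax point: stereo depth goes as Z = f·B/d (focal length × baseline / disparity), so a wider baseline means more disparity pixels per meter of depth and therefore less error. A toy sketch, with entirely hypothetical focal length and baselines (not any real car's camera specs):

```python
# Depth from stereo disparity: Z = f * B / d.
# f_px and the baselines below are assumed, illustrative values.
f_px = 1000.0  # focal length in pixels (hypothetical)

for baseline_m in (0.1, 0.3):  # tight camera cluster vs. a one-foot mount
    for depth_m in (10.0, 50.0, 100.0):
        disparity_px = f_px * baseline_m / depth_m
        # Depth error if the disparity estimate is off by one pixel:
        err_m = depth_m - f_px * baseline_m / (disparity_px + 1.0)
        print(f"B={baseline_m} m, Z={depth_m} m: "
              f"d={disparity_px:.1f} px, 1-px error ~ {err_m:.1f} m")
```

At 50 m, tripling the baseline cuts the one-pixel depth error by more than half, which is the whole argument for the wide mount.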
Or use radar and lidar and achieve super-human performance.
For what it’s worth, the military was and is fully aware that lidar and similar tech can outperform human eyeballs in “battlefield conditions”, and I’m aware of old DARPA projects to do things like pulsed laser range-gated imaging to see through fog and such. (You still get attenuation and scattering, but you can mostly disambiguate the additive signal from the fog from the stuff behind it.) Lidar can do something similar. Humans can move their head to acquire more data. Little cameras are at the mercy of the fog and can only use fancy image processing to try to compensate.
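The range-gating idea above is simple to sketch: fog backscatter comes from short ranges and so returns early, while the scene behind it returns later, so you only open the detector in a narrow window around the target's round-trip time. A toy illustration with made-up distances (not any real system's parameters):

```python
# Toy sketch of pulsed range gating. Fog backscatter arrives early;
# gating the detector around the target's round-trip time rejects
# most of the fog's additive signal. All distances are illustrative.
C = 3.0e8  # speed of light, m/s

def round_trip_time(distance_m):
    return 2.0 * distance_m / C

# Suppose the scene of interest sits near 60 m, with fog in the first 40 m.
gate_open, gate_close = round_trip_time(55.0), round_trip_time(65.0)
fog_times = [round_trip_time(d) for d in range(1, 40)]  # near backscatter
target_time = round_trip_time(60.0)

def gated(times):
    return [t for t in times if gate_open <= t <= gate_close]

print(len(gated(fog_times)))     # prints 0: all fog returns rejected
print(len(gated([target_time]))) # prints 1: target return passes
```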
Can you cite some practical failure scenarios, besides a Wile E. Coyote billboard, where cameras inherently won't be able to accomplish what lidar/radar do?
5 years ago I agreed that you'd need the other sensors. ML vision has improved so quickly that now I'm really not sure you do. From what I've seen, the system available to consumers also performs well IRL.
Lidar units suitable for this sort of thing used to be extremely expensive, but they’ve come down a good bit and will likely continue to. At this point it's hard to read it as anything other than obstinacy.
Costs. Lidar means extra sensors (and extra signal integration), though not having lidar requires a lot of extra video processing to recover information the lidar would straight up give you.
Allegedly, they believe a system with inputs similar to human vision is best suited for interpreting signals on roads designed for human eyes, and that conflicting signals from LIDAR make disambiguation challenging when combining sensor types. Per a recent Musk interview.
Even if Tesla gets it working, it will never be popular enough to justify their valuation. It's a niche product that will only compete with traditional taxis/ubers in urban areas, it has no chance of competing with car ownership at large, which is what Tesla's investors think it will do.
If Tesla only did trucking, that alone is a $1T industry. Now imagine they take a bite out of that, a bite out of Uber and Lyft, food delivery, transportation for an increasingly aging population, continue to make large investments in energy, robotics, etc. It's not that crazy of a valuation.
I'm not sure why anyone thinks it will compete with car ownership. You know what's better than an autonomous taxi? A private autonomous taxi that's just for you.
No, but I do expect Elon to make some nice tweets about Trump and the next thing you know, murder by robotaxis will be legal and any states attempting to regulate them will be prosecuted.
Is this due to reduced processing requirements, reduced sensor costs, or what?
And there should be some criminal liability since people have died.