whywhywhydude · 3 years ago
Tesla’s insistence on using vision alone is pretty dumb. Elon and Andrej Karpathy argued that since humans can drive using just vision, that’s how we should do it in self driving cars, but I think that’s a flawed argument. The proper question to ask would be - if given additional senses, wouldn’t humans use them for safer driving?
tzs · 3 years ago
Also, humans sometimes do use additional senses when driving. For example, I've had to make a left turn at a T intersection onto the cross street, where I had a stop sign and the cross traffic did not.

This was in California's central valley where it can get very foggy making it very hard to see traffic until it was almost in the intersection.

It was a quiet rural area though and by opening the windows on both sides and turning off the radio I could hear traffic quite a bit before I could see it. I'd sit at the stop listening until I'd heard a car or two go by to be sure that it was quiet enough that I could hear them. Once I'd calibrated my senses to that day's current conditions I was able to make the turn.

cjohnson318 · 3 years ago
Listening for cars in fog is great until a Prius comes along, and coastal areas of California are their natural habitat. But, yeah, if I had extra senses lying around, I'd use them for all sorts of things.
tshaddox · 3 years ago
But to be fair, that’s just because you couldn’t easily see in that direction. Presumably a Tesla’s cameras can see in all directions.

A better example would probably be hearing emergency sirens before there was any line of sight to the emergency vehicle.

dieselgate · 3 years ago
That’s precisely the reason why buses and maybe other commercial vehicles must stop at railroad crossings before proceeding
adamjcook · 3 years ago
In my view, the higher-level issues with the FSD Beta program are:

- A failure by Tesla to view the system that they are developing as what it really is - a physical safety-critical system and not "an AI". The two are distinct because, with a physical safety-critical system, the totality of the systems safety components cannot be fully expressed in software - neither initially nor continuously.

- To build on that point, Tesla is not allowing the Operational Design Domain (ODD), via a robust, well-maintained validation process, to determine the vehicle hardware as the ODD demands. Instead, Tesla is trying to "front run" it (ignore the demands of the ODD) by focusing largely on hardware costs. The tension from failing to recognize that is, in part, why Tesla has a long history of being forced to (somewhat clandestinely) change the relevant sensor and compute hardware on their vehicles while promising to "solve FSD" (whatever that means) by the end of every year since around 2015 or so.

judge2020 · 3 years ago
> and not "an AI"

But what is AI? If it's just "artificial intelligence", it effectively includes all programming with if/then logic gates based on program input.

brianstorms · 3 years ago
Also, humans and other animals that rely on vision have eyelids and tear ducts and are able to blink and get stuff out of an eye.

Poor, poor Tesla cameras freak out as soon as the sunshine is too bright or there's snow or rain or ice or mud in the way. You'd think if they're going to rely on vision, every camera mounted on the car would have a way to "squint" in blinding-light conditions, or "wipe" the lens or something when smudges, rain, snow, ice, mud, or bug-splat blocks the view. But then, Tesla is insanely cheap, and all that would require parts, and that would impact margins, and that would impact stock price, and so, this is why we can't have nice things ....

est31 · 3 years ago
Yeah, human eyes cover a huge dynamic range compared to traditional cameras, which have all sorts of issues with either too little or too much light (blooming, lens flares). Are a Tesla's cameras of the same quality as the human eye? Can they see in the dark just as well?
reaperman · 3 years ago
Would be nice if someone was prototyping things like that. Id imagine you could sell actual self driving cars for $200k or maybe a bit more. (Cost of a decent luxury car + 1-2 years of a dedicated full time chauffeur)
Closi · 3 years ago
But human eyes often look away from the road, close during a sneeze, etc., and have a very narrow viewing angle compared to a car surrounded 360 degrees by cameras... so there are plusses and minuses.

Human vision isn't that perfect for driving when it's looking at a mobile phone.

ikhatri · 3 years ago
Yeah it's shocking to me how many people overlook this. Even if we pretended that the Tesla sensor suite was capable of FSD, it's not FSD if you have to disengage when the lens gets mud on it. Sensor cleaning is an integral part of actually being able to have driverless operation. When I worked at Argo we spent a lot of time making sure that we were designing failsafe methods for detecting and dealing with obstructions (https://www.axios.com/2021/12/15/self-driving-cars-clean-dir...).
TaylorAlexander · 3 years ago
Also I think the biggest discovery is that the “brain” half of the human “eyes plus brain” system is extremely hard, and a “sensor which can see depth” probably makes the brain part easier.

That said, Tesla was never in a position to use LiDAR because it has generally been extremely expensive. Solid state lidars are supposedly now hitting low volume testing for 2025 production years. Tesla is a mass manufacturer not a self driving start up, so there was never really an option for them to offer LiDAR without an extremely expensive self driving package.

One thing however that was obviously wrong was Elon’s promises, which were extremely misleading and helped build his fortune thanks to the misunderstanding. (Assuming this inflated stock values)

With solid state LiDAR supposedly becoming available for $500 in the next two years (a promise we have heard since 2016, but one that finally seems to be coming true), we may never find out whether Tesla could have done it with pure vision - they could go with solid state LiDAR for forward facing driving in the next few years.

That said, over promising is going well for them. Perhaps they will just keep doing that.

jakeinspace · 3 years ago
If Tesla caves and ships LiDAR, I don’t see how they can get out of refunding all previously sold FSD packages. They can continue working on camera-only FSD but it will be immediately apparent that there’s a massive gulf in safety and performance between that and the LiDAR-equipped option.
amelius · 3 years ago
> One thing however that was obviously wrong was Elon’s promises, which were extremely misleading and helped build his fortune thanks to the misunderstanding.

I'm wondering why there isn't a law firm that wants to make a fortune by starting a class action suit.

firesteelrain · 3 years ago
Why is LiDAR such a big deal? Our Honda Odyssey has had LiDAR for Lane Assist, Brake Assist, etc for a while.

The sensor costs $125 on eBay.

cypress66 · 3 years ago
If you actually try FSD beta you'll very quickly realize that the vast majority (over 95% probably) of disengagements are because the planner is dumb, not because of vision. In other words, on the screen it sees everything correctly, it just decides to do something dumb.

So currently the vision stack is not greatly holding them back.

bottlepalm · 3 years ago
Exactly, FSD Beta is currently driving me around 90% of the time, and none of the problems are sensor/vision related, but decision related.

Updates are happening around once a month and the decision making is getting noticeably better.

bryananderson · 3 years ago
I think this misses the point.

The full system for humans is “vision + brain” and for self-driving its “sensors + planner”.

The Waymo/Cruise philosophy is that since we don’t know how to make the planner human-brain-level, we should shift as much of the load as possible to the sensors, where we have the ability to use things that humans don’t have, like lidar and radar.

To me, Tesla FSD going vision-only is a bet on the progress of AI planning models. If the planner reaches a human-equivalent level, then human-equivalent sensors are fine. Time will tell if this is a good bet, but so far it’s not.

Deleted Comment

eagsalazar2 · 3 years ago
This is 100% true. Lidar improves accuracy by millimeters up close, inches at 10-50 feet away, and feet beyond that. Vision's accuracy is already more than sufficient. Recognition and classification of objects is not improved at all (the part that matters). And, like the parent post said, Tesla classifies everything very well; the real issue is that the planner acts completely crazy all the time and is scary.
timthelion · 3 years ago
Humans have two eyes that move in sync and can measure distance using autofocus. They can both point at an object and autofocus to figure out how far away that object is. I don't see Teslas using moving binocular cameras with tenth-of-a-second autorefocusing to judge distances, so I don't see how it is the same thing. Of course we can play GTA with just vision, but I'd argue the average person crashes in GTA more than they do in a real car.
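For what it's worth, the binocular geometry being described boils down to simple triangulation: depth is focal length times baseline divided by disparity. A toy sketch (the focal length, baseline, and disparity values are made-up illustrative numbers):

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from stereo disparity: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: object at infinity or bad match")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 1000 px focal length, 12 cm camera baseline.
# A 6 px disparity puts the object at ~20 m; at 1 px it's ~120 m, so a
# single-pixel matching error at long range swings the estimate wildly.
print(round(stereo_depth(1000, 0.12, 6.0), 1))  # ~20.0
print(round(stereo_depth(1000, 0.12, 1.0), 1))  # ~120.0
```

Human eyes only have a ~6 cm baseline, which is partly why pure stereo depth perception degrades past a few tens of meters; the convergence and focus cues described above help at closer range.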
cainxinth · 3 years ago
> Of course we can play GTA with just vision, but I'd argue the average person crashes in GTA more than they do in a real car.

I’m fairly certain that people would drive more safely in GTA if their life was literally at stake.

eternauta3k · 3 years ago
Aren't the eyes basically always focused at infinity while driving?
treis · 3 years ago
That's due to the small viewing window, controller, and physics of GTA. In proper simulators with steering wheels and big screens they do fine.
vosper · 3 years ago
> Elon and Andrej Karpathy argued that since humans can drive using just vision, that’s how we should do it in self driving cars

I thought their argument was a little more like “since roads are designed for human vision, we should take a vision-based approach, too”.

Not saying it’s the right idea, just that’s how I thought they had put it.

ugh123 · 3 years ago
I never bought that argument, considering roads live in 3-dimensional space and our eyes and brain are constantly working to infer depth from 2D images. Seems like an extra hop that would be better cut out.
wolpoli · 3 years ago
It's difficult to accept Elon and Andrej's reasoning. I suspect that if one asked a team of engineers to research designing a self-driving car, the team wouldn't come back with the argument to use vision because "humans can do it with eyes." I'd expect a list of options with the pros and cons of each approach, along with an estimated timeline and cost.
cma · 3 years ago
More like: since lidars were $100,000 (at the time), we can't sell that, but we can say our advanced, any-day-now vaporware means we don't need lidar.

Lidar has since dropped in price by a lot, an order of magnitude or more.

anothernewdude · 3 years ago
I'm pretty sure their argument is "this will be cheaper, and therefore more profitable."
dclowd9901 · 3 years ago
That’s like saying “if cooking is just following directions in a recipe, we should just follow directions in a recipe.”

The result is subpar food because most recipes have a 1% problem called “seasoning”.

The “seasoning” of driving — the completely unpredictable and intuiting 1% of situations you find yourself behind the wheel where you just have to draw on your intuition and gut instinct — are the reason we need nothing short of AGI for _completely_ self driving vehicles.

I do think, though, trucking is ripe for AI disruption.

im_down_w_otp · 3 years ago
Humans also use inertial sensation, vibration, force feedback through the steering wheel, and their “cameras” are constantly changing their focus and position in space to construct a rich context of the environment.

Now, all of these things except the last one have some representation through an electronic or electromechanical sensor, but gluing them all together into what it takes to deal effectively with the intersection of vehicle dynamics and environmental dynamics is very hard.

v0idzer0 · 3 years ago
Humans also don’t have 8 eyes facing every direction at all times. They also get drunk/tired/impatient/angry, etc. The reality is the entire argument is silly. Both are very different, and the Musk/Karpathy argument is misrepresented here. Saying humans only use vision was a response to “it’s not possible with only vision”, not a statement that human vision is good enough and there’s no need to do better. The 8-camera surround is leaps better than human vision. Where they lack is processing the signal; the human brain does that better. But if you have better inputs (we do already) and you believe you can one day match on the processing part, you’ll one day get a much better result. One that’s suited to the vision-based roads we have now and scales to literally anywhere, not geo-constrained like Waymo.
justapassenger · 3 years ago
Their logic is on the same level as arguing that birds fly with just wings, so no idea why we’re playing with those silly engines.
leobg · 3 years ago
Which isn’t a dumb argument at all. It’s what got us flying in the first place, thanks to a brave German engineer called Lilienthal.
fwlr · 3 years ago
I am sympathetic to this view (I would really love to see just how safe it’s possible to get), but I think the Musk/Karpathy-style argument for vision-only self-driving is quite strong, and it only seems flawed because it has been incorrectly simplified as “humans do driving with ~only vision -> computers should do driving with only vision”.

The proper argument is “humans do driving with ~only vision -> roads are therefore universally designed and built to be driven by vision -> computers should do driving with only vision”. It is essentially a standards-based argument: since vision is the universal standard for driving, computers must be able to drive using just vision.

So vision is always going to be the core of self-driving. Why not augment with LIDAR anyway?

Well, in situations where vision and LIDAR are both right, you didn’t need LIDAR; in situations where vision is right and LIDAR is wrong, you didn’t need LIDAR and it potentially made you worse off; in situations where vision is wrong and LIDAR is right, you need to spend more on improving your vision; and in situations where both vision and LIDAR are wrong, you need to spend more on improving both, but improving vision is a higher priority. These are all the possible outcomes and none of them make a compelling case for investing in LIDAR.

ClumsyPilot · 3 years ago
> I think the Musk/Karpathy-style argument for vision-only self-driving is quite strong

> humans do driving with ~only vision -> roads are therefore universally designed and built to be driven by vision -> computers should do driving with only vision

What is 'should', is it a moral imperative? Is it a social obligation? Who made this argument, a catholic priest?

Where is the consideration of this argument from an engineering perspective - an analysis of advantages and disadvantages, a cost-benefit comparison? Where is the assessment that, for example, 50% of human crashes are due to poor visibility or spatial awareness, and a comparison of how well a computer handles those cases?

If I posted this vacuous, unsupported argument here, I would be laughed at, and rightly so.

But if Elon announces something, there is always 10% of the population willing to defend it, no matter how dumb it is.

kanbara · 3 years ago
radar is not lidar and is present on lots of vehicles that do L2/L3 driving except newer Tesla. optical sensors do not inherently tell you distance as a function of their sensing, whereas radar does.

a vision only approach _may_ be possible at some time, but only with a strong computational model of the human brain and thought process.

also, most people drive poorly— i wouldn’t say vision is the be-all-end-all of autonomous driving. it’s also clear that waymo and cruise have taken a full sensor based approach and are successful, whereas tesla is not.

gdiamos · 3 years ago
Current generation deep learning systems rely on supervised learning, which requires human labeling.

This is one argument for vision alone. It’s easier for humans to teach the deep neural network what to do if they both see and label the same thing.

It’s harder to build labeling systems that work on representations that humans don’t understand like point clouds or noisy depth maps.

That isn’t to say that other sensors including radar, gps, LiDAR, other spectrum, etc don’t help.

But you have to develop more complex labeling methods or move away from supervised learning.

bootlooped · 3 years ago
This doesn't sound like much of a barrier to me. If you're a human training the LiDAR system, couldn't you just consult the image or video to help label whatever the LiDAR is seeing?
nextaccountic · 3 years ago
Is it possible to transfer learning from vision to LIDAR? Maybe if it's possible to map visual images to LIDAR images and vice-versa (by running a car with both cameras and LIDAR and learning their associations)
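One way that association could be used (a hedged sketch, not anyone's actual pipeline) is cross-modal distillation: the camera model's predictions act as soft labels for a LIDAR model trained on the same scenes, so nobody has to hand-label raw point clouds. All names and numbers below are made up for illustration:

```python
import numpy as np

# Random stand-ins for real model outputs on 4 objects seen by both sensors.
rng = np.random.default_rng(0)
teacher_logits = rng.normal(size=(4, 3))  # 4 objects x 3 classes, vision branch
student_logits = rng.normal(size=(4, 3))  # same objects, LIDAR branch

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Distillation loss: cross-entropy of the student against the teacher's
# soft labels; minimizing it pulls the LIDAR branch toward the vision branch.
p_teacher = softmax(teacher_logits)
loss = -(p_teacher * np.log(softmax(student_logits))).sum(axis=-1).mean()
print(f"distillation loss: {loss:.3f}")
```

In practice you would backpropagate that loss through the student network; the point is just that paired camera/LIDAR driving data gives you a supervision signal without new human labels.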
neuronexmachina · 3 years ago
Isn't the typical training data used in self-driving basically things like object labeling/segmentation and motion prediction? I'm not sure why that would be significantly different for visual vs depth-map data.
add-sub-mul-div · 3 years ago
> Elon and Andrej Karpathy argued that since humans can drive using just vision

Is this maliciously specious or am I missing something? I drive using vision plus decades of life experience and all the tacit knowledge, judgement, and reasoning ability that comes from that. We have not reproduced any of that with math, and getting/stalling 90% of the way there with mimicry is not good enough.

saghm · 3 years ago
I don't think it's super surprising that someone who sells a product called "Full Self Driving" that isn't fully self driving would also happen to lack rigor in their scientific claims.
judge2020 · 3 years ago
It's obvious that they're talking about the input ("sensors") humans use and not presenting an exhaustive list of the things required to drive.
williamcotton · 3 years ago
It has more to do with training a neural network. The environmental cues for driving on a road are optimized for human vision. This becomes important when you think about it from a machine learning perspective. Fewer inputs are better for many reasons. Non-visual inputs will sometimes be in disagreement with the visual inputs during training, which leads to a worse model.

If everyone could take some deep breaths and press pause on their emotional response to Elon Musk (and not assume everyone who happens to agree with him has a Musk tattoo), then they would find plenty of rational arguments from an engineering perspective.

andrewfong · 3 years ago
I'm still annoyed they got rid of the ultrasonic sensors on the latest model. I test drove a 2022 (or maybe a 2021) and the park assist was pretty good. And then I get the 2023 delivered and there's no park assist for the first 6 months because they removed the ultrasonics but the vision-only software wasn't ready yet. A vision-based park assist finally came in via an OTA update but it's nowhere near as precise as the ultrasonic version was. Like, the estimates of how much distance is in front of me seem to jump around a lot more than I'm actually moving, and it sometimes reports it's degraded when trying to pull out of a tight spot.
birdyrooster · 3 years ago
Tesla drivers are the product, they train the machine and put up with garbage so that Elon can invest in his future profits. Self selected clones.
29athrowaway · 3 years ago
Humans not only use vision, they use sensor fusion. Combining what you see, hear, touch, etc. Your body can perceive acceleration, for example.

On top of that, you have theory of mind. For example, you have 4 cars next to you, all of them with opaque windows that do not let you see the driver:

A) a loud sports car with a bunch of modifications, decals and racing related stuff

B) a grandma car with cat related stickers

C) an unmaintained car with collision damage, and loud music coming from it

D) a family station wagon with a baby on board sticker and other family related stuff

Your mind will process what it sees and quickly assign each one of those cars a different personality. A and C will likely be perceived as riskier cars, B and D will be likely perceived as safer cars. You will avoid A and C and remain unconcerned about B and D.

The problem with the self-driving cars right now is that they only perceive the road as bodies that move.

senttoschool · 3 years ago
Great point. We use far more than just vision.

I just imagined myself driving without sound. That seems crazy to me. I need to hear cars, kids playing, etc.

And you're right that we subconsciously assign risk values to each car. A heavily modified BMW with decals? Could be an irresponsible young male adult trying to show off on the road. I should probably be prepared to brake or let him go first.

jo909 · 3 years ago
And assumptions like that are probably a great source of accidents; our mind needs to take shortcuts like that and isn't always right. Grandma's car got sold last week, and daddy is alone in the car, late for work, and will be racing to get in front of you.

I'd rather have a computer keep track of everybody just the same but with millisecond reaction to all changes. Something that I can't do lacking eyes all around and processing power.

flomo · 3 years ago
Reminds me when I drove a cab in my early 20s. I had a psychological bead on other drivers, I knew what they were going to do before they knew it. (And to some extent I still do, I just try not to be a dick about it.)

AI will probably eventually be very good at picking up these behavioral tendencies. Or at least better than the people who aren't driving all day.

spoonjim · 3 years ago
Humans use a lot of reasoning though. I once saw a guy approaching an intersection where he had the red light, and from far away I could see he was jamming out to the music, in his own world. I didn’t go on green (getting honked at by the guy behind me) and watched while the guy blew straight through the red light and slammed on the brakes when he was almost fully through it.
throwaway2037 · 3 years ago
Will you also tell us a time where you made a careless mistake that a self-driving car would not make?
rzimmerman · 3 years ago
The argument is more complicated than Elon makes it out to be. "Humans can drive with vision alone => computers can drive with vision alone" implies computers can do anything the human brain can do. It's not a given, and it's certainly not true for the compute power in a Tesla. It's completely possible that all of the following are true:

* Humans can drive with vision alone

* A sufficiently advanced compute system can drive with cameras alone

* Tesla's FSD computer cannot match human performance without additional sensors

Geee · 3 years ago
I've watched FSD videos quite a lot, and I've never seen an issue related to sensing / visual detection. It always draws the road and other cars with decent accuracy. All issues happen in the planner, i.e. the car makes a wrong decision, not because it can't see something, but because it doesn't know what to do in that situation.

The hard problems in autonomous driving are not related to sensing, but deciding what to do in weird or complex situations.

See this video for an example of a typical problem in recent FSD (and you can see from the screen that it's not related to sensing / detection): https://youtu.be/eY3z1kgX5hY?t=74

matthewdgreen · 3 years ago
It has trouble detecting speed bumps and potholes. I'm not sure if this is because the vision sensors fail, or because they have not been programmed to detect/display these features properly. Whatever the reason, the planner then accelerates right into them.
seanmcdirmid · 3 years ago
Sensor fusion is actually a hard problem. Yes, more different kind of sensors can lead to poorer results. Imagine having two different views in the world at unsynced points in time, and making decisions on that. It isn’t weird there might be a focus on LIDAR, or vision, but not both at the same time, at least for real time decision making.
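To make the timing issue concrete: a 10 Hz lidar and a 30 Hz camera almost never sample at the same instant, so fusing them means interpolating one stream onto the other's timestamps before you can combine measurements at all. A toy sketch (all rates and readings below are illustrative numbers):

```python
def interpolate(times, values, t):
    """Linearly interpolate a measurement stream at query time t."""
    for (t0, v0), (t1, v1) in zip(zip(times, values), zip(times[1:], values[1:])):
        if t0 <= t <= t1:
            w = (t - t0) / (t1 - t0)
            return v0 + w * (v1 - v0)
    raise ValueError("t outside measurement window")

lidar_t = [0.0, 0.1, 0.2]      # 10 Hz lidar timestamps (seconds)
lidar_r = [20.0, 19.0, 18.0]   # range to object, closing at 10 m/s

camera_t = 0.033               # first frame of a 30 Hz camera
print(round(interpolate(lidar_t, lidar_r, camera_t), 2))  # ~19.67 m
```

Real systems add clock synchronization, motion compensation, and uncertainty weighting (e.g. a Kalman filter) on top of this, which is where the "poorer results from more sensors" failure modes come from.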
foobazgt · 3 years ago
I don't have to imagine this. My brain is doing sensor fusion every moment of every day. A lot of the time that involves conflicting data, and your brain has to decide on the most reasonable interpretation. When it's not at its best you get things like optical illusions, nausea, etc.

It's a hard problem to solve, but that doesn't mean it's a bad idea. I think most people would agree that human beings are better off with the overlapping set of sensors that are available to us compared to the alternative.

ramraj07 · 3 years ago
Google did it, so that argument is moot.
mrjin · 3 years ago
Cannot agree more. That argument was not only flawed but outright false. We use at least hearing to help with driving. I'm not sure I'm alone in this: after closing all the windows, I feel a huge difference in the car, as if I were disconnected from the outside.

The thing is, Elon has lots of fans.

WesolyKubeczek · 3 years ago
This argument is really dumb. I understand deaf people can drive, too, but the additional auditory input is very helpful. When an emergency vehicle is far behind me, I hear the sirens and know to look in the mirror and move right to let it pass. I can hear the bells of a railroad crossing even if I cannot see its lights blinking due to the road's curvature, and start reducing my speed so I approach it smoothly. At some stupid junctions, like the one described below, I hear a car approaching well before I can see it.

That said, there is no point making a self driving rig if it’s going to only be as good as the best humans are. For adoption, it must be provably better in any situation imaginable: moving obstacles, weather, dust, dark drunk humans in the night, emergency vehicles, no lanes drawn on the road, read all road signs correctly (say my Yaris gives me mistaken readings where maximum mass is confused with speed, it doesn’t know what “built-up” area means with regard to speed limits, it sometimes reads a sign that belongs to an adjacent road). For it all to work, you need more input than vision. And redundancy. Lots of redundancy.

Someone · 3 years ago
> since humans can drive using just vision, that’s how we should do it in self driving cars

Not the strongest of arguments.

“since humans move around on foot, that’s how we should design machines that help us move stuff around”

“Since humans do long-distance communication by shouting, hand or smoke signals, that’s how we should do it in transatlantic communication”

“since birds can fly using wings, that’s how we should do it in machines that fly”

Deleted Comment

hprotagonist · 3 years ago
their argument is specious; i suspect the actual reason is “lidar/other sensor system components are expensive or hard to acquire”.
ndhillon · 3 years ago
This is a mischaracterization of what he is saying. Humans are unable to drive without vision. Even Chris Urmson agrees that LiDAR is a crutch that measures distance as opposed to computing it (albeit one that only works in perfect weather). Musk is saying that if you are going to use a sensor to measure distance, don't use photons in the visible light spectrum; instead, use photons that can penetrate objects (microwaves) to capture information that you wouldn't otherwise have. The challenge is building a high-precision RADAR, which Tesla is attempting to do. Some HW4 vehicles have it, but it's unclear whether it is required or not. Ultimately, building an L4 AV requires solving very difficult problems in computer vision, which is exactly what they are trying to do
cameldrv · 3 years ago
Musk did not publicly endorse high resolution radar until recently, and using high resolution radar is not a unique feature of Teslas. Waymo currently has 5 high resolution radars on their cars.
tsimionescu · 3 years ago
This is anachronistic. Musk was making the vision argument to claim that Teslas sold at the time (which lacked any kind of LIDAR or radar) had all of the hardware they needed for FSD, which was sold as "just a few months away via software update". I believe they still make this claim officially, since it's important for them to deny that they engaged in false advertising on this front.
spullara · 3 years ago
It is self-evident that you only need vision to drive. The question is whether you can make a computer smart enough to leverage vision to do it.
ClumsyPilot · 3 years ago
it is self evident that you only need legs to move around and gears are just extra weight. why do cars have gears?
bluecalm · 3 years ago
Why is it self-evident? If your argument is that people can drive with only vision, let me stop you right there and point to the fact that people are terrible drivers, and the bill is millions of injured and dead a year.

The main argument after a collision? "I didn't see them!"

j16sdiz · 3 years ago
No.

You listen to the road condition.

You sense the road bumping.

You feel the acceleration.

Your eyes have better dynamic range.

mytailorisrich · 3 years ago
I think the 'proper' approach is to use whatever is economically viable to get to the desired result as quickly as possible.

Ultimately the market will only ask two things: Does it work? And, how much does it cost?

No-one cares that "Ah but ours only uses vision".

martythemaniak · 3 years ago
It's smart if you realize that they never had a choice. No matter what Musk says, they could do vision-only or nothing at all. Google started their program in 2009 and had a pile of cash. Tesla started their program in 2015 ish and until 2019 they were in a precarious financial position. So they never had the money or time to take Google head on. With vision, they could at least use their position to their advantage.

And it's a good bet. There's tons more to self-driving than perceiving the world (lidar does not help a driver decide what to do in a novel and ambiguous situation; it's not a perception problem), and Tesla's vision is quite good based on all the FSD videos on YouTube.

jknoepfler · 3 years ago
Forgive me if this seems like a knee-jerk response, but literally zero human drivers use "vision alone" to drive. Humans drive with a spatio-temporal model of the world; visual, auditory, and haptic feedback; logical/symbolic rules about driving norms; emulative models of other drivers/agents in the driving environment; ethical judgments about what it is OK/not OK to collide with... and so on and so forth.

Reducing that to "vision" doesn't even make superficial sense.

mariusor · 3 years ago
I always wondered, and don't remember seeing it specified one way or another, do self-driving AIs use any sort of temporal modelling for the environment?

A model which uses predictive behaviour for the objects that the visual part detects seems orders of magnitude better than one that just does visual detection from scratch. It seems like a hugely wasted opportunity to have to model the world starting from zero for every frame received from the cameras.

Objects that pop in and out of the field of vision due to occlusion or other reasons seem to trip up Teslas (in at least some of the reported incidents), but even so, it's hard to believe that they didn't implement such an obvious improvement.

leobg · 3 years ago
That’s not the full argument, actually. The argument is:

You’ll need to solve vision anyway. Because you need to know what the object is. Is it a trash can or is it a dog? Will it move or stay?

If you had LiDAR, you’d still need sensor fusion with a camera to answer those questions, introducing more problems.

That’s why Elon says that any company that relies on LiDAR is doomed. LiDAR isn’t the gold standard - it’s the low hanging fruit.

seanmcdirmid · 3 years ago
It remains to be seen: Musk is making a bet that Tesla could win big or lose here. It is an interesting bet to be sure, but Waymo's bet is as well (sensor fusion, and LIDAR will become affordable over time). I like that we have some diversity in the bets being made at least.
voytec · 3 years ago
> Elon and Andrej Karpathy argued that since humans can drive using just vision, that’s how we should do it in self driving cars, but I think that’s a flawed argument.

Of course. The real argument is "LIDAR is expensive in comparison, and after scamming people for almost a decade, we have to be careful what kind of money we ask for". Elon never actually considered LIDAR overkill.

v0idzer0 · 3 years ago
That was just their reply to "is it even possible?". Their argument is that vision actually works better than other sensors: the signal it receives is higher quality. You just have to know how to process it the way the human brain does. It's a harder solution but, if solved, a better one.
ClumsyPilot · 3 years ago
> The signal processing data it receives is higher quality... It’s a harder solution, but if solved a better one

What is the basis for this claim? Have you seen the kind of data other sensors provide?

Radar gives you the exact distance and velocity of a vehicle that's hundreds of meters away, through fog, in the dark.

Cameras can't even give you distance for objects that are too far away.

Deleted Comment

graycat · 3 years ago
Humans also use sound as in tires squealing, something going "thump, thump, thump", someone screaming, horns, sirens, "crash, tinkle, tinkle, tinkle", etc.

Humans also use their sense of motion from the car sliding, rocking, tilting, jerking, etc.

Then humans integrate all such input, combine that with what they know about how cars work, traffic, people, the road, diagnose what has happened, apply years of prudent judgment, and then decide what to do.

E.g., maybe they have an ice chest with 20 pounds of ice which has melted and now, due to the motion of the car, has tilted, spilled, and is about to get the dog soaked with ice water. Good luck with the self-driving with a good response to such a scenario.

shaky-carrousel · 3 years ago
Humans can drive using just vision, but the autopilot doesn't have the processing capabilities of a human by a long long margin. So the smartest thing to do would be to compensate the worse processing capabilities with better senses.
NeuroCoder · 3 years ago
I'm not privy to the legit decisions around Teslas tech vs PR that I occasionally run across. Does this mean they are dedicated to only using traditional vision comparable to humans or is that just their focus at the current time? I can appreciate there's only so much tech that can be reliably developed for mass production at a given time but Elon also tends to make some odd choices/statements out of principle.
jrpt · 3 years ago
Obviously they know that. They are just betting that they can do it with vision only, to save on expenses, and also work with the way their cars work today.
jacquesm · 3 years ago
And they're betting with other people's money but keep the profits.
Reubend · 3 years ago
I agree. Vision only approaches are cheaper but far more difficult in the long run.

But that wouldn't be a problem if Tesla didn't advertise their cars as containing all the necessary hardware for self driving. If Tesla admitted that cameras were insufficient for self-driving, they would open themselves up to legal liability.

mupuff1234 · 3 years ago
If you view Tesla for what it is - a car company than the insistence doesn't seem dumb.

They got all the hype and PR for the AI angle without the money pit that comes with a real self driving project.

Even if they eventually get cannibalized by actual self driving cars then they still made billions and billions in the process.

b33j0r · 3 years ago
I’m personally surprised that this strategy plateaued as soon it has. It had an extremely sharp early-success curve.

Opinion: a lot of people in this space thought “no one wants to get in a car with a radar tower on top, can’t we make it just look like a car?”

And decisions proceeded from there.

entropicgravity · 3 years ago
My question to Musk would be: "Is there one cheap device that could be deployed widely on roads that would improve self-driving cars enough to push the technology into viability by a substantial margin?"
giantrobot · 3 years ago
> Elon and Andrej Karpathy argued that since humans can drive using just vision

Which is insipid because humans do not use just vision when driving. We use hearing, touch, and proprioception extensively when driving.

judge2020 · 3 years ago
Not to perform the main driving task though. Deaf people tend to be able to get a driver's license no problem, and proprioception is mostly irrelevant unless you need to know where your feet are to ensure they're on the correct pedals.
ekianjo · 3 years ago
> Tesla’s insistence on using vision alone is pretty dumb

If they can finally make it work well one day, that comment will be the one looking dumb in the end.

just saying that one should be careful about making this kind of assumptions...

> if given additional senses, wouldn’t humans use them for safer driving?

It's all a question of costs in the end. It would be safer to fly with 10 pilots in one airplane, but the economics make it work for 2-3 at most.

de6u99er · 3 years ago
I believe self-driving can be achieved through standards, e.g. if roads become digital. Street signs and cars could communicate the current state of traffic with other cars in real time. It should be left to the state to detect certain events, e.g. traffic jams, or whether a bicycle or pedestrian is about to cross the road. This way, liability could be fairly split between law- and car-makers.
medellin · 3 years ago
That sounds great if we want to rebuild our entire road infrastructure. So using this for self driving will never happen since it would require every country to implement a very costly standard.

Sounds great in theory but isn't realistic in the slightest.

GrumpyNl · 3 years ago
If thats the argumentation they go by, the car should be able to drive on tomatoes.
gmac · 3 years ago
Agreed: it’s like they’re insisting on playing on hard mode (or is that cheap mode?).
izzydata · 3 years ago
This implies that they can create a piece of software as intelligent as the human brain. Absolutely absurd reasoning. If they are being serious with such a statement then they are complete fools. More than likely they are just making excuses for why they are being cheap. Still fools, but maybe not complete fools.
pipodeclown · 3 years ago
It doesn't imply that at all. That's like saying openai is creating a human level intelligence with chatgpt. Emulating a single function a human is able to perform really well is not the same as aiming for human level intelligence.
LatteLazy · 3 years ago
"Humans can move without any wheels at all!" exclaimed the Tesla engineer as he announced Tesla's new Legs only, Wheels are for Losers policy.

Deleted Comment

bparsons · 3 years ago
Also, humans are pretty horrible at driving. They constantly commit traffic violations and kill each other. Driving kills about 1 in 103 people in their lifetime.

There is no other day to day activity where this level of risk is considered acceptable.

Veserv · 3 years ago
What are you talking about? Humans are shockingly good drivers. It is an average of ~80,000,000 miles, or ~5,000 years of regular driving, between fatalities, and that includes the motorcyclists, drunks, and people who do not wear their seatbelts, who account for ~70% of all deaths if I recall correctly. If you are an average driver who does not drive drunk and who wears your seatbelt, and you had started driving when agriculture was invented, you would not be expected to have gotten into a fatal accident yet.

Anybody who says humans are bad drivers is almost certainly underestimating the difficulty of replacing humans by a factor of 1000x.

samtho · 3 years ago
> Driving kills about 1 in 103 people in their lifetime.

More people have been killed by cars than I have.

https://en.m.wikipedia.org/wiki/Comparative_illusion

Dalewyn · 3 years ago
Human vision is simply the conversion of rays of visible light, sensed by our light sensors (aka eyes), into electric signals, which our brain then assembles into an image and processes.

Given that, perceiving the environment with radar, lidar, visible light, infrared, and so on is equivalent to human vision.

As far as I'm aware, Tesla uses more than just visible light sensors. Am I wrong in my understanding?

mjg59 · 3 years ago
Humans aren't capable of emitting an electromagnetic pulse, sensing the reflection, and using the time of flight to calculate distance between themselves and an object. So, no, lidar and radar aren't equivalent to human vision even if you extend the idea of human vision over a wider range of the EM spectrum.
thorncorona · 3 years ago
> Given that, perceiving the environment with radar, lidar, visible light, infrared, and so on is equivalent to human vision.

When paired with a general intelligence evolved over a couple million years, and visual sensors that have much more dynamic range than commercially available sensors yeah.

The problem is without those 2 things Teslas crash into fire trucks.

And different bands of EM have different properties so eyes aren’t equivalent to radar. Otherwise we would be able to see around corners.

And if humans had more sensory data we would definitely be integrating it into our driving. Otherwise ADAS tech wouldn’t be so commonplace in 2023.

selcuka · 3 years ago
Park sensors, for example, use radar but they are not equivalent to human vision because humans can't see the back of the car, and they can't measure the distance as accurately.
stephc_int13 · 3 years ago
The mix of unconstrained input (the real world, not a lab, not a simu, not a factory) and safety-critical output make this kind of problem particularly hard to tackle.

I don’t understand why anyone could think that it would be easy or fast.

Guys like Elon or Geohot have been simply delusional.

I think that being naïvely over optimistic is a good trait for innovation engineering, but we should also manage expectations…

Nothing is done until it’s done, proof is in the pudding.

bodge5000 · 3 years ago
> I don’t understand why anyone could think that it would be easy or fast.

Because progress went so quickly from "cars are entirely human operated" to "cars can nearly drive themselves" that it's assumed progress will continue at that pace.

We made this mistake time and time again before self-driving cars, and evidently continue to make it now.

visarga · 3 years ago
> so it's assumed that progress will continue at that pace. We've made this mistake time and time again before self driving cars and evidently continue to make that mistake now

We are making the same mistake now with chatGPT. We think if it progressed so much in the last 3 years, future progress will go at the same speed. But the last 1% is exponentially harder than the previous 99%.

delusional · 3 years ago
It's obvious that it's hard now, but at the time it was much harder to coherently argue that THIS was the mode shift of the progress chart.

An S-curve just looks exponential from the bottom. It's not until you hit the flattening part of it you see where the limit is.

chii · 3 years ago
Per mile driven, I would imagine self-driving cars have fewer accidents than human-driven cars.

It's still flawed at the moment, but surely it will continue to improve. Not to mention that the average human is already a terrible driver, and replacing them with self-driving surely nets more benefits than flaws, if not at this moment, then very soon.

piyh · 3 years ago
Geohot might be delusional, but his company has a value proposition that's viable.
runarberg · 3 years ago
What baffles me is that it is relatively easy to make most transportation needs very predictable under a very controlled setting. And we have been doing it for a while, we have even been operating self driving vehicles on these corridors that transport millions of people every day.

Fully automatic train systems are a proof that it can be done, but lack of imagination among policy makers makes them think the only way forward is with privately owned consumer market cars.

Fricken · 3 years ago
Geohot never made any delusional or outrageous claims. His company set out to build and ship an advanced driver assist system and they did.
grecy · 3 years ago
> * I don’t understand why anyone could think that it would be easy or fast *

Like electric cars, or landing rockets, or even getting to the moon.

I don't think anyone honestly thought self driving cars would be easy or fast, but they're getting better every single month, and sooner or later they're going to be better than human drivers.

jackmott42 · 3 years ago
are they getting better every month? a company like Waymo may have good data on this. Tesla updates seem to be doing more like a random walk.
rektide · 3 years ago
It's just crazy because you can get to driving great 99.99999% of the time very very quickly. The unceasing attention & ability to keep track of known unknowns is a huge superpower versus humans.

But all the other cases, that wild element of reality, is so endlessly widely puzzling.

monero-xmr · 3 years ago
The amount of engineering, testing, rigor, and so on for a rover on Mars is stunning. And they are working with much more constrained environments without human actors.

Society won’t accept robots that kill people. You can bring out statistics until you are blue in the face, but a major reason car accidents are tolerated is that a person can be convicted if they break the law. A faceless AI that hides behind insurance won’t cut it.

Enthusiasts of self-driving cars can be hyper-logical about statistics, safety analyses, etc. while simultaneously ignoring all the other areas of political life that are based on emotion rather than rational logic. If we can’t convince huge fractions of society about basic truths - pick whatever truths you are passionate about that half the country disbelieves - why do you think society will accept robot cars that cause accidents (even 1)? It’s not going to happen.

valine · 3 years ago
Tesla gets a bad rap because of their notorious missed deadlines. They continue to chip away at the problem though, and have a large number of people paying them to use their beta software.

The fact that they’ve figured out how to make income from unfinished self driving software makes them, in my opinion, likely to succeed eventually. For everyone else self driving is a money pit. Tesla can continue working the problem indefinitely until it’s solved.

adamjcook · 3 years ago
In my book, Tesla gets a bad rap for providing an unvalidated, should-be safety-critical system to run-of-the-mill consumers without an accompanying Safety Management System.

The fact that they profit handsomely off this structurally dangerous wrongdoing is just the cherry on top.

And, without robustly maintaining a systems safety lifecycle (which, by necessity, must incorporate a Safety Management System)... no technical progress is quantifiable by anyone, including Tesla.

Tesla effectively throws a system over-the-wall and throws it all on the human driver and on the public.

rektide · 3 years ago
People keep using their Necessary Capital Words to say what we have is balderdash. Another post guffaws that there isn't an Operational Design Domain.

I agree that where we are is balderdash: dishonest, unclear about itself in the extreme, and lacking. But I detest this My Paradigm Is Required phrasing. Say what you think, please! Browbeating the topic with particular/specific engineering dogmatisms is unhelpful and unclear: it leans on authority while offering no assertable claims anyone else could contest. This kind of hollow criticism degrades the discussion.

valine · 3 years ago
Nothing you said is wrong, I just really, truly don’t care. FSD makes fewer mistakes on routes I drive than it did a year ago, so whatever they’re doing seems to be working. Complaining that people like me shouldn’t be allowed to purchase safety critical software just deepens my resolve to keep using it.
Consultant32452 · 3 years ago
Every day thousands upon thousands of new teenagers start driving. One day, maybe today, the self driving technology will be safer than those teenagers and it will be a moral crime to put them in control of the vehicle. The same applies for the elderly.
justapassenger · 3 years ago
No, Tesla gets a bad rap because they constantly lie. And they lie because it allows them to get billions in basically free capital.
Reubend · 3 years ago
Exactly. Missing a deadline would be understandable, but Tesla has been consistently dishonest about their software.

Unfortunately, neither consumers nor the market have punished them for it. So it's inevitable that they will continue, and that other companies will follow.

Animats · 3 years ago
Just because a scam is profitable for the scammer doesn't mean the thing being sold works.
valine · 3 years ago
It kind of works, that’s the point. I pay for an FSD subscription because I get enjoyment from my car driving me around. Am I being scammed?
paulddraper · 3 years ago
Not necessarily, but in this case
hiddencost · 3 years ago
Tesla chose to kill people by shipping a low quality system.

They did this because they were (and remain) massively behind the market leaders, and the only way to catch up was to start bulk data collection early with a system that wasn't ready.

ericd · 3 years ago
Strong allegations, what is the death rate per million miles of FSD vs baseline? Is that a publicly known figure?
HDThoreaun · 3 years ago
This is like saying Ford chose to kill people by shipping a car. Guess what, cars kill people, society has accepted that.
idopmstuff · 3 years ago
The Model Y was the best-selling car in the world in Q1; they're not massively behind the market leaders.

https://driving.ca/auto-news/industry/the-tesla-model-y-just...

gdiamos · 3 years ago
I give Tesla a lot of credit for shipping features that I can buy.

I use lane following, summoning (which saves me time), and the windshield wipers every day.

Does anyone remember will it wipe? https://youtu.be/0SSYFMtdJ5k

I’m also furious about the hype, and how much of my time it has personally wasted.

I sometimes have to turn off the auto wipe, when there is a lot of glare…

Swenrekcah · 3 years ago
At some point one can’t just claim a missed deadline. They have in fact relied on misleading marketing massively inflating their capabilities in this regard.

Which is sad because they’re still the best value for money you can get in the EV space. The only reason I haven’t bought one is because I hate touchscreens.

MangoCoffee · 3 years ago
I'm impressed by this comparison between Tesla FSD vs Waymo.

Whether Tesla counts as Level 2 or Level 4 autonomous, I'm very impressed with Tesla's FSD.

edit: Tesla FSD reached the destination first by taking the highway, while Waymo took the local route and needed far longer to reach the same destination.

https://www.youtube.com/watch?v=Hv9HtWUf27s

marques brownlee FSD demo: https://www.youtube.com/watch?v=9nF0K2nJ7N8

adamjcook · 3 years ago
That "comparison YouTube video" is absurd and dangerous, because, at minimum...

A Level 4-capable vehicle (a Waymo vehicle) is an incomparably different system than a Level 2-capable vehicle (a vehicle equipped with FSD Beta).

The Waymo vehicle has a design intent such that there is no human driver fallback requirement within their vehicle's Operational Design Domain (ODD).

The Tesla vehicle has a de facto design intent such that the human driver is the fallback at all times - which makes the control relationship between the human driver and the automated system exactly the same as if the Tesla vehicle was equipped with no automation at all.

The risk profiles and failure mode analyses are Night and Day different and, therefore, the validation traits between these two vehicle are Night and Day different.

But, more than that, there are no guarantees that:

- The human driver of the FSD Beta-active vehicle shown in that video did not manipulate any of the vehicle controls out-of-view that clandestinely assisted the vehicle without deactivating the automated system (possible and inherent Human Factors safety issues with that aside); and

- The creators of this comparison video did not select the most visually-performant run out of several attempts.

Naturally, since we are dealing with safety-critical systems here, assumptions of "positive safety" are not compatible with any internal or external analysis.

Lastly, I have yet to see a video involving FSD Beta where indirect and "unseen" systems safety issues were satisfied. Appearances can be deceiving and deadly with safety-critical systems.

thorncorona · 3 years ago
The problem is that the complexity is hidden in the edge cases. And those edge cases are non obvious and deadly.

To put it succinctly, the difference in safety between a car manufactured in 1953 vs 2023 is not fully obvious to a driver who has not been in an accident.

Google had a self driving car over 10 years ago, that is at the level of FSD, but their approach is to go straight to L5 for safety.

https://youtu.be/TsaES--OTzM

asdgkasdngionio · 3 years ago
Unless their approach is completely unworkable, which by all appearances it is. It doesn't matter how much you bet if it's on the wrong horse.

Tesla's own systems from five years ago, when they used a proper sensor suite, worked better than what they ship today.

jillesvangurp · 3 years ago
That, and Tesla is insanely profitable and not running out of resources any time soon. The opposite, actually. I don't really care about any arbitrary timelines here, because they don't have any real deadline other than "Elon Time", which is probably a bit of a blessing in disguise. By putting on pressure to deliver quickly, he actually gets results, and some of those results in other areas have led to his companies being very profitable and unlocking multiple multi-billion-dollar markets. As long as he believes self-driving can be done and is well funded, he'll continue to pursue this. Rapid iteration and quickly adapting to challenges and setbacks is a good strategy here.

I think there are two misconceptions in this space:

1) People mostly only talk about US companies and self driving cars in the US. China actually has a lot of self driving cars as well. Mostly following strategies similar to Waymo and with some borrowing from Tesla. And China is nowhere near as burdened by trigger happy lawyers slowing everything down. It's an ideal test ground for self driving from a legal point of view. And they are well funded. And they are running circles around most of the US in terms of car manufacturing. In general, the rest of the world will need to be covered by self driving eventually. Forget Phoenix; that's easy and rather boring. Can self driving cars manage in Italy, Spain, or conquer the German autobahn? I'm not pessimistic. But it won't be next year.

2) Painting this as a black-and-white, all-or-nothing game. Waymo is leading the way here by focusing on where it can work and gradually expanding their abilities. They are not even trying to make it work everywhere; just where they need it to work to make money. It's expanding its area of operation, and as it does, the area it operates in adapts to self-driving cars rather than the other way around. And it undeniably moves people from A to B at this point, seemingly without major incidents. The one thing that makes self-driving cars hard is having human drivers around. Easy solution: get them out of the equation. This technology is basically going to be driven by safety statistics and cost. As it gets better (i.e. safer and cheaper), it gets easier to optimize roads for self-driving and the problem as a whole gets easier to deal with. Meanwhile there are big market opportunities in personal transport, freight, containers, etc. where self-driving makes a lot of sense. Those are already happening.

My prediction is that Tesla will do well where companies like Waymo do well in as well and that they will meet in the middle sooner rather than later. And being positioned as a safety feature, all Tesla needs to keep on doing is failing to cause a lot of fatalities and gradually keep on improving. The rest is just raw numbers. As soon as there are millions of cars driving semi autonomously most of the time failing to cause a lot of trouble, confidence increases and it gets harder to argue there is a problem. Eventually, people will let go of their steering wheels. Insurance companies and cost will incentivize them. Based on the numbers.

ikhatri · 3 years ago
So with the disclaimer that these views don't reflect those of my employer, as someone who works in the industry I think this article is basically spot on. The only point I would add is that the top line cost for all these vehicles is quite high right now, so scaling up the service alone isn't really a solution to the profitability problem. I won't get into specifics, but I think this blog post from Cruise summarizes the point pretty well (https://getcruise.com/news/blog/2023/av-compute-deploying-to...). The term "edge supercomputer" really is the best way to describe AV hardware deployment. And that doesn't even cover the sensor suite which is quite costly as well.

So if I was a betting man, I'd say that you can expect Cruse, Waymo & others to scale a little bit now, just to show investors that they can but for them to really save the bulk of the scaling (to hit that targeted figure of 1B/yr of revenue) until after they've found a way to get the costs down. That's going to come in the form of more bespoke vehicles that are better vertically integrated with custom hardware and sensing solutions (like the Cruise Origin).

kbos87 · 3 years ago
So many armchair opinions here about Tesla FSD. It drove me 175 miles today from a dense and complicated city to a distant destination without me intervening once. Reading the comments here you’d think it’s completely smoke and mirrors.
jillesvangurp · 3 years ago
Yep, it's either science fiction or science fact. There seems to be a lot of opinionated people that insist they can counter reality by yelling harder. But their points of view don't necessarily align with the facts.

My view is very simple. Lots of people insist that this stuff is dangerous and will kill people. Lots of Tesla FSD-capable cars are on the road and are racking up billions of miles. Where are the traffic deaths? Where are the countless crashes? Those supposedly dangerous situations escalating all the time? It's all failing to happen as people insist it ought to. Maybe that's because they are wrong. If anything, it seems Tesla runs circles around the likes of Volvo in terms of car safety by now. They certainly insist so and claim to have the numbers to prove it. And I'm not hearing a lot of statistics that counter that.

Meanwhile, there are of course lots of traffic deaths. The vast majority of which involve human drivers making fatal mistakes and getting themselves into trouble. Even adjusted for relative miles driven by humans and AI, the numbers aren't good for humans. They are terrible actually. It's not that hard to do better than that. AI vs. drunk, tired, reckless, moronic, etc. human drivers is basically no contest. And there are a lot of those. Roads are dangerous because of that.

kbos87 · 3 years ago
Not to mention that if you are using FSD, your attention is being monitored. I can’t pick up my phone or look away from the road for more than a few seconds without the car warning me. Meanwhile the rest of the driving public almost always has their phone in hand.

Deleted Comment

bottlepalm · 3 years ago
I did over 700 miles with it over the Memorial Day weekend. It worked really well, especially on the highway, with no issues; surface streets were probably 90%. Each release gets noticeably better. I actually notice now, when someone drives manually, how it's not as smooth as FSD in many situations.
Reubend · 3 years ago
That's fantastic! But hopefully you were paying attention, ready to take control at any point in time. I've personally been inside a "self-driving" Tesla that would have crashed without human intervention.

So while we can give Tesla props for shipping a very useful feature, it's not reliable enough to be considered "fully self driving". There could easily be fatal consequences for consumers who don't babysit it diligently enough.

kbos87 · 3 years ago
I don’t disagree with you. FSD isn’t yet what the name claims, though they’ve made shocking progress in just the last 3 months. If your experience with FSD was longer ago, know that it’s almost a completely different system. I have full confidence it will get there, and it isn’t going to take long from where it is today.

Also, the eye tracking update that was rolled out a few months ago is strict - you are either paying attention to the road and never looking at a phone, or FSD is going to shut itself off and suspend your use of it. It easily holds you to a higher standard than the average driver holds themself for being engaged with the road (which is a low bar.)

oldgradstudent · 3 years ago
What's your life expectancy if you let Tesla FSD do all the driving for you, without supervising it?
kbos87 · 3 years ago
There’s eye tracking in the cabin now. It’s simply not possible to use it without supervising it.
MangoCoffee · 3 years ago
I've noticed on HN that if you give Tesla/Elon any credit, you get downvoted.
LanceJones · 3 years ago
Absolutely true. A lot of emotions on this forum.
bboygravity · 3 years ago
This tells me the FUD/smearing campaigns by hedge funds that are (or have been) short Tesla have been extremely effective at brainwashing everybody including the relatively intelligent who roam HN.

The average (hedge fund owned) media outlet, especially the financial ones, basically read "please remember to hate Elon".

fragmede · 3 years ago
Just responding to the headline: Cruise just started rolling out service to the whole city of San Francisco. Before, it was limited to areas and times that were hard to take advantage of, but now it's opening up to the whole city (still late night only), so it's started competing with Uber and Lyft for late-night rides. We'll just have to wait and see what effect it has on the transportation system as a whole.
mkw5053 · 3 years ago
And it’s so much better than a Lyft or Uber. They are clean, don’t try to talk to you, and take the GPS-recommended routes.
moomoo11 · 3 years ago
I mean.. they're clean right now.

Think about when they're GA. You're going to have all sorts of people, not just the invited few as it is right now (fellow techies and such, who mostly are not going to trash and destroy property).

Public transit is gross most of the time, with trash, vandalism, and grimy seats. Do you really think people are going to treat cars well when they're all alone inside them (when have cameras stopped some people from being vile)?

IMO these AI car services should be priced at a premium above Uber/Lyft. I want individual door to door service any time, with a guarantee it will be clean and comfortable. Otherwise, I'm always opting for my own car.

tuatoru · 3 years ago
> Sam Abuelsamid, the industry analyst, told me that he “did some math” on Cruise’s plan to generate a billion dollars in revenue in 2025. He concluded that the goal was “actually probably achievable.” He estimates that the goal corresponds to a fleet of around 6,000 driverless vehicles

Let's plug in some reasonable guesses and see. 6000 vehicles x 10 trips/day average x $25/trip average x 250 days/year: $375M.

To an analyst that may be practically the same as a billion, but not to an accountant. And that's gross revenue. Take off all the car costs as well as the overhead.

(Remember: these vehicles can't cope with the busiest, most profitable locations for taxis.)
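The back-of-envelope estimate above can be checked in a few lines. Note that the fleet size is the analyst's estimate, while trips/day, fare, and operating days are the comment's own guesses, not Cruise figures:

```python
# Rough gross-revenue estimate using the comment's assumed figures
fleet = 6000        # vehicles (analyst's estimate)
trips_per_day = 10  # trips per vehicle per day (guess)
fare = 25           # dollars per trip, average (guess)
days = 250          # operating days per year (guess)

revenue = fleet * trips_per_day * fare * days
print(f"${revenue / 1e6:.0f}M gross")  # prints "$375M gross"
```

Well short of $1B, even before subtracting vehicle and overhead costs.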

WheatMillington · 3 years ago
I am puzzled by your assumptions. 10 rides per day? How many rides do you think a taxi/uber driver does in one SHIFT? 250 days - why? I would expect more like 95% uptime and 95% availability, for 329 days.
carstenhag · 3 years ago
Why would a self-driving car only drive for 250 days? You have to consider maintenance/repairs, but that won't be 115 days. The 10-trips-a-day guess is probably also low, but seems okay as an average to me.
ekianjo · 3 years ago
With 330 days a year it still brings you to half a billion, which is half of what's considered "achievable".

This means they would need 20 trips per day on average, or higher pricing on average for each trip.
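Working backwards from the $1B target (reusing the thread's assumed 6,000-vehicle fleet and $25 average fare, neither of which is a confirmed Cruise number):

```python
# How many trips per vehicle per day would hit $1B gross revenue?
fleet = 6000              # vehicles (assumed, from the thread)
fare = 25                 # dollars per trip (assumed)
days = 330                # operating days per year
target = 1_000_000_000    # revenue goal in dollars

trips_needed = target / (fleet * fare * days)
print(f"{trips_needed:.1f} trips/vehicle/day")  # prints "20.2 trips/vehicle/day"
```

Roughly double the 10-trips-a-day guess, consistent with the comment's conclusion.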

tuatoru · 3 years ago
Because the demand is only there for 250 days.
dvh · 3 years ago
You don't have to estimate; use the average income of a taxi driver, as that's what an AI taxi will earn.
carstenhag · 3 years ago
Or even more. An AI taxi won't speed or nearly cause accidents due to braking way too late. Greetings from my recent Poland trip :D (But yeah, time will tell whether they will cause unnecessary accidents in other ways)
jrm4 · 3 years ago
I can't be the first person to think that trying to do the "every possible situation, individually" thing is stupid.

Why no focus on e.g. a top-down federal "system" that would take over the driving, e.g. just on the highways/interstates? Seems like that would be orders of magnitude EASIER.

jboy55 · 3 years ago
I always felt there would be 'carpool'/express lanes that would just be for automatic driving cars. They would communicate with each other so they could pack together really tightly. You'd enter your destination, then drive your car to the point where you enter the lanes, and the car would take over. It would warn you before your exit, then you'd take over.
bandyaboot · 3 years ago
I would like to see such a system installed at a high volume intersection. When the light turns green the system instructs all the cars lined up to accelerate simultaneously. The motivation isn’t necessarily practicality, just pure entertainment value.
judge2020 · 3 years ago
Another benefit could be to have extremely fast lanes instead of more packed-together lanes. With good enough coordination, or a block-zone system like trains (or roller-coaster trains), you could achieve >120mph cruising. With both of these possibilities, though, the requirement is generally that the car completely takes over driving, with no way for the human to assume manual control to break the system or otherwise create an unsafe situation for other road users.
CSMastermind · 3 years ago
Why do it on public roads? There are oil fields in North Dakota where a lot of driving needs to be done and corporations could completely control that environment.

Why not automate cargo ships? There's far less to hit in the ocean and far fewer inputs required.

Why still have human pilots in airplanes when they're largely automated?

neysofu · 3 years ago
> Why not automate cargo ships? There's far less to hit in the ocean and far fewer inputs required.

Maneuvering a ship is but a very small part of all that's needed to keep a ship operational: engine maintenance, paperwork, dealing with port agents and customs, operating the radio, etc. Much of that can be automated, I'm sure, but the return on investment is quite poor. Crew salaries are a small part of all operating costs, so you're not saving much by removing them from the equation. (Add to that the fact that large cargo ships can carry over $1B worth of cargo, so I'm not surprised that the economics don't make much sense.)

doublespanner · 3 years ago
It's all about the cost of a driver relative to the activity and risk; with taxis it's a much higher proportion than it is for aeroplanes.
umanwizard · 3 years ago
The US is functionally ungovernable due to its political structure, so the federal government can’t really just “do things” that are that ambitious, even if they make sense. If something like that happened at all it would probably be in China.
jrm4 · 3 years ago
You should read more? We've already done railroad, long distance telephone and arguably the internet.
mulmen · 3 years ago
That’s the idea GM had way back in 1958: https://youtu.be/cPOmuvFostY. Basically ATC for a high speed self driving lane.