The point where I soured on Musk was when he ditched radar/lidar and tried to go with cameras alone. That made me realize he is not the genius he is made out to be but instead a fraud/charlatan, and over the years his statements on different topics have only hardened that belief. Why the fuck would you want AI cars to merely match humans? You should want them to be many times better, and radar/lidar/sonar-type tech makes them better.
This was purely an effort to improve margins on the cars, dressed up in other kinds of rationale. From the way I've seen him operate his companies, treat his employees, and now work with the government, he has a high tolerance for risk paired with a very low tolerance for perceived inefficiencies, plus too little patience to fully understand the problem.
He really embodies the ethos of "move fast and break things". So let's fire 80% of the staff, see what falls down, and rehire where we made "mistakes". I really think he has an alarmingly high threshold for the number of lives we can lose if it accelerates the pace of progress.
It's pretty funny, because lidar used to cost many thousands of dollars, it's now down to hundreds, and they're still sticking to just regular cameras. Funny in the sense that Teslas are many times more dangerous than lidar-enabled cars, but anyway.
The sour irony is that "move fast and break things" was formulated in the low-stakes world of web entertainment software, which was able to become so prominent precisely because of the stability of having our more pressing needs being predictably taken care of (for the most part).
While it was absolutely vital to getting the costs of the original Tesla Roadster and SpaceX launches way down… it can only work when you are able to accept "no, stop" as an answer.
Rockets explode when you get them wrong, you can't miss it.
Cars crashing more than other models? That's statistics, which can be massaged.
Government work? There's always someone complaining no matter what you do, very easy to convince yourself that all criticism is unimportant, no matter how bad it gets. (And it gets much worse than the worst we've actually seen from DOGE and Trump — I don't actually think they'll get to be as bad as the Irish Potato Famine, but that is an example of leaders refusing to accept what was going on).
> He really embodies the ethos of "move fast and break things". So let's fire 80% of the staff, see what falls down, and rehire where we made "mistakes". I really think he has an alarmingly high threshold for the number of lives we can lose if it accelerates the pace of progress.
Which seems like an antidote to the current culture of "don't build anything anywhere."
Multiple independent sensor streams are like having multiple pilots fly a plane instead of just one. The chance of a fatal error (a false-negative identification of an object near the car) decreases from p to p^n with n independent streams. The size of that decrease is not intuitive: if p = 0.0001, the introduction of a second pilot, or a second independent sensor stream, takes it down to 0.0001^2 = 10^-8.
Now, the errors are not all independent, so it's not quite as good as that; but many classes of error are independent (e.g. both pilots having a heart attack, versus just one), so you can get pretty close to that p^n. Musk did not understand this. He's just not as smart as he's made out to be.
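To make that concrete, here's a back-of-envelope sketch of the arithmetic (the independence assumption and the common-cause fraction are illustrative numbers, not anyone's measurements):

    p = 1e-4  # chance a single sensor stream misses an object (false negative)
    for n in (1, 2, 3):
        print(n, "independent stream(s): miss probability =", p ** n)

    # With correlated failures the gain shrinks. If a fraction c of misses
    # are common-cause (shared by every stream), a crude model is:
    c = 0.10  # assumed share of common-cause misses, purely illustrative
    print("2 streams, 10% common-cause:", c * p + (1 - c) * p ** 2)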
I'd quibble with your first sentence, in that having multiple pilots is not necessarily a good thing, like having multiple cooks.
You have a main and a backup pilot, but either one must be 100% capable of doing it on their own. The backup is silently double-checking, but their assigned tasks are more about ensuring the copilot doesn't just check out, because they're human. If the copilot ever has to say "don't do that, it's going to kill us all", it's a crisis.
Lidar is a good backup, but the car must be able to work without it. You can't drive with lidar alone; it's like driving by Braille. Lidar can't even read a stop light. If the car cannot handle things with just the visuals, it should not be allowed on the road.
I concur that it is terrifying that he was allowed to go without the backup that stops it from killing people. Human co-drivers are not a good enough backup.
But he's also not wrong that the visual system must be practically perfect -- if it's possible at all. Which it surely isn't yet.
Don't think that's the right analogy. Realistically you'd aim to combine them meaningfully. A bit like two eyes give you depth perception.
You assume 1+1 is less than two, when really you'd aim for >2.
I think that's an oversimplification, and helps mainly in the case where one sensor is totally offline.
If you have two sensors, one says everything's fine but the other says you're about to crash, which one do you trust? What if the one that says you're about to crash is feeding you bad data? And what if the resulting course correction leads to a different failure?
I'd hope that we've learned these lessons from the 737 Max crashes. In both cases, one sensor thought that the plane was at imminent risk of stalling, and so it forced the nose of the plane down, thereby leading to an entirely different failure mode.
Now, of course, having two sensors is better than just having the one faulty sensor. But it's worth emphasizing that not all sensor failures are created equal. And of course, it's important to monitor your monitoring.
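To make the failure mode concrete, here's a toy sketch of the difference between acting on a single alarming sensor (MCAS originally read a single angle-of-attack vane) and cross-checking two sensors before acting; the thresholds and function names are invented for illustration:

    STALL_DEG = 15.0        # hypothetical angle-of-attack alarm threshold
    MAX_DISAGREE_DEG = 5.0  # hypothetical cross-check tolerance

    def act_on_single(aoa_a):
        # One sensor, no cross-check: a single stuck vane forces action.
        return "push nose down" if aoa_a > STALL_DEG else "do nothing"

    def act_with_cross_check(aoa_a, aoa_b):
        # Two sensors: refuse to act on data they can't agree on.
        if abs(aoa_a - aoa_b) > MAX_DISAGREE_DEG:
            return "alert crew, disable automation"  # fail safe, not fail active
        return "push nose down" if min(aoa_a, aoa_b) > STALL_DEG else "do nothing"

    print(act_on_single(74.5))            # the dangerous case: acts on bad data
    print(act_with_cross_check(74.5, 5))  # disagreement hands control back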
It's more like, a pilot has access to multiple sensors. Which they do.
I thought it was consensus that the move away from lidar was driven by a lack of supply compared to desired EV production numbers, and the things said about making vision only work have all been about coping with that reality.
I dunno because I'm pretty sure that each sense a person loses increases their risk of mortality. If what you are saying is true, then shouldn't we all be better off wearing blinders, a clothespin over our noses, and rely on echolocation?
More likely, he is a ruthless ** who doesn't care about people dying; slightly increasing the profit margin is worth more to him than some people dying.
Regulators and laws allowing self-driving companies to wriggle out of responsibility didn't help either.
Lidar was interesting to him as long as it seemed like Tesla could maybe dominate the self-driving market through technological excellence. The moment it was clear that wouldn't work, he abandoned technological excellence in favor of micro-optimizing profit at the cost of real safety.
Which shouldn't be surprising to anyone. He also micro-optimized workplace safety away at SpaceX, not only until it killed someone but even after it did (stuff like this is why there were multiple investigations against his companies, until Trump magicked them away).
The thing is, he has intelligent people informing him about this stuff, including how removing lidar will, statistically speaking, kill people. So it's not that he doesn't know; it's that he doesn't care.
Just see him talking about things at Neuralink. Musk wouldn't exist if it weren't for the people working for him. He's a clown who made it to the top in a very dubious way.
> Musk wouldn't exist if it weren't for the people working for him.
I've decided Musk's core talent is creating and running an engineering team. He's done it many times now: Tesla, SpaceX, PayPal, even Twitter.
It's interesting, because I suspect he isn't a particularly good engineer himself, although the only evidence I have for that is that he tried to convert PayPal from Linux to Windows. His addiction to AI as a way to get results quickly isn't a good look either. To make the product work in the long term, the technique has to get you 100% of the way there, not the 70% we see in Tesla and now DOGE. He isn't particularly good at running businesses either, as both Twitter and his solar roofs show.
But that doesn't matter. He's assembled lots of engineering teams now, and he just needs a few of them to work to make him rich. Long ago it was people who could build train lines faster and cheaper than anyone else that drove the economy, then it was oil fields, then I dunno - maybe assembly lines powered by humans. But now wealth creation is driven by teams of very high level engineers duking it out, whether they be developing 5G, car assembly lines or rockets. Build the best team and you win. Musk has won several times now, in very different fields.
I see this get thrown around once in a while and I really don't get it. Isn't this true of basically every leader? Gates, Jobs, Buffett, Obama: they all wouldn't exist without their teams. Isn't that just obvious? Isn't one of the important markers of a good leader the ability to build a good team?
You do want them to be better than humans, but vision quality is not really a major source of human accidents. Accidents are typically caused by driving technique, inattention, or a failure to accurately model the behavior of other drivers.
Put another way - would giving humans superhuman vision significantly reduce the accident rate?
The issue here is that the vision based system failed to even match human capabilities, which is a different issue from whether it can be better than humans by using some different vision tech.
> Put another way - would giving humans superhuman vision significantly reduce the accident rate?
Yes? Incredibly?
If people had 360° instant 3D awareness of all objects, that would avoid so many accidents. No more blind spots, no more missing objects because you were looking in one spot instead of another. No more missing dark objects at night.
It would be a gigantic improvement in the rate of accidents.
Never driven before?
Human eyes are in many ways far superior to reasonably priced vision sensors. This isn't giving humans superhuman vision; it's changing the tradeoffs human vision has (without changing the cognitive process it coevolved with to begin with, which is the most important part of why we get into accidents).
There is no affordable vision system that's as good as human vision in key situations. LiDAR+vision is the only way to actually get superhuman vision. The issue isn't the choice of vision system; it's choosing vision alone in the first place. Besides, the lesson from the human sensory system is to have sensors that go well with your processing system, which again would mean LiDAR.
If humans could integrate a LiDAR-like system where we could be warned of approaching objects from any angle and accurately gauge the speed and distance of multiple objects simultaneously, we would surely be better drivers.
Isn't radar/lidar less like super vision and more like spidey sense? I'd love to give human drivers an innate sense of exactly how far away things are and how fast they are closing.
>This made me realize he is not the genius he is made out to be but instead he is a fraud/charlatan and over the years his statements on different topics have only hardened that belief.
That was Karpathy's decision [1] and, yes, I also have that perception of him.
I know this is not going to be well received because he's one of HN's pet prodigies but, objectively, it was him.
1: https://www.forbes.com/sites/bradtempleton/2022/10/31/former... (one of many)

I read that as Musk wanted this done and asked Karpathy to find a way.
> The point I soured on Musk was when he ditched radar/lidar and tried to go with camera's alone. This made me realize he is not the genius he is made out to be but instead he is a fraud/charlatan and over the years his statements on different topics have only hardened that belief.
Yeah, he was arguably wrong about one thing so his building both the world's leading EV company and the world's leading private rocket company was fake.
As they say, the proof of the pudding is in the eating. Between Tesla, SpaceX, and arguably now xAI, the probability of Musk's genius being a fluke or fraud is close to zero.
> probability of Musk's genius being a fluke or fraud is close to zero.
We already know he's an objective fraud because he literally cheats at video games and was caught cheating. As in, he hired people to play for him and then pretended the accomplishments were his own. Which maps very well to literally everything he's done.
He was not wrong about one thing; he was frequently wrong. He is a highly charismatic bullshitter who gets away with fraud and lies, and who can secure help from people when he needs it.
But he is frequently wrong; it just does not matter. He was occasionally right, like with Tesla back then.
I can see that cutting out the LIDAR could qualify as "cheap." And maybe he's taking a risk in betting on lowering the cost by simplifying the technology.
But why make the jump to "fraud/charlatan"? Every system needs to be finite. We can't invest in every bell and whistle. Furthermore, he's upfront about the decision. Fraud requires deception.
According to Musk, my car was supposed to be capable of unsupervised driving by now. The shift to vision-only has consumed all of their resources for the past several years, and my car has been left behind. There have been giant leaps in vision-only, but it still isn't better than vision+radar was.
So, I was deceived. I didn't buy the car because of the deception, but I did buy FSD because of it.
Also, FSD disengaging when it gets sensor confusion should be considered criminal fraud. FSD should never disengage without a driver action.
It's because the sensor suite for lidar is expensive and HD cameras are basically a commodity at this point.
So if your goal is to pump out $20k self driving cars, then you need cameras to be good enough. So the logic becomes "If humans can do it, so can cameras, otherwise we have no product, no promise."
Cameras have poor dynamic range and can be easily blinded by bright surfaces. While it is true that humans do fine with only eyes, our eyes are significantly better than cameras.
More importantly, expectations are higher when an automated system is driving the car. It is not sufficient if, in aggregate, self-driving cars have fewer accidents. If you lose a loved one in an accident where the accident could have been easily avoided if a human was driving, then you're not going to be mollified to hear that in aggregate, fewer people are being killed by self-driving cars! You'd be outraged to hear such a justification! The expectation therefore is that in each individual injury accident a human clearly could not have handled the situation any better. Self-driving cars have to be significantly better than humans to be accepted by society, and that means it has to have better-than-human levels of vision (which lidars provide).
I wish this myth would die. Anyone who picks up a camera would know it isn't true; there are many things even very expensive cameras can't do that humans can. Specifically, the mix of high acuity when needed but wide angle, low-light performance on moving subjects, and tracking fast objects is something only a camera system costing tens of thousands of dollars can do, and those are all relevant to driving.
I drive a beater used car, and I've contemplated installing aftermarket lidar on it. I don't want to drive the way a human does, relying only on being able to see everything by turning my head.
Computer vision has turned out to be a very tough nut to crack, and that should have been visible to anyone doing serious work in the field since at least 15 years ago.
In any case, any safety-critical system should be built with redundancy in mind, with several subsystems working independently.
Using more and better sensors is only a problem when building a cost-sensitive system, not a safety-critical one. And very often those sensors are expensive because they are niche, which can be mitigated with mass scale.
Radar. I remember some of his nonsense about disambiguating, despite the previous AP disambiguating just fine. Same with the rain sensor. This is a cheap part. Radar isn't very expensive either.
To be fair, the ghost braking on TACC has decreased. But I tend to control my wipers with voice.
I still get just as much phantom braking. I've narrowed it down to the car using map speed-limit data and losing track of where it is. It's very consistent on the 5 south in LA. The same spot every day.
Good point. You don't get rid of a technology that improves your results drastically until you have a replacement. His visual systems are still a failure compared to lidar-based ones. Just see how well other self-driving systems are doing in comparison.
Everyone is harping on the engineering, but it is a marketing reason first and foremost. Lidar units are ugly as hell. No consumer would ever buy one of those Waymo Jaguars, even if it's better, if Tesla can do 95% of that without looking like a beluga whale.
I guess I just have to accept that for the foreseeable future, any article in any way related to Elon Musk will result in a lot of angry low quality comments that get lots of upvotes.
Instead of critiquing the article for its liberal use of words like "overwhelmingly", "unique", and "100% of Teslas" on an n=5 sample, with limited data and a very questionable analysis of the Snohomish accident, we discuss how Musk is a fraud.
I'm a roboticist who learned from researchers involved in the 2007 DARPA Urban Challenge. The key takeaway from that event was that every single car that finished the race was equipped with the 3D Velodyne LIDAR. It was that technology that enabled driverless cars to work as a concept.
Why? Because it provided information that people had to infer, and that you couldn't easily get from a camera. So absent the human inference engine that allowed human drivers to work, we would have to rely on highly precise measurement instruments like LiDAR instead.
Musk's huge error was in thinking "Well humans have eyes and those are kind of like cameras, therefore all you need are cameras to drive"
But no! Eyes are not cameras, they are extensions of our brains. And we use more than our eyes to navigate roads, in fact there's a huge social aspect to driving. It's not just an engineering challenge but a social one. So from the get-go he's solving the wrong problem.
I knew this guy was full of it when he started talking about driverless cars being 5 years out in 2015. Just utter nonsense to anyone who was actually in that field, especially if he thought he could do it without LiDAR. He called his system "autopilot" which was deceptive, but I was completely off him when he released "full self driving - beta" onto public streets. Reckless insanity. What made me believe he is criminally insane is this particular timeline (these are headlines, you can search them if you want to read the stories):
2016 - Self-Driving Tesla Was Involved in Fatal Crash, U.S. Says
2016 - Tesla working on Autopilot radar changes after crash
2017 - NTSB Issues Final Report and Comments on Fatal Tesla Autopilot Crash
2019 - Tesla didn’t fix an Autopilot problem for three years, and now another person is dead
2021 - Inside Tesla as Elon Musk Pushed an Unflinching Vision for Self-Driving Cars
2021 - Tesla announces transition to 'Tesla Vision' without radar, warns of limitations at first
2022 - Former Head Of Tesla AI Explains Why They’ve Removed Sensors; Others Differ
2022 - Tesla Dropping Radar Was a Mistake, Here is Why
2023 - Tesla reportedly saw an uptick in crashes and mistakes after Elon Musk removed radar from its cars
2023 - Elon Musk Overruled Tesla Engineers Who Said Removing Radar Would Be Problematic: Report
2023 - How Elon Musk knocked Tesla’s ‘Full Self-Driving’ off course
2023 - The final 11 seconds of a fatal Tesla Autopilot crash
Now I get to add TFA to the chronicle.
The man and his cars are a menace to society. Tesla would be so much further along on driverless cars without Musk.
I mean, what does lidar do but tell you distance? A rangefinder optic from 1940 can also tell you distance, with two little offset windows, purely optically and accurately. Millions of rolls of film were shot using this principle of optical distance finding. And yet this is no good now because, IMO, it's Musk's idea, and that's enough to poison it among the armchair engineers more than any other reason. Telling, on this point, is how everyone just parrots the same line that optical self-driving is bad without actually providing evidence to support it, just arguing from precedent established in forums and social media.
Tesla's camera-based self-driving system is still likely at least five years away from being ready, Hall said.
Still, he said, "Elon Musk is right about not needing lidar."
https://www.bizjournals.com/sanjose/news/2022/11/09/heres-wh...
This problem was solved more than a decade ago by radar sensors (standard on many mid-range cars at the time). They detect imminent collisions with almost perfect accuracy and very few false positives. Having better sensor data is always going to beat trying to massage crappy data into something useful.
Radars are not as good as you think. They generally can't detect stationary objects, have problems with reflections, most of them are VERY low resolution, and so on.
The "with almost perfect accuracy and very few false positives" part is not true.
If you look at EuroNCAP data, you'll see that most cars are not close to 100 in the Safety Assist category (and Teslas, with just vision, are among the top). And the EuroNCAP tests are fairly easy and idealized. So it's clearly not a solved problem, as you portray.
https://www.euroncap.com/en/ratings-rewards/latest-safety-ra...
> They generally can't detect stationary objects, have problems with reflections, most of them are VERY low resolution, and so on.
Radar can absolutely detect a stationary object.
The problem is not "moving or not moving"; it's "is the energy reflected back to the detector?", as alluded to by your second qualification.
So something that scatters or absorbs the transmitted energy is hard to measure with radar because the energy doesn't get back to the detector completing the measurement. This is the guiding principle behind stealth.
And, as you mentioned, things with this property naturally occur. For example, trees with low hanging branches and bushes with sparse leaves can be difficult to get an accurate (say within 1 meter) distance measurement from.
Backing this up: automotive radar uses a band at ~80 GHz. The wavelength is ~3.7 millimeters, which lets you get incredible resolution. Not quite as good as the TSA airport scanners that can count your moles through your shirt, but good enough to see anything bigger than a golf ball.
For a long, long time automotive radar was a pipe-dream technology. Steering a phased array of antennas means delaying each antenna by ten-thousandths of a wave period, and dynamically steering means being able to adjust those timings on the fly! [1] You're approaching picosecond timing, and doing that with tens or hundreds of antennas. Reading that data stream is still beyond affordable technology: sampling 100 antennas 10x per period at 16-bit precision is 160 terabytes per second, 100x more data than the best high-speed cameras. And since the Fourier transform is O(n log n), that's tens of petaflops to transform. Hundreds of 5090s, fully maxed out, before even running object recognition.
Obviously we cut some corners instead. Current techniques way underutilize the potential of 80 GHz. Processing power trickles down slowly and new methods are created unpredictably, but improvement is happening. IMO radar has the highest ceiling potential of any of the sensing methods, it's the cheapest, and it's the most resistant to interference from other vehicles. Lidar can't hop frequencies or do any of the things we do to multiplex radar.
[1]: In reality you don't scan left-right-up-down like that. You don't even use just an 80 GHz wave, or even just a chirp (a pulsing wave that oscillates between 77-80 GHz). You direct different beams in all different directions at the same time, and more importantly you listen from all different directions at the same time.
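For what it's worth, the data-rate arithmetic above checks out; a quick sanity check using the comment's own assumed sample counts and precision:

    f_hz = 80e9                 # ~80 GHz carrier
    samples_per_period = 10
    bytes_per_sample = 2        # 16-bit precision
    antennas = 100

    per_antenna = f_hz * samples_per_period * bytes_per_sample  # bytes/sec
    print(per_antenna * antennas / 1e12, "TB/s")  # -> 160.0 TB/s, as claimed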
Agreed. Also, the fact that current automotive radars return a point cloud (instead of, say, a volumetric density grid) is sad. But it will be a while before processing power can catch up, and by the time you have the equivalent of hundreds of 5090s on your car, you will also be able to drive flawlessly by running a giant transformer model on vision inputs.
(Also I wouldn't say it's _irrelevant_ that they don't have lidar, as if they did it would cover some of the same weaknesses as radar.)
This isn't true. You can try using adaptive cruise control with lane-keeping on a radar-equipped car on an undivided highway. Radar is good at detecting distance and velocity, but can't see lane lines. In order to prevent collisions, you would need to know precisely the road geometry and lane positions, which may come from camera data, and combine that information with the vehicle information.
The analysis is useless if it doesn't account for the base rate fallacy (https://en.m.wikipedia.org/wiki/Base_rate_fallacy)
The first thing I thought before even reading the analysis was "Does the author account for it?" And indeed he makes no mention that he did.
So after reading the whole article I have no idea whether Tesla's automatic driving is any worse at detecting motorcycles than my Subaru's (which BTW also uses only visual sensors).
Antidisclaimer: I hate both Teslas and Musk. And my hate for one is not tied to the other.
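A toy illustration of the base-rate point (the 80% mileage share is invented for the example; nobody publishes the real figure):

    tesla_share_of_adas_miles = 0.80  # hypothetical exposure share
    total_fatalities = 5
    # Under an equal-risk null, the expected Tesla share of fatalities
    # simply mirrors its share of miles driven:
    print("expected Tesla fatalities:", tesla_share_of_adas_miles * total_fatalities)
    # Raw counts alone can't separate "more dangerous" from "more miles driven".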
The base rate was discussed early in the article, but not by that name:
> It’s not just that self-driving cars in general are dangerous for motorcycles, either: this problem is unique to Tesla. Not a single other automobile manufacturer or ADAS self-driving technology provider reported a single motorcycle fatality in the same time frame.
That gives you absolute rate, but not relative rate.
There are not many other cars out there (in comparison) with a self-driving mode. There are so many Teslas out in the world driving around that I think you'd have to considerably multiply all the others combined to get close to that number.
As such, while 5 > 0, and that's a problem, what we don't know (and perhaps can't know) is how that adjusts for population size. I'd want to see a motorcycle-fatality rate per auto-driver-mile, and even then I'd want it adjusted for the prevalence of motorcycles in the local population: the numbers in India, Rome, London, and Southern California vary quite a bit.
To take a hypothetical extreme: If all cars but one on the road were Teslas, it would not be meaningful to point out that there have been far more fatalities with Teslas.
Even more illustrative, if 10 people on motorcycles had died from Teslas, and 1 person had died from that sole non-Tesla, then that non-Tesla would be deemed much, much more dangerous than Tesla.
It is a bad article.
Another part of the problem: Waymo, for example, doesn't provide motorcycle specific stats. They only provide collisions with vehicles, bicycles, and pedestrians. There's no breakdown of vehicle type. So the basis of this article is already bullshit and likely done just for "space man bad" reasons
It is worse than that. The other ADAS providers do not all automatically report this stuff. He's comparing five meticulously gathered reports to a self-reporting system that drops the vast majority of incidents.
People will gobble up all kinds of bad articles to reinforce their anti-Tesla bias. This reminds me of the "iSeeCars" study that somehow claimed that Teslas have the highest fatality rate per mile travelled, even though:
* They basically invented the number of miles travelled, which is off by a large factor compared to the official figure from Tesla
* If you take into account the fact that the standard deviation is proportional to the square root of the number of fatal accidents, the comparison has absolutely no statistical significance whatsoever
There are 5 dead people who would be alive today had Tesla not pushed this to the market.
I don't like Tesla, and the premature "FSD" announcement was a huge setback for AV research. An AV without lidar killing motorcyclists is not surprising, to say the least. And this is a damning report.
That said -- and I might have missed this if it was in the linked sources, I'm on mobile -- what is the breakdown of other (supposed) AVs adoption currently? What other types of crashes are there? Are these 5+ fatalities statistically significant?
Doesn't give the number of driving hours for Tesla vs others, though.
I’m in the same camp. I think self driving shouldn’t be allowed as it currently stands. But, this is probably the XKCD heatmap phenomenon.
How many other self-driving vehicles are on the road vs Tesla? What percentage of traffic consists of motorcycles in the places where those other brands have deployed vs. in Florida, etc.?
https://www.xkcd.com/1138/
Prediction: over the next 4 years we're going to see lots of stories like this. Some of the stories will be fair, some won't. "Musk bad" stories get clicks. The next administration will be very anti-Tesla. Coincidentally, by that time self-driving will be considered mature enough to warrant proper regulation, rather than the experimental regulation status we have now.
Combine the two, and the regulations will be written such that it excludes Tesla and includes Waymo. Not by name, just that the safety regulations will require a safety record better than Tesla's but worse than Waymo's. Likely nobody but Waymo will have that record, and now nobody will be able to because they won't have access to the public roads to attain it.
This might be the ultimate regulatory lock-in monopoly we've ever seen.
> Combine the two, and the regulations will be written such that it excludes Tesla and includes Waymo.
The solution seems easier, if only the regulators would pick up on it.
Under the current human-driven auto regime, it is the human operating the machine who is liable for any and all accidents.
For a self-driving car, that human driver is now a "passenger". The "operator" of the machine is the software written (or licensed) by the car maker. So the regulation that assures self-driving is up-to-snuff is:
When operating in "self driving" mode, 100% of all liability for any and all accidents rests on the auto manufacturer.
The reason the makers don't seem to care much about the safety of their self driving systems is that they are not the owners of the risk and liability their systems present. Make them own 100% of the risk and liability for "self driving" and all of a sudden they will very much want the self-driving systems to be 100% safe.
Being both the owner and operator, Waymo already has full liability. It's a good proposal, but I don't think it's sufficient if your secondary goal is "screw Elon".
Nor is it sufficient to ensure that self driving is significantly safer than human drivers. I don't think the public wants "slightly safer than humans".
Seems like the obvious solution to this is: if you collect driving data on public highways, the data has to be made available to the public. If you collect the data on private highways, you are free to keep it private. If you don't intend to use it in a product on public highways, it can remain private.
Doesn't even seem that crazy when you consider the government is already licensing them to be able to use their private data anyway. Biggest issue is someone didn't set it up this way from the start.
If a competitor resold their system to other car companies, another possible scenario might be a duopoly like Apple versus Android.
By that time Waymo will likely be over a billion miles of data, and you're likely going to need similar amounts of mileage to prove that your safety margin is >> 10X better than human.
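A rough back-of-envelope for why the mileage bar is that high, using the "rule of three" for zero-event confidence bounds (the human fatality rate is an approximate US figure; the 10x margin is the parent's):

    human_rate = 1.2e-8            # ~1.2 fatalities per 100M vehicle miles (approx.)
    target_rate = human_rate / 10  # "10X better than human"

    # Rule of three: zero fatalities in N miles bounds the true rate below
    # ~3/N at 95% confidence, so you need N = 3 / target_rate miles:
    print(3 / target_rate / 1e9, "billion fatality-free miles")  # -> 2.5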
Hmm, but all the real, registered self-driving vehicles in California are safer than human drivers; and since that data is better documented, we know it with more confidence than we can when comparing human drivers to each other.
The regulations are doing really well, it’s a big victory for regulators, why not make Teslas abide by the same rules? Why not roll out such strict scrutiny gradually to all vehicles and drivers?
You are talking about regulatory decrees that are about safety. It seems that what lawmakers change is reactive to other things, like how much the community depends on cars to survive. If you cannot eliminate car dependence, you can't really achieve a more moral legal stance than "people can and will buy cars that kill other people, so long as it doesn't kill the driver."
In fairness to the regulators they have been pretty reasonable so far.
> Not by name, just that the safety regulations will require a safety record better than Tesla's but worse than Waymo's.
That's not a bad thing if Tesla is significantly worse than Waymo. That's desirable.
The solution here seems like it would be for Tesla to become as safe as Waymo. If they can't achieve that, that's on them. Unfair press doesn't cause that.
I mean, I care about not dying in a car accident. If Tesla is less safe, and this leads to people taking safer Waymos instead, I can't see that as anything but a good thing. I don't want to sacrifice my life so another company can put out more dangerous vehicles.
I see a lot of people saying that this isn't statistically significant. I think that is probably true, but I also think it is important to do the statistical test to make sure:
tesla.mult is how many times more total miles Teslas have driven with level-2 ADAS engaged compared to all other makers. We don't have data for what that number should be because automakers are not required to report it. I think that it is probably somewhere between 1/5 and 5. If you believe that the number is more than 1, then the result is not statistically significant.
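The comment's actual test didn't survive formatting, so here is a minimal reconstruction of what such a test could look like (the conditional-binomial model is my assumption, with tesla_mult standing in for the comment's tesla.mult):

    # 5 level-2 ADAS motorcycle fatalities on record, all of them Tesla.
    # If Tesla drove tesla_mult times the ADAS miles of everyone else
    # combined, then under an equal-risk null each fatality is a Tesla
    # with probability tesla_mult / (1 + tesla_mult).
    for tesla_mult in (0.2, 1.0, 2.0, 5.0):
        p_tesla = tesla_mult / (1 + tesla_mult)
        p_value = p_tesla ** 5  # one-sided P(all 5 are Tesla | null)
        print(tesla_mult, "->", round(p_value, 3))
    # 0.2 -> 0.0 ; 1.0 -> 0.031 ; 2.0 -> 0.132 ; 5.0 -> 0.402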
Even though other manufacturers may not be reporting these numbers, Level 2 ADAS systems are pretty common as far as I can tell. Wouldn't any vehicle with adaptive cruise control and lane-keep assist be considered Level 2 ADAS?
I’m not quite sure where the line is between Level 1 and Level 2 ADAS. Wikipedia says this:
> ADAS that are considered level 1 are: adaptive cruise control, emergency brake assist, automatic emergency brake assist, lane-keeping, and lane centering. ADAS that are considered level 2 are: highway assist, autonomous obstacle avoidance, and autonomous parking.
https://en.m.wikipedia.org/wiki/Advanced_driver-assistance_s...
I think that Level 2 requires something more than adaptive cruise control and lane-keep assist, but that several automakers have a system available that qualifies.
My intuition is that there are more non-Tesla cars sold with Level 2 ADAS, but Tesla drivers probably use the ADAS more often.
So I don’t have high confidence what tesla.mult should be. I wish that we had that data.
Hmm. The article's source is NHTSA data that goes up through February 2025 -- pretty recent.
The article cites 5 motorcycle fatalities in this data.
Four of the five were in 2022, when Tesla FSD was still in closed beta.
The remaining incident was in April 2024.
(The article also cites one additional incident in 2023 where the injury severity was "unknown", but the author speculates it may have been fatal.)
I dunno, to me this specific data suggests a technology that has improved a lot. There are far more drivers on the road using FSD today than there were in 2022, and yet fewer incidents?
But this seems like a pretty legitimate accusation, and certainly a well researched write-up at the very least.
The only criticism I could leverage is that the difference between five and zero incidents is very hard to extrapolate information from.
The author kind of plays this up a bit by insinuating that there are incidents we don't know of, and they probably aren't wrong that if there are five fatalities there are going to be many more near misses and non-fatal fender bender collisions.
But for the number of millions of miles on the road covered by all vehicles, extrapolating from five incidents is doing a lot of statistical heavy lifting.
What gets me is that there are no other brands in Tesla's league. Tesla is the only consumer car that has "FSD" level ability.
The competitors have to use pre-mapped roads, and availability is spotty at best. There is also risk, as Chevy already deprecated their first-gen "FSD", leaving early adopters with gimped ability and shut out from future expansions.
You mean beta-testing on millions of users? Yes, traditional manufacturers are very wary of class-action suits and generally have some reputation to uphold. Tesla, not so much... move fast, break things, kill a few people, who cares; current profit is all that matters.
Level 4 is a commercially viable product. Mapping allows verification by simulation before deployment. Tesla offers level 3, which is not monetizable beyond being a gimmick.
Tesla has already said that some of its vehicles, sold with "all the hardware necessary for FSD", will never get it.
There is no statistical evidence cited to show that there really is a difference. And there is no data at all showing the rate of these crashes vs. non-self-driving cars.
Without knowing how many FSD miles Teslas have driven compared to other brands, it's hard to judge. It could just be that Tesla owners are far more likely to trust and use their cars' FSD capabilities, and thus end up in more FSD accidents. Other brands might have FSD so bad that people simply don't trust it and basically never use it, and thus never get into an FSD accident.
I don’t see any way you can spin this in a positive light. Yeah, there may be many more FSD miles on Tesla, but if that leads to a bunch of motorcyclists getting hit, then maybe that’s exactly the problem.
We know this is one of the core issues of Tesla FSD: its capabilities have been hyped and over promised time and time again. We have countless examples of drivers trusting it far more than they should. And who’s to blame for that? In large part the driver, sure. But Elon has to take a lot of that blame as well. Many of those drivers would not have trusted it as much if it wasn’t for his statements and the media image he has crafted for Tesla.
> Other brands might have FSD so bad that people simply don't trust it and basically never use it
The issue is not quite how good the automation is in absolute terms, it's how good it is vs. how it is sold. Tesla is an outlier here, right down to the use of the term "FSD" i.e. "Full Self-Driving", when it's nothing of the sort.