Readit News
Animats · 6 years ago
What gets me is the obsession with "identifying objects". The first thing you want for self-driving is an elevation map of what's ahead. If it's not flat, you don't go there. You don't need to know what it is. This is what LIDAR is good at.

Radar is not yet usable for ground profiling. Radar returns from asphalt at an oblique angle just aren't very good. Nor is there enough resolution to see even large potholes. Maybe someday, with terahertz radar. Not yet.

Now, you can go beyond that. If you're following the car ahead, and it's moving OK, you can assume that what they just drove over was flat. If the road far ahead looks like the near road, and the elevation map of the near road says it's flat, you can perhaps assume that the road far ahead is flat, too. That's what the Stanford team did in the DARPA Grand Challenge.
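A minimal sketch of that flatness test, assuming a lidar-derived elevation grid; the grid layout and step threshold are illustrative, not from any production system:

```python
import numpy as np

def drivable_mask(elevation, max_step=0.08):
    """Mark grid cells as drivable if local height changes are small.

    elevation: 2D array of heights (m) on a regular grid ahead of the car.
    max_step:  largest tolerated height change between neighbors (m) --
               an assumed threshold; real systems tune this carefully.
    """
    dz_x = np.abs(np.diff(elevation, axis=0))  # steps between rows
    dz_y = np.abs(np.diff(elevation, axis=1))  # steps between columns

    mask = np.ones(elevation.shape, dtype=bool)
    # A cell bordering a big step is not flat, so we don't go there --
    # no need to know what the obstacle actually is.
    mask[:-1, :] &= dz_x <= max_step
    mask[1:, :] &= dz_x <= max_step
    mask[:, :-1] &= dz_y <= max_step
    mask[:, 1:] &= dz_y <= max_step
    return mask
```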

Identifying objects is mostly for things that move. Either they're moving now, or they're of a type that's likely to move. This is where the real "AI" part comes in - identifying other road users and trying to predict or reasonably guess what they will do.

Collision avoidance based on object recognition has not worked well for Tesla. They've hit a street sweeper, a fire truck, a crossing tractor-trailer, a freeway barrier, and some cars stalled on freeways. All big, all nearly stationary. This is the trouble with "identify, then avoid".

TeMPOraL · 6 years ago
Even when trying to identify objects, I don't get the push for trying to understand what they are. The way I see it (and I might be very wrong here), you should be able to get good enough results by identifying things around you that are solid (that's the most important part), and tracking their velocity. This should be doable without any kind of understanding of what the objects are. Then the car's control system should keep the speed and direction such that the car can always be stopped before any of the tracked objects hit it.

Having done that, you can play with identifying lanes of traffic and sidewalks and other "normal" features, and potentially ignoring objects there as long as they behave according to expectations. But I'd think the first order of business would still be ensuring that you don't run into solid objects, whatever they are, and whether or not they're moving.
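A rough sketch of that control rule, with made-up numbers for braking capability and reaction latency (both are assumptions for illustration; a real system would use a far more careful relative-motion model):

```python
def speed_is_safe(tracked, max_decel=6.0, latency=0.3):
    """Check that we can stop before any tracked solid object reaches us.

    tracked:   list of (gap_m, closing_speed_m_s) for solid objects;
               closing_speed > 0 means the gap is shrinking.
    max_decel: assumed braking capability (m/s^2) -- illustrative value.
    latency:   assumed perception-to-brakes delay (s) -- illustrative value.
    """
    for gap, closing in tracked:
        if closing <= 0:
            continue  # gap growing on current velocities; not a threat yet
        # Distance eaten while we react, plus distance to brake the
        # closing speed down to zero.
        needed = closing * latency + closing**2 / (2 * max_decel)
        if needed >= gap:
            return False  # too fast for this object; slow down or steer away
    return True
```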

tgb · 6 years ago
Some things you need to identify and understand: lane markings, police officers, temporary road signs, construction workers guiding traffic, traffic lights, someone waiting at the "yield to pedestrians" crosswalk, emergency response vehicles, ice patches. Just "not running into things" isn't enough to drive on roads. Really, you should be trying to understand every vehicle: naively extrapolating velocity is insufficiently cautious, since it doesn't account for future acceleration that can cause a crash.
coldtea · 6 years ago
>Even when trying to identify objects, I don't get the push for trying to understand what they are.

Without that you don't know lots of things.

1) Whether they might move or are completely stationary (e.g. a pole vs a motorcycle).

2) How they move.

3) Which you're better off hitting if you need to swerve to avoid another car (is it better to hit the fruit stand or the 10-year-old boy?)

4) Whether they tell you something (e.g. traffic signs, traffic lights, a traffic cop directing you elsewhere, for starters).

5) Whether they represent some danger and you need to keep a distance (e.g. a bus with open doors, from where someone might come out at any minute).

miahi · 6 years ago
Identifying objects means you not only know their current position and speed, you can also anticipate their possible speed and trajectory changes.

londons_explore · 6 years ago
You're discussing "drivable area detection".

So far, most manufacturers' cars have no problem with this. Maps and localisation are very, very effective for it, and both camera- and lidar-based systems are pretty good at it. Nearly all players in the self-driving world use a combination. Overall, it's pretty much a solved issue.

Object recognition, behaviour prediction, object interactions, etc. are the remaining unsolved issues, and that's why people talk about them more.

Animats · 6 years ago
> So far, most manufacturers' cars have no problem with this.

Not Tesla.

Tesla hitting construction barricade.[1]

Tesla hitting freeway offramp divider.[2]

[1] https://www.youtube.com/watch?v=-2ml6sjk_8c

[2] https://www.theregister.co.uk/2018/06/07/tesla_crash_report/

hoseja · 6 years ago
Terahertz radar smoothly transitions into lidar as the frequency increases.
Animats · 6 years ago
In theory, yes. In practice, terahertz RF technology isn't here yet. The first terahertz amplifier was made in 2014, it was DARPA-funded, and it's not a simple semiconductor device.
loourr · 6 years ago
This author missed Musk's point entirely. His argument is that to solve self-driving you need a deep understanding of your surroundings, which you can only achieve with visible-light video. That's the real hard problem to solve; you need cameras to solve it, and if you do, lidar becomes unnecessary.

The "doomed" part is that if companies are spending all of their energy on creating neural nets around lidar, they'll reach a local maximum where they never begin to tackle the much more difficult problem truly needed for self-driving.

cromwellian · 6 years ago
Seems to me that "deep understanding of surrounding" and "only achievable with visible spectrum" are contradictory. Visible light is readily attenuated, occluded, and reflected.

The first time Tesla runs over a kid chasing a ball into the street because it couldn't see him between the cars, this will be readily apparent.

Seems to me that Tesla is in the business of selling cars, while other self-driving companies are interested in AVs for ride sharing or trucking. The latter have different requirements for styling and cost than the consumer case, so Musk has several constraints on the sensor suite he can include in a Tesla.

What he's doing is trying to argue a $5k system with cheap cameras and crappy radar coverage is all that is needed, because a full no-blind-spot multi-spectrum system would both cost too much AND likely make the car look ugly.

Two people have already been killed, and several injured, by Tesla autopilot due to blind spots.

davidgould · 6 years ago
Can you explain how a lidar sees a child hidden between cars? I was under the impression that lidar was line of sight.
moduspol · 6 years ago
The things detectable in the visible spectrum are what humans use to drive.

Will it be apparent how fundamentally problematic this is when a human runs over a kid chasing a ball into the street because it couldn't see him between the cars?

How many people have been killed by human drivers due to blind spots?

lolc · 6 years ago
Exactly. Until a car with reliable object permanence is demonstrated, Tesla must tone down their promises. This LIDAR controversy is just a sideshow, though a car having it will be able to outperform a car without it in many scenarios. An improvement over baseline human perception is very welcome.
AndrewBissell · 6 years ago
> His argument is that to solve self-driving you need a deep understanding of your surrounding which you can only achieve with visible light spectrum video. That's the real hard problem to solve

Musk's argument is more that cameras should be sufficient because humans can drive using only two eyes to perceive the driving environment. He always neglects to mention that humans do this with a combination of sight and a brain capable of general intelligence. I'm sure it's true that if Tesla invents AGI, self-driving with just cameras will become tractable. But "real hard problem to solve" doesn't begin to capture the difficulty.

In reality, since no one has yet invented a self-driving computer, it's impossible to say what components are necessary or even whether there may be more than one way to skin the cat. But one source we should probably take with a grain of salt on this issue is those (like Musk) with an intense commercial interest in one perspective.

kjksf · 6 years ago
You could have made the same argument about any hard problem before it was solved.

"So this guy says a machine can outplay chess with just a CPU, some memory and a bit of code. He neglects to mention that humans have a brain capable of general intelligence".

Years later, a computer beats the human champion at chess.

"So this guy says you can teach a computer to play Go just by unsupervised training of neural networks. He neglects to mention that humans have a brain capable of general intelligence".

Years later, a computer beats the human champion at Go.

"So this guys says you can program neural network to play computer games competitively using vision and deep learning. He neglects to mention that humans have a brain capable of general intelligence".

We don't need AGI for self driving.

The difficulty of self-driving is probably no greater than what a dog manages walking down the street.

No, a dog doesn't steer a car, because he doesn't have hands, but he's performing the same vision and planning tasks as a human (or AI) driving a car.

He knows where he is, he knows where he wants to go, and he uses vision and his non-AGI brain to plan a path to get there while avoiding dynamic, unexpected obstacles.

bob457 · 6 years ago
It seems to me that one of the difficulties we have with making robots is trying to model them too closely after ourselves. We don't have a humanoid maid, but we do have a Roomba; similarly for lots of industrial automation. Home automation doesn't look like C-3PO walking around your house flipping light switches.

It seems like maybe not the best reasoning to say "humans do it this way, so that's how my robot should do it."

rhacker · 6 years ago
Even if that is his point, that's making a lot of assumptions.

I don't know why Tesla's Autopilot keeps missing obvious impervious, occlusive surfaces, but since detecting exactly those surfaces is what lidar excels at, it's kind of making the point for the other side.

JibJabDab · 6 years ago
Cameras also can't really see around objects (in front of the car, for instance). Lidar can. With cameras, it's as if the goal is "to mimic human vision". That's fine and all, but why can't we make it "beyond human vision"?
davidgould · 6 years ago
I’m really curious to learn about how lidar can see around things? You’re the second person to make this claim in this thread and I’ve never heard of it. Please explain or provide a link or something.
KaiserPro · 6 years ago
> which you can only achieve with visible light spectrum video

Well, no.

RGB is really useful. But actually, you can get pretty good object recognition from a point cloud alone. I mean, it's better to have RGB as well, but infrared works just as well.

The problem that appears to escape a lot of the commentary is latency. Sure, you can have a rudimentary stereo camera setup and get _some_ depth information reasonably fast. But it won't be good enough to tell you if that blob that's 100m out is stationary or moving towards you.

Lidar gives you a high-resolution, long-range 3D point cloud at 30 Hz (or faster). The best, most reliable depth from monocular/stereo will have a latency of at least 150 ms and a tiny resolution.
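Back-of-envelope, using the numbers in this comment, that latency gap is several meters of travel at highway speed:

```python
speed = 30.0            # m/s, roughly highway speed
stereo_latency = 0.150  # s, the ~150 ms stereo-depth latency claimed above
lidar_latency = 1 / 30  # s, one sweep of a 30 Hz lidar

print(speed * stereo_latency)  # ~4.5 m traveled before the depth estimate lands
print(speed * lidar_latency)   # ~1.0 m for the lidar sweep
```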

The chances are that we will have sub-$100 CCD-based lidar before we have low-noise/low-latency/full-resolution depth from monocular/stereo cameras.

The other big issue is that to get decent high-res depth from deep learning, you need decent segmentation. Segmentation comes for free with lidar (assuming you overlay RGB on it).

> spending all of their energy on creating neural nets around lidar then they'll reach a local maximum where they never begin to tackle the much more difficult problem truly needed for self-driving.

This does not make all that much sense. You don't just train on lidar, you feed in steering, acceleration, braking, gears, signs, radar, pretty much everything.

The other important thing to note is that Tesla's stuff is still Level 2. Volvo, BMW, and a few truck companies are all at least Level 4. We are celebrating a "genius" who has yet to actually release a system that does what he claims it should.

mc_blue · 6 years ago
Can you give examples of specific vehicles from Volvo/BMW/truck companies that are at level 4?
threeseed · 6 years ago
Musk's argument was refuted by his own data scientists.

They admitted their own models are far from perfect and will likely never be. The concerning one in particular was the "is this a large object" model, which initially failed to identify objects such as car-carrying trucks, cranes, etc.

With lidar you can at least be certain that it will identify an obstacle.

Sean1708 · 6 years ago
> a deep understanding of your surrounding which you can only achieve with visible light spectrum video

Why can you only achieve that using visible light spectrum video?

QuantumGood · 6 years ago
Musk's point has always been to combine vision with radar instead of lidar. I'm amazed that this combination is usually overlooked in discussions of Tesla and lidar.
BluSyn · 6 years ago
Exactly. What is rarely mentioned is his exact quote on the reasoning for radar vs. lidar:

“If you’re going to use active photon generation, don’t use visible wavelength, because with passive optical you’ve taken care of all visible wavelength stuff. You want to use a wavelength that’s occlusion-penetrating like radar. LIDAR is just active photon generation in the visible spectrum.”

This article is still missing the point when talking about redundancies. LIDAR only works in essentially perfect weather ("not occlusion-penetrating"). Even if it serves only as a "redundancy" there's no advantage in relying on a sensor suite that operates in a less-safe mode in the most adverse road conditions. So basically if you are driving in snow or fog, your LIDAR-based AV has to fall back to Radar+Cameras. If that system can pass all the safety tests in the worst-case road condition then there is no value in the additional sensors that add expense but no safety margin.

What's even more overlooked is power consumption. LIDAR is far more power intensive, especially when we're talking about multiple packages per vehicle. In the future world of Autonomous Electric Vehicle Fleets, the vehicles using LIDAR will get significantly less range efficiency than their radar counterparts and cost significantly more to build. In a fleet scenario where every margin counts this will result in a significant economic pressure to ditch LIDAR.

So in the end I think Elon will be proved right. Those currently investing in LIDAR-based systems will eventually ditch it for purely practical economic reasons. Those that don't will be completely destroyed in the open market.

The real competitive advantage for AEVs is in the software, not hardware. LIDAR is a crutch for bad software that reaches a theoretical maximum far short of what is needed for economically viable Level 5 autonomy.

I'll restate this clearly: there's simply no economic or technological advantage to using LIDAR for AEVs.

joshuamorton · 6 years ago
Lidar doesn't use visible-spectrum light; it's usually infrared, so Musk's quote makes no sense.

You're making a lot of strong assumptions to draw your conclusions: that commodity cameras and radar can compete on measurement accuracy with lidar systems, and that lidar costs won't decrease with additional investment (we've already seen costs decrease by something like 10x in less than a decade). The power questions also aren't cut and dried: if you need extra in-vehicle GPUs to support the radar+camera approach, you may well be using more power than a lidar-based approach.

There's also no real requirement that AVs operate in snow or dense fog. Those are only considerations in certain climates in certain seasons. You don't actually need the safety system to pass the safety tests (that don't currently exist) in worst case conditions if the vehicle works anyway. Why optimize for the worst case first?

I'll respond clearly: We're multiple computer vision leaps forward away from what Elon needs for success. They're easily half a decade behind Lidar based systems. And people die as a consequence of putting those systems on the road.

stefco_ · 6 years ago
> So in the end I think Elon will be proved right. Those currently investing in LIDAR-based systems will eventually ditch it for purely practical economic reasons. Those that don't will be completely destroyed in the open market.

High quality cameras are insanely complex pieces of electronics and optics. Ditto for processors capable of doing quality image recognition. Large scale manufacturing has made them cheap nonetheless. LIDAR is relatively niche, but if it proves useful to deploy it at scale, I'd expect costs to drop very significantly. The underlying technology uses very simple physics (relative to the algorithmic complexity of image recognition); seems like a solid basis to build a sensor off of.

> LIDAR is a crutch for bad software

You could invert this and say that high precision image recognition is a crutch for ill-suited hardware. The final combination is a product of hardware and software. If LIDAR is currently too expensive or energy-intensive to compete cost-wise at acceptable safety levels, that's one argument, but saying LIDAR is a crutch is just moving the goalposts from "good system" to "cheap hardware".

[edit] Also, just want to point out that RADAR resolutions are way too low to operate a vehicle safely (never mind road signage or other things).

stefan_ · 6 years ago
If all you use is words, it is of course easy to omit that the spatial resolution of radar is barely enough to tell whether there are one or two vehicle-sized objects in front of you, maybe one to the side, and they had better be moving.

To compare it to LIDAR is ludicrous.
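To put rough numbers on that: real-aperture cross-range resolution is about range × wavelength / antenna width. A sketch with plausible automotive values (the aperture and range are assumptions for illustration):

```python
c = 3e8             # speed of light, m/s
f = 77e9            # common automotive radar band, Hz
wavelength = c / f  # ~3.9 mm

aperture = 0.10     # m, a generous bumper-mounted antenna width (assumed)
rng = 100.0         # m, distance to the object (assumed)

cross_range = rng * wavelength / aperture
print(cross_range)  # ~3.9 m: two adjacent cars can blur into one return
```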

dmix · 6 years ago
Ford did some research to improve Lidar for use in the rain/snow using a filtering algorithm.

> Ford’s autonomous cars rely on LiDAR sensors that emit short bursts of lasers as they drive along. The car pieces together these laser bursts to create a high-resolution 3D map of the environment. The new algorithm allows the car to analyze those laser bursts and their subsequent echoes to figure out whether they’re hitting raindrops or snowflakes.

https://qz.com/637509/driverless-cars-have-a-new-way-to-navi...
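A toy version of that kind of filter, assuming a lidar that reports multiple returns per pulse with intensities; the data layout and threshold are invented for illustration, and Ford's actual algorithm isn't public in this detail:

```python
def drop_precipitation(pulses, min_intensity=0.15):
    """Filter out lidar returns that look like raindrops/snowflakes.

    pulses: list of pulses, each a list of (range_m, intensity) returns
            ordered nearest-first. The threshold is an assumed value.
    """
    kept = []
    for returns in pulses:
        for i, (rng, intensity) in enumerate(returns):
            # A faint early return with a stronger echo behind it is treated
            # as precipitation in front of a real surface.
            stronger_behind = any(s > intensity for _, s in returns[i + 1:])
            if intensity < min_intensity and stronger_behind:
                continue
            kept.append((rng, intensity))
    return kept
```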

Still seems less than ideal, but I'm curious whether it will ever get somewhere useful.

raxxorrax · 6 years ago
Aren't there still problems with multiple active sensors sweeping the environment?

I remember it being a problem in cars that used Lidar but cannot find the info anymore.

I think lidar could still be of help, and even perfect software can use any form of sensory redundancy. But I agree that there might be alternatives.

edit: A laser is probably a lot cheaper than a camera and imaging DSPs if comparable production scales are reached.

jayd16 · 6 years ago
>So in the end I think Elon will be proved right. Those currently investing in LIDAR-based systems will eventually ditch it for purely practical economic reasons. Those that don't will be completely destroyed in the open market.

Maybe, but there's no reason to leave a local maximum until you actually have something better.

liability · 6 years ago
Musk badly wants you not to realize that nobody is proposing LIDAR-only; they're proposing LIDAR+optical+radar. Musk argues against straw men.

(Also, the radar Tesla is using has jack-shit for angular resolution. It can't tell the difference between a tree next to the road and a fire truck parked right across it. Consequently that radar has very limited utility.)

kjksf · 6 years ago
More accurately, Musk doesn't care what others think is needed for self-driving, so your aspersions about Musk badly wanting us to think one way or another are not supported by facts.

Neither Tesla nor Musk makes a big deal of the lack of lidar.

The only reason his views on the subject are public (and so hotly discussed) is because during Autonomy Investor Day he was asked by an investor why Tesla doesn't use a lidar.

So he answered the question. You might not agree with his reasoning but he's not on some "NO LIDAR" publicity tour, trying to change your mind.

Here's the source: https://www.youtube.com/watch?v=Ucp0TTmvqOE

Watch the whole thing. The first time Musk mentions lack of lidar is after being asked.

threeseed · 6 years ago
Actually, there was a presentation given by one of their lead data scientists describing their ML architecture. At no point was radar mentioned. They are relying purely on vision to identify cars, obstacles, traffic lights, etc., with dozens of models each focused on one particular 'type'.

Radar, by the sounds of it, is being used purely as a fallback.

The question is: if the vision systems fail to recognise an obstacle at high speed, is the radar long-range enough to compensate in time?

Presentation: https://slideslive.com/38917690/multitask-learning-in-the-wi...

kjksf · 6 years ago
Actually, they did mention radar during Autonomy Investor Day (https://www.youtube.com/watch?v=Ucp0TTmvqOE).

Also, https://www.tesla.com/autopilot says:

"A forward-facing radar with enhanced processing provides additional data about the world on a redundant wavelength that is able to see through heavy rain, fog, dust and even the car ahead."

So it's rather bizarre that you would speculate about how they're not using radar when they explicitly say they do.

llbowers · 6 years ago
I’m sure this is a dumb question to anyone with knowledge in this field but is there any reason to not use all three together?
joshuamorton · 6 years ago
Most of the non-Tesla systems do. Waymo uses lidar, radar, and cameras. The cruise vehicles I see around have lidar and radar as well, and I just assume everyone has cameras because they're cheap and easy to stick somewhere.

To be specific about waymo (to be clear I work at Google, but don't actually have any special info on this), look at the photo in [0]. The cone thing on top is a lidar, but also has cameras in the larger part under the cone. The spinny thing on the front is also probably a lidar. The fin that looks like an extra mirror on the back, and the two on the front have radar. There's also probably a forward facing radar mounted on the nose somewhere near the grille.

[0]: https://waymo.com/

threeseed · 6 years ago
It would make sense to use all three.

Lidar used for long range. Vision used for things like colour recognition, e.g. whether a traffic light is green or red, or whether an ambulance has its sirens on. Radar used for reversing etc., where lidar, given its mounting location, might not be able to see that close.

danepowell · 6 years ago
Price. I worked briefly with teams building self driving cars in the past. Their budget for sensors far exceeded the cost of the car itself.
AndrewBissell · 6 years ago
Currently, LIDAR is very expensive, certainly too expensive to build into every Tesla being manufactured. So Musk would not be able to sell a "full FSD capability" option on his cars if he acknowledged LIDAR is useful/necessary to autonomous driving.

The number one link if you search "Tesla" on HN is "All Tesla Cars Being Produced Now Have Full Self-Driving Hardware." It's been an extraordinarily effective marketing gimmick.

kjksf · 6 years ago
According to a recent interview with pony.ai (https://www.youtube.com/watch?v=0VcpZnIg3M0) the cost to retrofit a car with all necessary sensors is $75k.

A Lidar is a significant part of that.

If a $40k Tesla can do as well as a $40k car + $75k of sensors (including lidar), it's economic game over. Tesla wins by a wide margin.

The $75k will drop in time, but the battle will likely happen before the price of lidars drops significantly enough.

tareqak · 6 years ago
I have no knowledge in the field, but maybe the expense becomes high?
gameswithgo · 6 years ago
Expensive, large, and heavy.
ztjio · 6 years ago
That wouldn't be very good clickbait.
Traster · 6 years ago
I think I would find Musk's claims more compelling if he had actually sat down with an expert and discussed in detail why he believes what he believes. Instead we're sitting here discussing a few kooky quotes with no real analysis. Even on the face of it:

>"They're all going to dump lidar," Elon Musk said at an April event

We know which companies are building self-driving cars, we know what technologies they're using and we know how long they've been working on it. Have we seen any signs that any of these companies are dumping LIDAR? I would've thought it'd be pretty big news right?

coldtea · 6 years ago
>Have we seen any signs that any of these companies are dumping LIDAR? I would've thought it'd be pretty big news right?

In fact, why would they not have a multi-faceted system, keeping LIDAR and alternatives?

CrazyStat · 6 years ago
Because LIDAR is expensive. If camera-based neural networks eventually get good enough that LIDAR provides minimal additional value, they will drop it. This is what Musk is betting on. We're not there yet for sure, and it's not clear to me yet whether that's a realistic goal for the 5-10 year time frame.
leesec · 6 years ago
As I saw in a tweet earlier, "there are no experts in self driving, just people who have failed for different lengths of time".

Also he's saying they're going to have to dump it because it's the wrong approach.

killjoywashere · 6 years ago
Karpathy is leading the ML team and Musk is no slouch. I'm sure he doesn't write code, but he's been through linear algebra, quantum physics, and statistical mechanics. He understands how photons work, how sensors work, how computers work, how the math works. So he can quickly assess the business utility of a proposed solution, or the plan to find solutions. They have more than a couple guys at this caliber. Every person in that presentation was top-of-their-game, mid-career, I'm-not-falling-on-my-sword-for-some-bullshit.
ThatGeoGuy · 6 years ago
> For example, one of the distance estimation algorithms used in the Cornell paper, developed by two researchers at Taiwan's National Chiao Tung University, relied on a pair of cameras and the parallax effect. It compared two images taken from different angles and observed how objects' positions differ between the image—the larger the shift, the closer an object is.

The shift or disparity between sensors doesn't really matter. We've known that wider convergence angles beget better object-point estimation since the 70s. Yet even the KITTI dataset doesn't attempt to take advantage of this, and uses two rather average cameras with a (relatively) short baseline of 0.06 m (see: http://www.cvlibs.net/datasets/kitti/setup.php). That's 6 cm!!! You have the entire width of the car to separate these cameras by.
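The standard photogrammetric error model makes the complaint concrete: stereo depth error scales as Z² · σ_d / (f · B), so it shrinks linearly as the baseline B grows. A quick comparison, with an assumed focal length and pixel-matching error (roughly KITTI-like values):

```python
def depth_sigma(z, baseline, focal_px=700.0, match_sigma_px=0.5):
    """1-sigma depth error of a stereo rig: sigma_Z ~ Z^2 * sigma_d / (f * B).

    focal_px and match_sigma_px are assumed, roughly KITTI-like values.
    """
    return z**2 * match_sigma_px / (focal_px * baseline)

for b in (0.06, 1.5):  # KITTI's 6 cm vs. roughly the width of a car
    print(b, depth_sigma(50.0, b))
# 0.06 m baseline -> ~30 m of depth uncertainty at 50 m range
# 1.5 m baseline  -> ~1.2 m at the same range
```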

> This technique only works if the software correctly matches a pixel in one image with the corresponding pixel in the other image. If the software gets this wrong, then distance estimates can be wildly off.

Again, yeah. But the problem is twofold: you need to detect and match similar points between two images, and the fundamental setup of your system can limit your precision and accuracy. Use a wider-angle lens with better convergent geometry. Publications based on the KITTI dataset don't even address some of the most basic criticisms from photogrammetry.

Which is probably why LiDAR gives such a distinct advantage in most of these data sets. You solve two problems:

1) You solve the correspondence problem trivially, because LiDAR doesn't need to match points between cameras, and there's no baseline/convergence criterion that the final point precision depends on.

2) Robust geometric data is well modelled, well understood, and provides an easier criterion for machine-learning systems (particularly ones running over KITTI, as in the article) to converge on than stereo imagery with a 6 cm baseline. You get the scale of the system for free, and your calibration troubles are whisked away, as LiDAR systems tend to be better calibrated and more stable than most lens systems or configurations you'll find in the cheap off-the-shelf cameras that many autonomous-driving startups are using.

I guess I come off a little negative by looking at this, but my first reaction to Musk saying that nobody should or will want to ever use LiDAR for this is that he doesn't know a damn thing about what he's talking about.

tlb · 6 years ago
A 6 cm baseline is enough for humans to make adequate distance estimates.

Besides the correspondence problem, a longer baseline makes it hard to keep the cameras aligned as the vehicle bounces and flexes. You can't mount them separately to the car -- a chassis can easily twist by a degree or two. So you need a stiff mounting bar between them, which you can either put outside the car like a roof mount (ugly, and it gets buffeted by wind) or inside (also ugly).

flor1s · 6 years ago
Why even limit yourself to two cameras? If I recall correctly multi-view geometry benefits from having as many cameras as possible.

In the future we will all have walls covered with a checkerboard pattern in our garage to calibrate the cameras on our self driving cars. :)
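That joke is not far off how calibration already works; a minimal OpenCV sketch of checkerboard intrinsic calibration (the board geometry and file names are placeholders):

```python
import cv2
import numpy as np

pattern = (9, 6)  # inner corners of the printed checkerboard (placeholder)
square = 0.025    # square size in meters; sets the metric scale

# 3D coordinates of the corners in the board's own plane (z = 0).
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for path in ["calib_01.png", "calib_02.png"]:  # placeholder image names
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Recover the camera matrix and lens distortion from the detections.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print(K)
```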

dreamcompiler · 6 years ago
Great points. It would make perfect sense to have two baselines: One of a few cm for nearby objects and one car-width for good depth resolution of distant objects (which humans can't do, but humans have much better world models than computers, so better depth perception on the part of computers might close that gap a bit.)

I also think lidar or radar will always be necessary. The Tesla fatality last week happened because a big white truck pulled out in front of the car. With a big blank surface, stereo pixel correlation is impossible, but it's trivial for lidar or radar to read such surfaces.

georgeburdell · 6 years ago
The article misses one important point about LiDAR. Frequency-modulated variants, referred to as "FMCW", get velocity information for free via the Doppler effect. You can't get that information from a camera without sophisticated image processing, and you can't get it with high resolution from radar. Knowing velocity as well as position is important for assessing immediate safety threats.
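The Doppler relation behind that is one line: a target closing at speed v shifts the return by f_d = 2v/λ, so velocity falls straight out of the measured beat frequency. A sketch at 1550 nm (the wavelength is an assumption; FMCW lidars commonly operate near it):

```python
wavelength = 1550e-9  # m, a common FMCW lidar wavelength (assumed here)

def radial_velocity(doppler_shift_hz):
    """Radial velocity from a measured Doppler shift: v = f_d * lambda / 2."""
    return doppler_shift_hz * wavelength / 2

# A car closing at 30 m/s shifts the return by ~38.7 MHz:
print(2 * 30.0 / wavelength)    # ~3.87e7 Hz
print(radial_velocity(3.87e7))  # ~30 m/s recovered
```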

There's a good write-up by the co-founder of SiLC, a silicon photonics LiDAR startup, here:

https://www.photonics.com/Articles/Integrated_Photonics_Look...

m463 · 6 years ago
I agree with Musk and see lidar where ray tracing was decades ago: an expensive, impractical "holy grail".

A set of lidar sensors right now costs as much as a car.

Maybe at some point in the future one of these lidar startups will come out with an inexpensive (maybe solid-state) version to augment the current sensors. Or maybe by that time vision will have gotten much better.

georgeburdell · 6 years ago
The cost of lidar is going to plummet due to exactly the end of your post. Several startups (SiLC, Aeva, etc.) are using silicon photonic integrated circuits. Several more early-stage startups have either MEMS or phased-array prototypes for completely solid-state chips.
gauku · 6 years ago
Blickfeld is working on solid-state LIDAR. From what I've heard from friends working there, their sensor is/will be available for under $1000, which is a huge cost reduction from the current price(LIDAR) == price(car) solutions.
kjksf · 6 years ago
The thing is, the self-driving wars will likely be over before economies of scale in lidar production happen.

If a non-lidar system doesn't work, then the cost of lidar, even at $10k, is irrelevant.

If you can make a non-lidar system work better than humans (i.e. with quality acceptable to regulators) before the cost of lidars drops significantly, then lidars lose on economics.

And the cost of lidar won't drop significantly quickly. The next step-change in price would probably require mass production, i.e. hundreds of thousands of units per year.

Even if lidar robotaxis happen before non-lidar ones, initially they'll be made in tens of thousands of units per year, leaving a couple of years for non-lidar tech to catch up.

gundmc · 6 years ago
Waymo claimed to be able to produce lidar sensors for 10% of market price back in January of 2017 (estimated $7500/unit). If true, it'll be critical to their scaling and success.

https://techcrunch.com/2019/03/06/waymo-to-start-selling-sta...

m463 · 6 years ago
The Waymo cars I've seen seem to have many units on each car.
simcop2387 · 6 years ago
I wonder about the noise aspect of this when you've got 20 cars nearby also using lidar. Is there a point where these kinds of active sensors begin interfering with each other? I know it isn't lidar, but Xbox Kinects used to interfere with each other if you had multiple in one room.
ThatGeoGuy · 6 years ago
That really depends on the modality of the LiDAR. For the record, the Kinect is technically LiDAR, since it is using "Light Detection And Ranging."

As for why the Kinect interferes with other units, it's because of the imaging modality (structured light). The sensors interfere with one another because they're largely dependent on detecting a specific pattern of projected dots. If you detect too many dots or if the image gets saturated, you start to have a problem.

In the case of traditional scanning LiDAR (e.g. terrestrial LiDAR in the sense of a Leica, Faro, or Velodyne unit), this isn't necessarily the case. Sure, if the two lasers point exactly at each other for a given point over their sweep, then the lasers will saturate that measurement and it will not be useful. In time-of-flight-based, mirrorless systems, this matters less than one might think. I can see this being consistently a problem when scanning with Velodyne tech, since they tend to only rotate about one axis, but for other types of LiDAR I don't think it would be as big of a deal. Granted, then you have to worry about scanning speed and how that affects the final results.

Overall, I don't think that unit interference is going to be a significant factor in adoption. LiDAR is a broad technology and it's not easy to make assumptions about the entire industry based on a couple implementations or modalities.

flor1s · 6 years ago
As an aside, the original Kinect and the Kinect for Xbox One use different technologies for 3D detection. The original Kinect projects an infrared pattern and then detects the deformation of the pattern to determine distance/shape. The Kinect for Xbox One uses more traditional time of flight.
georgeburdell · 6 years ago
Most next-gen lidar systems will have coherent mixing circuits to combat this exact issue. It's typically called "FMCW": frequency-modulated continuous wave.