> produce full-color images that are equal in quality to those produced by conventional cameras
I was really skeptical of this since the article conveniently doesn't include any photos taken by the nano-camera, but there are examples [1] in the original paper that are pretty impressive.
[1] https://www.nature.com/articles/s41467-021-26443-0/figures/2
Those images are certainly impressive, but I don't agree with the statement "equal in quality to those produced by conventional cameras": they're quite obviously lacking in sharpness and color.
I wonder how they took pictures with four different cameras from the exact same position at the exact same point in time. Maybe the chameleon was staying very still, and maybe the flowers were indoors and that's why they didn't move in the breeze, and they used a special rock-solid mount that kept all four cameras perfectly aligned with microscopic precision. Or maybe these aren't genuine demonstrations, just mock-ups, and they didn't even really have a chameleon.
> Ultrathin meta-optics utilize subwavelength nano-antennas to modulate incident light with greater design freedom and space-bandwidth product over conventional diffractive optical elements (DOEs).
Is this basically a visible-wavelength beamsteering phased array?
How does this work? If it's just reconstructing the images with a neural network, a la Samsung pasting in a picture of the moon when it detected a white disc in the image, it's not very impressive.
I had the same thought, but it sounds like this operates at a much lower level than that kind of thing:
> Then, a physics-based neural network was used to process the images captured by the meta-optics camera. Because the neural network was trained on metasurface physics, it can remove aberrations produced by the camera.
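If that's right, I'd guess the shape of the pipeline is roughly: deconvolve with the known (heavily aberrated) metasurface PSF, then let a small network clean up the residual artifacts. A minimal sketch of that idea, where the PSF, parameters, and `refinement_net` are placeholders rather than anything from the paper:

```python
import numpy as np

def wiener_deconvolve(measured, psf, noise_to_signal=1e-2):
    """Classical Wiener deconvolution: invert a known PSF in the Fourier
    domain, regularized so noise isn't blown up at frequencies the optics
    barely pass. `measured` and `psf` are same-sized 2D float arrays."""
    H = np.fft.fft2(np.fft.ifftshift(psf))   # optical transfer function
    Y = np.fft.fft2(measured)                # blurred, noisy capture
    W = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
    return np.real(np.fft.ifft2(W * Y))

# Hypothetical usage, per color channel:
#   rough = wiener_deconvolve(raw_capture, simulated_metasurface_psf)
#   final = refinement_net(rough)   # small CNN trained against the metasurface model
```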
Years ago I saw an interview with a futurist who mentioned the following:
"One day, your kids will go to the toy store and get a sheet of stickers. Each sticker is actually a camera with an IPv6 address. That means they can put a sticker somewhere, go and point a browser at that address and see a live camera feed.
I should point out: all of the technology to do this already exists, it just hasn't gotten cheap enough to mass market. When economies of scale do kick in, society is going to have to deal with a dramatic change in what they think 'physical privacy' means."
Chalk another one up for Vernor Vinge. This tech seems like it could directly enable the “ubiquitous surveillance” from _A Deepness in the Sky_. Definitely something to watch closely.
They're 3 or 4 mm in diameter, according to a scene in chapter 6: big enough to have resolution similar to that of a human eye, according to Paul, but able to look in any direction without physically rotating.
In chapter 13 the enemy describes them as using Fourier optics, though that seemed to be their speculation - not sure whether it was right.
I've been interested in smart dust for a while. Recently the news seems to have dried up; that may just be other things taking up all the attention (and investment money), but I suspect many R&D teams went under government NDAs because the tech is now good enough to be interesting.
The other side to the localizers is the communication / mesh networking, and the extremely effective security partitioning. Even Anne couldn't crack them! It's certainly a lot to package in such a small form factor.
Everyone here is thinking about privacy and surveillance, and here I am wondering if this is what lets us accelerate nano cameras to relativistic speeds with lasers to image other solar systems up close.
Honestly, even if they end up the size of a jellybean, it would be a massive boon for space exploration. Just imagine sending them out for reconnaissance around the solar system to check out potential bodies for bigger probes to explore later down the track. Or to catch interesting objects that suddenly appear, like ʻOumuamua, with minimal delay.
Given the tiny dimensions and wide field, adding regular lenses over an array could create an extremely wide field of view, like 160x160 degrees, for everyday phone cameras. Or very small 360x180 degree stand-alone cameras. AR glasses with a few cameras could cover 360x160 degrees and be extremely situationally aware!
Another application would be small light field cameras. I don't know enough to judge if this is directly applicable, or adaptable to that. But it would be wonderful to finally have small cheap light field cameras. Both for post-focus adjustment and (better than stereo) 3D image sensing and scene reconstruction.
Are they not? Every modern camera does the same thing. Upscaling, denoising, deblurring, adjusting colors, bumping and dropping shadows and highlights: pretty much no aspect of the picture is the way the sensor sees it once the rest of the pipeline is done. Phone cameras do this to a more extreme degree than, say, pro cameras, but they all do it.
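For what it's worth, a toy version of the kind of pipeline I mean (steps and parameters are purely illustrative, not any vendor's actual ISP):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def toy_pipeline(raw):
    """Illustrative post-capture processing: denoise, sharpen, small exposure
    bump, display gamma. `raw` is a float array scaled to [0, 1]; real ISPs
    do far more (demosaicing, white balance, multi-frame merging, ...)."""
    img = gaussian_filter(raw, sigma=1.0)                                      # mild denoise
    img = np.clip(img + 0.7 * (img - gaussian_filter(img, sigma=2.0)), 0, 1)   # unsharp mask
    img = np.clip(img * 1.1, 0, 1)                                             # exposure bump
    return img ** (1 / 2.2)                                                    # gamma for display
```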
To point out the obvious, film cameras don't, nor do many digital cameras. Unless you mean modern in the sense of "cameras you can buy from Best Buy right now", of course. But that isn't very interesting: Best Buy has terrible taste in cameras.
The paper says that reconstructing an actual image from the raw data produced by the sensor takes ~58ms of computation, so doing it for 10,000 sensors would naively take around ten minutes, though I'm sure there's room for optimization and parallelization.
The sensors produce 720x720px images, so a 100x100 array of them would produce 72,000x72,000px images, or ~5 gigapixels. That's a lot of pixels for a smartphone to push around and process and store.
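Back-of-the-envelope math behind those numbers, assuming the ~58 ms figure scales linearly with no parallelization:

```python
# Back-of-the-envelope figures for a hypothetical 100x100 array of these sensors.
recon_ms = 58                                   # per-image reconstruction time from the paper
sensors = 100 * 100
print(recon_ms * sensors / 1000 / 60)           # ~9.7 minutes if done serially

side_px = 720 * 100                             # 100 sensors of 720 px per side
print(side_px ** 2 / 1e9)                       # ~5.2 gigapixels for the stitched image
```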
Sensor size is super important for resulting quality; that's why pros still lug around huge full-frame (even if mirrorless) cameras rather than running around with phones. There are other reasons too, e.g. speed for sports, but let's keep it simple (speed is also affected by the amount of data processed, which goes back to resolution).
Plus, higher-resolution sensors have the nasty habit of producing very large files, which slow down processing on a given device compared to smaller, crisper photos, and they take up much more space, even more so for video. That's probably why Apple stuck with a 12 MP main camera for so long, even though 200 MP sensors were available.
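To put rough numbers on the file-size point (the 12-bit raw depth is my assumption, and real pipelines compress):

```python
# Uncompressed raw size scales linearly with pixel count, so a 200 MP frame
# carries ~17x the data of a 12 MP frame before any compression.
def raw_megabytes(megapixels, bits_per_pixel=12):
    return megapixels * 1e6 * bits_per_pixel / 8 / 1e6

for mp in (12, 48, 200):
    print(mp, "MP ->", round(raw_megabytes(mp)), "MB per 12-bit raw frame")
# 12 MP -> 18 MB, 48 MP -> 72 MB, 200 MP -> 300 MB
```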
It's been a while since I've heard anyone talk about the Starshot project[0]. Maybe this would help revitalize it.
Also even without aiming for Proxima Centauri, it would be great to have more cameras in our own planetary system.
--
[0] - https://en.wikipedia.org/wiki/Breakthrough_Starshot
https://www.centauri-dreams.org/2024/01/19/data-return-from-...
They're not comparable, in the intuitive sense, to conventional cameras.
Edit: by default.
It was published in 2021. Also discussed here: https://news.ycombinator.com/item?id=29399828