Also check out Lytro, the company created based on this technology. They've got some pretty impressive demos on their website: https://pictures.lytro.com/
I had a whole lot of fun with it at first, but it's fallen into disuse. The live refocusing is a fun trick, but in practice the effect is roughly that of capturing ~10 simultaneous versions of the shot focused at different distances, and letting you choose among them after the fact in their special app.
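For concreteness, here's a toy numpy sketch of that kind of after-the-fact refocusing, assuming the raw capture has already been decoded into a small grid of sub-aperture views. The 4D array shape, the integer shifts, and the function name are my own illustration of the general shift-and-sum idea, not Lytro's actual pipeline:

```python
import numpy as np

def refocus(lightfield, slope):
    """Shift each sub-aperture view by an amount proportional to its position
    in the (assumed) aperture grid, then average. Different integer `slope`
    values bring different depth planes into focus."""
    U, V, H, W = lightfield.shape
    uc, vc = U // 2, V // 2
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # np.roll keeps the sketch simple; a real pipeline would interpolate.
            shifted = np.roll(lightfield[u, v],
                              shift=(slope * (u - uc), slope * (v - vc)),
                              axis=(0, 1))
            out += shifted
    return out / (U * V)

# Example: a made-up 5x5 grid of 200x300 views, "refocused" at three planes.
lf = np.random.rand(5, 5, 200, 300)
focal_stack = [refocus(lf, s) for s in (-1, 0, 1)]
```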
The novelty of this trick wears off quickly. In the vast majority of circumstances I already know what I want to focus on when I take the picture, and if I want to keep my options open I can bracket. In those situations, spending 90% of the sensor's pixels on versions of the image focused at distances I don't care about just means I'm getting a 1-megapixel image when I could have had 10.
Where I had high hopes was macrophotography, where focus is a lot harder to get right. Unfortunately, it's really difficult to convince the camera to focus close enough to capture the subject. I'm not sure whether that's a physical limitation of the hardware or a firmware issue, but either way it was disappointing.
So all that negative nellying aside, one spot where I think this technology could be really neat is computer vision for robotics. I'm guessing that using a plenoptic camera instead of a stereoscopic pair of cameras would capture richer 3D information, or at least 3D information with different characteristics that might be more useful in some circumstances.
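As a rough illustration of why, here's a numpy sketch of one way a light field could yield a depth map, essentially treating the grid of sub-aperture views as a dense multi-baseline stereo rig. The array layout, the candidate slopes, and the variance cost are all assumptions for illustration, not any particular product's algorithm:

```python
import numpy as np

def depth_from_lightfield(lightfield, slopes):
    """For each candidate slope (i.e. depth plane), shift the sub-aperture
    views into alignment and measure how well they agree at every pixel;
    the slope with the lowest variance wins. Returns an index map into
    `slopes`, which stands in for a coarse depth map."""
    U, V, H, W = lightfield.shape
    uc, vc = U // 2, V // 2
    best_cost = np.full((H, W), np.inf)
    best_idx = np.zeros((H, W), dtype=np.int64)
    for i, s in enumerate(slopes):
        views = np.stack([
            np.roll(lightfield[u, v], (s * (u - uc), s * (v - vc)), axis=(0, 1))
            for u in range(U) for v in range(V)
        ])
        cost = views.var(axis=0)   # low variance = views agree = in focus here
        mask = cost < best_cost
        best_cost[mask] = cost[mask]
        best_idx[mask] = i
    return best_idx

# Example with made-up data: a 5x5 grid of 120x160 views, five depth planes.
lf = np.random.rand(5, 5, 120, 160)
depth_idx = depth_from_lightfield(lf, slopes=[-2, -1, 0, 1, 2])
```

A stereo pair gives you one baseline along one axis; a plenoptic capture gives you many small baselines in both directions, which is the "different characteristics" part: less range, but potentially denser and more robust correspondence at close distances.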