Readit News
syntaxing · 2 years ago
I swear I read something similar along these lines (pun intended) a couple years back, but it was not the Radon transform; I forget what exactly it was. The hardest part of using this in production is that there are a lot of hand-tuned values, particularly in the edge-detection portion, which makes it difficult to scale. It's usually cheaper and easier to calibrate the camera at mass scale in the factory using "old school" methods.
PaulHoule · 2 years ago
I have been wanting to do this for my

https://www.kandaovr.com/qoocam-ego

after reading Lenny Lipton's books about stereo cinematography. I've been debugging my stereograms, and one thing I know is that the lenses on that thing have a little bit of pincushion distortion, which means stereo pairs that are supposed to be perfectly aligned vertically aren't quite.

I know DxO makes distortion-correction filters for lens/camera pairs and I was sure I could make one by taking pictures of a grid, but this gives a definite path to doing it.

michaelt · 2 years ago
The ideas listed in the document are about correcting distortion when the image has already been taken and you can't control the scene.

As you've got the camera in hand, you've got an even simpler option available: you can print a special pattern called a 'ChArUco board' [1], take pictures of it from a few different angles, then calculate the camera "intrinsics" (field of view, lens distortion parameters) and "extrinsics" (relative positions of your two cameras) from those images.

[1] https://docs.opencv.org/3.4/da/d13/tutorial_aruco_calibratio...
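For anyone wondering what those "intrinsics" look like numerically: calibration fits the parameters of a pinhole-plus-distortion model. Here's a minimal sketch of the Brown-Conrady radial model that calibration routines typically fit (all parameter values below are made up for illustration):

```python
def project(point_3d, fx, fy, cx, cy, k1, k2):
    """Project a 3D camera-frame point through a pinhole model
    with Brown-Conrady radial distortion (k1, k2 terms only)."""
    X, Y, Z = point_3d
    # Normalized image coordinates (ideal pinhole).
    x, y = X / Z, Y / Z
    # Radial distortion scales points by a function of their
    # distance from the optical center.
    r2 = x * x + y * y
    scale = 1 + k1 * r2 + k2 * r2 * r2
    xd, yd = x * scale, y * scale
    # Intrinsics map normalized coordinates to pixel coordinates.
    return fx * xd + cx, fy * yd + cy

# A point on the optical axis is unaffected by distortion...
u0, v0 = project((0.0, 0.0, 2.0), fx=800, fy=800, cx=640, cy=360, k1=-0.2, k2=0.05)
# ...while an off-axis point gets pulled toward the center (barrel, k1 < 0).
u1, v1 = project((1.0, 0.0, 2.0), fx=800, fy=800, cx=640, cy=360, k1=-0.2, k2=0.05)
u1_ideal, _ = project((1.0, 0.0, 2.0), fx=800, fy=800, cx=640, cy=360, k1=0.0, k2=0.0)
```

Calibration from ChArUco images is just a least-squares fit of these parameters (plus extrinsics) so that the projected board corners land on the detected ones.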

qingcharles · 2 years ago
You can also use the patterns to generate DNG/RAW lens profiles which allow automatic lens correction in popular apps like Lightroom etc:

Adobe Lens Profile Creator

https://helpx.adobe.com/camera-raw/digital-negative.html

danilor · 2 years ago
I have difficulty understanding what the transformed image is equivalent to. This makes it feel like the picture was taken at a different distance and focal length, but[1] it would look different if that were the case because the perspective would be different. Does this have any "physical" interpretation that would make it easier for me to understand? Like, cropping an image is equivalent to changing the focal length; what would this be equivalent to? A type of rectilinear lens?

[1] With the exception maybe for a single plane in focus?

TacticalCoder · 2 years ago
> I have difficulty understanding what the transformed image is equivalent to.

As a non-photographer with zero knowledge about photography, the fixed image, with straight lines, feels much more natural to me.

I'd say it reminds me of 3D games, like, say, simulators?

Are 3D games not reproducing lens deformation more or less correctly from a "physics" point of view? I happen to be on vacation atm in an apartment on the beach on the ninth floor with a clear view: what I see is much closer to the "corrected" version (not my word but TFA's author's) than to the other one.

wlesieutre · 2 years ago
The author is saying "wide angle lens" but means what people would conventionally call a "fish eye lens." Normally when someone says wide angle, it's still assumed to be a rectilinear image projection, same as what you get with 3D game rendering. And if you're talking about a curvy fisheye projection, you specify that.

The iPhone's ultrawide lens is a good example of a rectilinear projection with lots of example photos available.

It can produce weird feeling images with stuff at the edges looking stretched out, and parallel lines being at significant angles to each other, but it does not make straight lines curved like the effect that the author is removing.

Example ultrawide photo with straight lines from reddit https://www.reddit.com/r/iPhoneography/comments/ena7s5/bosto...
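The difference between the two projections is easy to state mathematically: for focal length f and a ray at angle θ off the optical axis, a rectilinear lens maps it to image radius r = f·tan(θ), while a common (equidistant) fisheye maps it to r = f·θ. A quick numerical sketch:

```python
import math

def rectilinear_radius(f, theta):
    # Rectilinear: straight lines in the scene stay straight,
    # but magnification grows rapidly toward the edge of the frame.
    return f * math.tan(theta)

def equidistant_fisheye_radius(f, theta):
    # Equidistant fisheye: radius grows linearly with angle,
    # so off-center straight lines bow into curves.
    return f * theta

f = 1.0
for deg in (0, 20, 40, 60):
    theta = math.radians(deg)
    print(deg, rectilinear_radius(f, theta), equidistant_fisheye_radius(f, theta))
```

Near the axis the two agree (tan θ ≈ θ), which is why the difference only becomes obvious at wide fields of view; at 60° off-axis the rectilinear radius is already about 65% larger than the fisheye one, which is exactly the stretched-corners look of ultrawide rectilinear photos.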

antman · 2 years ago
That is how the brain wants to see. When I got a new pair of glasses everything looked very curvy. After a week, every line was straight again because the brain had learned the new transformation.
markerz · 2 years ago
As an artist, the transformed image is what I would draw using 1-point perspective. Basically making everything straight lines. It intuitively feels a lot more natural and fits into our mental model of how the human world is shaped (i.e. everything is a rectangle).

https://m.youtube.com/watch?v=qOojGBEsWQw

corysama · 2 years ago
I’ve done some work on implementing this as a coder, not a mathematician. So, the following description is just how the process looks while you are implementing it :P

Take the original curved image and put it on a super stretchy rubber sheet. Pull all four corners out diagonally until the curves look straight. You have to pull really hard and the corners will be stretched out into thin spikes.

But, no one wants to see an image that’s 80% long, thin spikes with lots of empty space between them. So, go to the center and crop down to the biggest rectangle you can that doesn’t have empty space around the edges.
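The "stretchy rubber sheet" is, concretely, inverting the distortion function: the radial model usually has no closed-form inverse, so a fixed-point iteration is a common trick. A sketch with a single made-up k1 coefficient:

```python
def distort(x, y, k1):
    """Forward radial model: ideal -> distorted normalized coords."""
    r2 = x * x + y * y
    return x * (1 + k1 * r2), y * (1 + k1 * r2)

def undistort(xd, yd, k1, iterations=20):
    """Invert the radial model by fixed-point iteration:
    guess the ideal point, evaluate the distortion there, refine."""
    x, y = xd, yd  # initial guess: the distorted point itself
    for _ in range(iterations):
        r2 = x * x + y * y
        s = 1 + k1 * r2
        x, y = xd / s, yd / s
    return x, y

# Round trip: undistorting and then re-distorting recovers the input.
xd, yd = distort(0.4, 0.3, k1=-0.15)
xu, yu = undistort(xd, yd, k1=-0.15)
```

That's the per-point core; the final crop is then just choosing the largest inscribed rectangle with no empty border, which is the choice OpenCV exposes through the alpha parameter of getOptimalNewCameraMatrix.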

hammock · 2 years ago
I would draw an analogy to map projections.

Or, take an image of a soccer ball (the kind with pentagons and hexagons), you can see all of one hemisphere. But it’s a “fisheye” view. If you take the half soccer ball and cut up the shapes and rearrange them on a flat surface, you are adjusting the projection

oasisbob · 2 years ago
It sounds like you know this already, but as any portrait photographer would note, changing the focal length is not equivalent to cropping. It's roughly equivalent, at best.

ie, Telephoto lenses bring a different perspective which includes distance compression. It's very apparent when photographing human faces.

jaffa2 · 2 years ago
If you take a shot with a 35mm and the same shot with an 85mm, then crop the 35mm to the same FOV as the 85mm, the images will look _identical_ (notwithstanding lens characteristics etc.). The compression you're talking about is due to the _distance_ between the subject and the lens changing. You will get the same compression effect if you shoot with a 50mm from 30 feet away…
radiowave · 2 years ago
Changing the focal length doesn't inherently change the perspective, and (resolution and lens aberrations aside) is exactly equivalent to cropping.

What changing the focal length does do is (e.g.) make you stand further back, and that changes the perspective, causing distance compression, etc.
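This equivalence drops straight out of the pinhole model: from a fixed position, focal length f maps a point (X, Y, Z) to (fX/Z, fY/Z), so changing f just scales every image coordinate uniformly, which is exactly what cropping and enlarging does. A sketch:

```python
def project(point, f):
    # Pinhole projection from a fixed camera position.
    X, Y, Z = point
    return (f * X / Z, f * Y / Z)

# A made-up scene: three points at different depths.
scene = [(1.0, 0.5, 3.0), (-2.0, 1.0, 10.0), (0.3, -0.7, 5.0)]

wide = [project(p, f=35.0) for p in scene]
tele = [project(p, f=85.0) for p in scene]

# The 85mm image is the 35mm image uniformly scaled by 85/35,
# i.e. cropping the 35mm frame and enlarging it reproduces it.
scale = 85.0 / 35.0
cropped = [(x * scale, y * scale) for x, y in wide]
```

The "compression" appears only when you also step back to keep the subject the same size: then the Z of the subject and the Z of the background change by different ratios, so their relative sizes on the image shift.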

srean · 2 years ago
His dissertation looks very interesting

https://hh409.user.srcf.net/index.html#PhDThesis

emtel · 2 years ago
This is cool, but couldn't you generate the correction transformation simply from knowing the lens geometry? I assume this is what my phone is doing when I take wide-angle pictures (which don't have any visible distortion)
harywilke · 2 years ago
Depends on the reason why you are doing this transform. If it's just a visual correction filter, then that will work well enough. If you are trying to track camera movement on a series of images and match a 3D model to the footage, then it won't. You want to analyze the actual images the lens is producing and derive the distortion from that. Every lens is different. Different setups with the same lens may produce different distortions. A warm lens behaves differently from a cool lens. Change the focus, and the distortion may change (lens breathing). Some lenses exhibit different distortions at different zoom levels.
tadbit · 2 years ago
Yes. Most professional photo editing and management software has built-in functionality or an add-on for lens distortion correction. However, it requires either the original photo, or at least a non-cropped version with the EXIF data, or some knowledge of what body, lens, and focal length were used.

This utility doesn't require the original non-cropped image or any other information about the picture that was taken. You could scrape a bunch of pictures from Instagram or Facebook and batch process away.

srean · 2 years ago
A question for those who know optics: if the angle of incidence is past the critical angle for red, does all of the visible spectrum get reflected without any chromatic effects?

Are there cameras that have sensors laid out on a curve matching the expected surface on which the image is in focus?

I wonder why there are no cameras (apart from astronomical telescopes) that use reflection only for imaging. Would such a camera be too bulky to be practical?

zokier · 2 years ago
wdfx · 2 years ago
I have an old Nikon 500/8 ; gotta be honest, it's not very good.
srean · 2 years ago
Thanks for the link. Learned something new.
PaulHoule · 2 years ago
In the early 2000s I was thinking about a machine vision camera that would use a mirror and a small lens to image a whole room, as seen from a corner. I figured it would take about 50 megapixels to get the performance I wanted and at that time 5 megapixels seemed like a lot.

Today now that is no problem. A few years ago I saw this

https://owllabs.com/products/meeting-owl-3

at work, the fisheye lens on it is more compact than what I had in mind and it has enough pixels to pick out individuals speaking in a conference room.

RobotToaster · 2 years ago
> Are there cameras that have sensors laid out on a curve matching the expected surface on which the image is in focus ?

Not a sensor, but some disposable film cameras have a curved film holder to compensate for low quality optics. Some panoramic film cameras do the same.

Teever · 2 years ago
I had similar thoughts recently because I am working on a catadioptric system for a project at work.

https://www.reddit.com/r/Optics/comments/oimvt0/curved_camer...

https://www.digitalcameraworld.com/news/sonys-new-curved-ima...

It appears that curved sensors may exist somewhere in a lab, and have been slightly commercialized, but I didn't see any 'buy now' buttons when I looked.

I didn't dive too deep into it because it's not like I'm going to be changing the sensor in my design at this stage of the game, but it was an idea a friend suggested when I talked about the limitations of the mirror-based system we're using.

https://techxplore.com/news/2024-07-insect-autonomous-strate...

This link popped up on Hacker News a few days ago and I noticed that they were using a mirror in their optical system as well. I haven't had a chance to read beyond that promotional article, so I don't know how they're overcoming the depth-of-field limitations of this kind of optical setup.

srean · 2 years ago
Very interesting. All the best for your project.
lionkor · 2 years ago
This site could really use a mobile version[0]

[0]: https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_media_q...

dwighttk · 2 years ago
Reader mode works alright
doe_eyes · 2 years ago
It should be noted that this article talks about a pretty niche use case without really spelling it out.

Camera optics are generally designed not to exhibit this kind of distortion. As other commenters note, wide-angle lenses are ground to provide rectilinear projection where horizontal and vertical lines are straight. Further, if a particular lens does exhibit distortion, the usual solution is to measure the effect and construct a reverse mapping that can be applied in software.
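The "reverse mapping applied in software" is typically a precomputed lookup table: for each output pixel you store the sub-pixel source coordinate in the distorted image and resample through it, which is essentially what OpenCV's initUndistortRectifyMap/remap pair does. A dependency-free, nearest-neighbor sketch with a made-up k1 coefficient:

```python
def undistort_image(src, k1):
    """Resample a distorted image (list of rows) into an undistorted
    one under a simple radial model. Because we pull each output pixel
    from its source location, we evaluate the *forward* distortion
    model and never need to invert it. Out-of-frame samples become 0."""
    h, w = len(src), len(src[0])
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    out = [[0] * w for _ in range(h)]
    for v in range(h):
        for u in range(w):
            # Normalized coordinates relative to the image center.
            x, y = (u - cx) / cx, (v - cy) / cy
            s = 1 + k1 * (x * x + y * y)
            # Source pixel in the distorted image (nearest neighbor).
            xs, ys = round(cx + x * s * cx), round(cy + y * s * cy)
            if 0 <= xs < w and 0 <= ys < h:
                out[v][u] = src[ys][xs]
    return out
```

With k1 = 0 the table is the identity and the image passes through unchanged; a production version would use bilinear interpolation and a measured coefficient set rather than a single hand-picked k1.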

There are relatively few situations where you have a distorted image taken with an unknown lens, but where you have a regular grid of horizontal and vertical lines for the algorithm to rely on.

Daub · 2 years ago
> There are relatively few situations where you have a distorted image taken with an unknown lens, but where you have a regular grid of horizontal and vertical lines for the algorithm to rely on.

In visual effects, distortion correction is required before effective camera tracking can take place. It is also required for a matte to fit the footage. In such situations, it is not unknown to be given 'mystery meat' footage which requires distortion correction. You would be surprised how many directors and DOPs take VFX voodoo for granted and would rather save five minutes on set at the cost of two days in post-production.