jchw · a year ago
The original press release calls it "anti-intelligent" which makes some sense. This blog headline calls it "anti-AI" which makes it sound like it is meant to mess with machine learning training algorithms, but actually it's not really "anti-AI", just "not AI". (Whether ML-based image processing should really qualify as "artificial intelligence" in the first place, just because it uses machine learning algorithms, is an entirely different story, but I guess this is just our lives now.)
PaulHoule · a year ago
Getting

https://www.dxo.com/

which uses ML for denoising high-ISO shots was like getting a sun in my pocket for indoor sports photography. You will have to pry ML denoising out of my cold dead hands.

jchw · a year ago
Well, I'm not a photographer or anything, and I certainly don't have anything in particular against ML algorithms for image processing, it's just another tool after all. Depending on how it's applied, it might be hard to tell it apart from any other image processing algorithm, and in other cases, it blurs the lines between generative AI and image processing.

On the Internet, Waifu2x has proven popular for quite some time. I don't know how it works architecturally, but they've trained an ML model on anime-style illustrations and photos, specifically to upscale and denoise images (particularly, to reverse JPEG artifacts) using a corpus of images before and after downscaling/adding JPEG artifacts. It is incredible when just using it to denoise JPEG and quite impressive for scaling up to 2x. It definitely works better for anime-style illustrations, which suffer from JPEG artifacts more than photographs do, anyhow.
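The before/after training setup described above can be sketched in a few lines. This is a minimal, hypothetical pair generator (plain numpy; a 2x box downsample stands in for the downscale/JPEG-artifact degradation mentioned in the comment — it is not Waifu2x's actual code or architecture):

```python
import numpy as np

def degrade(img: np.ndarray) -> np.ndarray:
    """Make the 'input' half of a training pair.

    A 2x box downsample stands in for the real degradation step
    (downscaling and/or JPEG compression of the clean image).
    """
    h, w = img.shape[:2]
    h2, w2 = h - h % 2, w - w % 2   # crop to even dimensions
    img = img[:h2, :w2]
    # Average each 2x2 block -> half-resolution image.
    return img.reshape(h2 // 2, 2, w2 // 2, 2, -1).mean(axis=(1, 3))

def make_pair(clean: np.ndarray):
    """Return (degraded_input, clean_target) for supervised training.

    An upscaler/denoiser is then trained to map the first element
    back to the second.
    """
    return degrade(clean), clean
```

The key idea is that the model never sees hand-labeled data: the "labels" are just the original clean images, and the inputs are synthetically damaged copies of them.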

I also like Google Camera's "Night Sight" feature. It's maybe not astounding anymore, but it definitely was a vast improvement for capturing photos at night using a little smartphone camera when it first came around.

That said, there are pitfalls to these more advanced algorithms. They can have different failure modes than people are used to. People routinely fail to realize how dangerous it can be when a machine is "lying" to you in a way whose risks you can't necessarily comprehend; even before ML we had plenty of good examples, like Xerox machines performing compression that accidentally altered the numbers on the page[1]. With ML algorithms that pull increasingly more signal out of increasingly less information, the potential for bad extrapolations and downright hallucination must surely increase. There have been some funny examples of this with the iPhone and Google Camera features, but it really does have some interesting implications. Can we always trust these photos to be legally admissible, for example, even when they're not altered intentionally? I don't know the answer. It's probably not a huge deal, but at some point this will surely become an issue, and I bet it will be very interesting (and hopefully not too tragic).

[1]: https://www.dkriesel.com/en/blog/2013/0802_xerox-workcentres...?

qingcharles · a year ago
The nice thing about post-production apps like this is, though, that you get to choose when and how to apply them. I couldn't live without ML photography tools for fixing lighting issues when I can't do it in camera.

Sadly, most cameraphones now apply ML before handing the data out to the APIs. Getting the raw sensor data from some phones is literally impossible for third-party apps now, as I understand it (looking at you, Samsung).

xkcd-sucks · a year ago
A couple of years ago, before the current "AI" hype cycle, someone sent me an iPhone photo of their Boston terrier on a black-and-white patterned Ikea rug. Although it was "just an unprocessed photo", the rug pattern had been propagated over the dog's body, replacing some of its original markings.
dagmx · a year ago
Are you sure it wasn’t from a panorama? That’s the only camera mode that does any kind of stitching that could cause that. None of the other camera modes are capable of doing extrapolation like that.
wyager · a year ago
Great. This looks like it may address some problems I've written about in the iPhone photo pipeline:

http://yager.io/comp/comp.html

https://petapixel.com/2023/02/04/the-limits-of-computational...

lxgr · a year ago
On a technical level, this seems a bit silly (everything coming out of a Bayer filter is post-processed in some way, in the end), but I can definitely see people getting tired of "opinionated" photo processing, AI or otherwise.
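To make the point concrete: even the most basic raw conversion has to reconstruct color from the Bayer mosaic before you have an image at all. Here is a toy sketch (my own illustration, not any camera's actual pipeline) of the crudest possible "superpixel" demosaic, assuming an RGGB layout — each 2x2 cell of the sensor collapses into one RGB pixel:

```python
import numpy as np

def demosaic_rggb(raw: np.ndarray) -> np.ndarray:
    """Collapse an RGGB Bayer mosaic into a half-resolution RGB image.

    raw: 2D array with even dimensions; even rows alternate R,G and
    odd rows alternate G,B. Real converters instead interpolate full
    resolution per channel, then apply white balance, tone curves, etc.
    """
    r = raw[0::2, 0::2]                          # red sites
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2  # average the two green sites
    b = raw[1::2, 1::2]                          # blue sites
    return np.stack([r, g, b], axis=-1)
```

Even this trivial version already makes choices (how to combine the two greens, what resolution to output), which is the sense in which no photo off a Bayer sensor is "unprocessed".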

Tangentially related: I learned today that an old 1-inch-sensor point-and-shoot camera I'd bought over 7 years ago now sells used online for twice its MSRP, apparently because it looks cool/retro and/or because the photos and videos coming out of it don't look "pre-processed" in any way.

selykg · a year ago
This will absolutely get me to go back to Halide once iOS 18 hits and I can change the default camera that gets launched from the Home Screen button.

In 2025 I'd love to see "AI" disappear from usage. I know I'm not likely to get that, but damn if I am not tired of hearing about it. I've never wanted a dumb phone more than I have in 2024, or to get rid of my computers.

erksa · a year ago
The processing these phones now do on standard images is crazy. I'm always excited for the camera advancements in new phones, but I always have to go hunting for an app that does exactly this.
datadrivenangel · a year ago
There is no such thing as a 'pure' photo, and pretending otherwise is an editorial decision. Our imaging technology collects radiation and we then process that signal and re-emit similar radiation. Every step along that process has decisions made about how that step works.

And that's not even getting into the fun things we can do by applying our own radiation to reflect off the object! We can do a lot with good lighting.

It is nice to have the ability to take more control over the automated steps of the process, though, especially because modern Apple phones have impressive cameras.

criddell · a year ago
That's true, but it feels like a big jump when cameras start replacing the moon with a higher-res version of the moon, or when people are erased from photos, or a facial expression is cut and pasted from another image. You aren't just tuning a photograph, you are fabricating something that never existed.