hatthew commented on Pixel 10 Phones   blog.google/products/pixe... · Posted by u/gotmedium
cameronh90 · 4 days ago
Digital has never been light-to-pixel.

At a minimum, you have demosaicing, dark frame subtraction, and some form of tone mapping just to produce anything you'd recognise as a photo. Then producing a half-way acceptable image involves denoising, sharpening, dewarping, and chromatic aberration correction - and that just gets us up to what was normal at the turn of the millennium. Nowadays, without automatic bracketing and stacking, digital image stabilisation, rolling shutter reduction, and much more, you're going to have pretty disappointing phone pics.

I suspect you're trying to draw a distinction between the older, predictable techniques for turning sensor data into an image and the modern impenetrable ones that can hallucinate. I know what you're getting at, but there's not really a clear point where one becomes the other. You can consider demosaicing and "super-res zoom" as both being super-resolution techniques intended to convert large amounts of raw sensor data into an image that's closer to the ground truth. I've even seen some pretty crazy artefacts introduced by an old-fashioned Lanczos-resampling-based demosaicing filter. Albeit, not Ryan Gosling[0].
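For the curious, the simplest demosaicing filter is just a normalised weighted average of the sparse samples of each colour channel - i.e. a mild low-pass filter. A toy bilinear sketch in numpy/scipy (not any vendor's actual pipeline):

    import numpy as np
    from scipy.ndimage import convolve

    def demosaic_bilinear(bayer):
        """Bilinear demosaic of an RGGB Bayer mosaic (H x W float array)."""
        h, w = bayer.shape
        r = np.zeros((h, w)); r[0::2, 0::2] = 1  # red sample locations
        b = np.zeros((h, w)); b[1::2, 1::2] = 1  # blue sample locations
        g = 1 - r - b                            # green checkerboard
        kernel = np.array([[0.25, 0.5, 0.25],
                           [0.5,  1.0, 0.5],
                           [0.25, 0.5, 0.25]])
        out = np.empty((h, w, 3))
        for c, mask in enumerate((r, g, b)):
            # Normalised convolution: average the neighbouring samples of
            # this channel, weighted by distance - effectively a slight blur.
            out[..., c] = (convolve(bayer * mask, kernel, mode='mirror')
                           / convolve(mask, kernel, mode='mirror'))
        return out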

Of course, if you don't like any of this, you can configure phones to produce RAW output, or even pick up a mirrorless, and take full control of the processing pipeline. I've been out of the photography world for a while so I'm probably out of date now, but I don't think DNGs can even store all of the raw data that is now used by Apple/Google in their image processing pipelines. Certainly, I never had much luck turning those RAW files into anything that looked good. Apple have ProRAW which I think is some sort of hybrid format but I don't really understand it.

[0] https://petapixel.com/2020/08/17/gigapixel-ai-accidentally-a...

hatthew · 3 days ago
By my understanding, demosaicing almost always just "blurs" the photo slightly, reducing high-frequency information. Tone mapping is unavoidable, invisible to most people, and usually doesn't change the semantic information within an image (the famous counterexample is, of course, The Dress). Phone cameras in recent years do additional processing to saturate, sharpen, HDR, etc., and I find those distasteful and will happily argue against them. But AI upscaling/enhancement is a step further, and to me feels like a very big step further. It's the first time that an automatic processing step has a very high risk of introducing new (and often incorrect) semantic information that is not available in the original image, the classic example being the Samsung moon.
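For comparison, a global tone-mapping operator like Reinhard's is just a fixed curve applied to every pixel - it redistributes brightness but can't invent detail. A minimal numpy sketch:

    import numpy as np

    def reinhard_tonemap(hdr, exposure=1.0):
        """Global Reinhard operator: maps [0, inf) luminance into [0, 1).
        The same curve is applied to every pixel, so dynamic range is
        compressed but nothing is added that the sensor didn't record."""
        x = np.asarray(hdr) * exposure
        return x / (1.0 + x)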
hatthew commented on Pixel 10 Phones   blog.google/products/pixe... · Posted by u/gotmedium
ge96 · 4 days ago
That fake zoom with AI is gross ugh

If I'm taking a picture of something I want it to be real light-to-pixel action, not some made-up wambo-jambo

hatthew · 4 days ago
I find it kinda scary that this is marketed as "zoom" and "recovering details", when the reality is that it quite literally makes stuff up and hopes you won't notice the difference. You and I know that it's completely fake, but we (or at least I) don't even know how much is faked, and probably 99% of people won't even know that it's fake at all.

How long until someone gets arrested because an AI invented a face that looks like theirs? Hopefully lawyers will know to throw out evidence like that, but the social media hivemind will happily ruin someone's life based on AI hallucinations.

hatthew commented on The new geography of stolen goods   economist.com/interactive... · Posted by u/tlb
rimbo789 · 5 days ago
The volume of containers is unimaginably huge.

Take the Ever Given. It can fit ~20k containers. A “quick” 2-minute check of each would add 40k minutes to loading: 667 hours, or about 28 days. A month, basically.
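Spelled out (capacity figure approximate):

    containers = 20_000        # rough Ever Given capacity
    minutes = containers * 2   # 40,000 minutes of checks
    print(minutes / 60)        # ~667 hours
    print(minutes / 60 / 24)   # ~27.8 days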

In a world where time is money, there's no way they're checking all containers.

hatthew · 4 days ago
The hypothetical I'm imagining is that the trucks/trains going into a port go through something like a strip photography [0] x-ray machine, which doesn't even need the vehicle to stop. A barcode/QR code on the side of the container links to its manifest, and then a human (or AI?) can do a quick sanity check of "oh, the manifest says this container is full of teddy bears, but it sure looks like there's a car in there."

Obviously if it were that easy then somebody would have done it already, but I don't immediately see why it definitely wouldn't work. My guess is that either a fast x-ray machine is implausible, or simply matching the contents to a manifest wouldn't be enough of a deterrent to criminals.

[0]: https://en.wikipedia.org/wiki/Strip_photography
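To make the manifest-matching step concrete, here's a stub of the sanity check I have in mind - every name and function below is hypothetical, invented purely for illustration:

    from dataclasses import dataclass

    @dataclass
    class Manifest:
        container_id: str
        declared: str  # e.g. "teddy bears"

    def classify_scan(xray_image) -> str:
        """Stand-in for the human (or AI?) reading the x-ray."""
        return "car"  # pretend the scan clearly shows a vehicle

    def screen(xray_image, manifest: Manifest) -> bool:
        """True if the scan plausibly matches the declared contents."""
        return classify_scan(xray_image) == manifest.declared

    m = Manifest("MSKU1234567", declared="teddy bears")
    if not screen(None, m):
        print(f"Flag {m.container_id}: declared {m.declared!r}, scan disagrees")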

hatthew commented on The new geography of stolen goods   economist.com/interactive... · Posted by u/tlb
hatthew · 5 days ago
With all the technology that exists today, I'm surprised that we haven't invented something that would make it logistically and economically feasible to do a quick scan of e.g. all containers going into a port.
hatthew commented on What does Palantir actually do?   wired.com/story/palantir-... · Posted by u/mudil
ianks · 10 days ago
I’d be curious to hear a follow-up article about what Palantir doesn’t do. For better or worse, I think we are living in a time where companies should take principled stands about anti-features.

It’s good to build in all of these optional data and privacy knobs, but I fear that’s not enough.

hatthew · 10 days ago
TFA mentions the most important points, which are that Palantir doesn't provide any data or act on any data.
hatthew commented on GPT-5   openai.com/gpt-5/... · Posted by u/rd
fidotron · 17 days ago
Going by the system card at: https://openai.com/index/gpt-5-system-card/

> GPT‑5 is a unified system . . .

OK

> . . . with a smart and fast model that answers most questions, a deeper reasoning model for harder problems, and a real-time router that quickly decides which model to use based on conversation type, complexity, tool needs, and explicit intent (for example, if you say “think hard about this” in the prompt).

So that's not really a unified system then, it's just supposed to appear as if it is.

This looks like they're not training a single big model, but instead have gone off to develop special sub-models and are attempting to gloss over them with yet another model. That's what you resort to only when end-to-end training has become too expensive for you.

hatthew · 17 days ago
I know this is just arguing semantics, but wouldn't you call it a unified system since it has a single interface that automatically interacts with different components? It's not a unified model, but it seems correct to call it a unified system.
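Something like this toy router, in other words - the heuristics and model names here are invented, since OpenAI hasn't published theirs:

    def route(prompt: str) -> str:
        """Pick a backend model from the prompt alone (toy heuristics)."""
        if "think hard" in prompt.lower():  # explicit intent
            return "reasoning-model"
        if len(prompt) > 2000:              # crude complexity proxy
            return "reasoning-model"
        return "fast-model"

    def answer(prompt: str) -> str:
        """One interface, several models: a 'unified system'."""
        return f"[{route(prompt)}] would handle: {prompt!r}"

    print(answer("think hard about this proof"))
    print(answer("what's the capital of France?"))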
hatthew commented on Drivers who appeal school speed zone camera fines almost guaranteed to lose   abcactionnews.com/news/st... · Posted by u/josephcsible
hatthew · 18 days ago
> the signage requirements for school speed zones only require signs to designate when the school zone is in effect. So even if a school speed zone sign states that drivers must slow down “when [the light is] flashing,” that light doesn’t have to actually be flashing for a driver to get cited.

I'm a little confused. If the light isn't flashing, doesn't that mean that the school zone isn't in effect? I don't understand what about the law makes it possible to get cited when the light isn't flashing.

hatthew commented on A criminal enterprise run by monkeys   wsj.com/lifestyle/monkeys... · Posted by u/mathattack
hatthew · 22 days ago
> A photograph taken by one of the thieves themselves.

uh oh

hatthew commented on iPhone 16 cameras vs. traditional digital cameras   candid9.com/phone-camera/... · Posted by u/sergiotapia
CarVac · 25 days ago
Unfortunately, if the phone camera images are processed without oversharpening, the results are extremely soft.

Also, the wide lenses on most phones are actually very heavily distorted nearly to the point of being fisheye, and made rectilinear with processing.

hatthew · 25 days ago
Yeah, and even with sharpening it's noticeably softer when you zoom in on the photo.

For fisheye, I guess it would have been more accurate to say: the perspective distortion is present in both photos and is stronger in the iPhone photo due to its shorter effective focal length, and there is no noticeable fisheye/barrel distortion in the iPhone photo.

hatthew commented on iPhone 16 cameras vs. traditional digital cameras   candid9.com/phone-camera/... · Posted by u/sergiotapia
hatthew · 25 days ago
My only significant gripe with phone cameras is that they oversharpen everything. Sharpening can subjectively make things look better as long as you don't zoom in too much, but it has one significant problem: desaturation. In high-detail, high-contrast areas, e.g. the foreground grass, the sharpening pushes many of the pixels towards black or white, which are, notably, not green. This has the overall effect of desaturating these textures, and is the impetus for my gripe.
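The mechanism is easy to reproduce with a plain unsharp mask (a numpy/scipy sketch, not Apple's actual pipeline):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def unsharp_mask(img, sigma=1.5, amount=2.0):
        """Sharpen an H x W x 3 float image by adding back the high-pass
        residual, then clipping to [0, 1]. In fine high-contrast texture
        the overshoot clips channels at 0 or 1, pulling pixels toward
        black or blown-out white and washing out the average colour."""
        blurred = gaussian_filter(img, sigma=(sigma, sigma, 0))
        return np.clip(img + amount * (img - blurred), 0.0, 1.0)

    # Toy "grass": alternating mid-green and dark rows. After sharpening,
    # the dark rows clip to pure black (zero chroma) and the green channel
    # of the bright rows blows out at 1.0.
    grass = np.tile(np.array([[[0.2, 0.6, 0.1]], [[0.05, 0.1, 0.03]]]), (4, 8, 1))
    sharp = unsharp_mask(grass)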

Also, unless I am mistaken, the iPhone camera doesn't have a fisheye lens; it has a wide-angle rectilinear lens. This doesn't "create distortion that doesn't exist with the real camera", it simply amplifies the natural perspective distortion that you get from projecting the 3D world onto a 2D plane. As others point out, this can easily be remedied by moving further away and zooming in.
