underlines · 2 years ago
JPEGLI = A small JPEG

The suffix -li is used in Swiss German dialects. It forms a diminutive of the root word, added to the end to convey the smallness of the object and a sense of intimacy or endearment.

This obviously comes out of Google Zürich.

Other notable Google projects using Swiss German:

https://github.com/google/gipfeli high-speed compression

Gipfeli = Croissant

https://github.com/google/guetzli perceptual JPEG encoder

Guetzli = Cookie

https://github.com/weggli-rs/weggli semantic search tool

Weggli = Bread roll

https://github.com/google/brotli lossless compression

Brötli = Small bread

billyhoffman · 2 years ago
Google Zürich also did Zopfli, a DEFLATE-compliant compressor that gets better ratios than gzip by taking longer to compress.

Apparently Zopfli = small sweet bread

https://en.wikipedia.org/wiki/Zopfli

occamrazor · 2 years ago
Zopf means “braid” and it also denotes a medium-size bread type, made with some milk and glazed with yolk, shaped like a braid, traditionally eaten on Sunday.
codetrotter · 2 years ago
They should do XZli next :D

And write it in Rust

codetrotter · 2 years ago
> The suffix -li is used in Swiss German dialects

Seems similar to -let in English.

JPEGlet

Or -ito/-ita in Spanish.

JPEGito

(Joint Photographers Experts Grupito)

Or perhaps, if you want to go full Spanish

GEFCito

(Grupito de Expertos en Fotografía Conjunta)

Someone · 2 years ago
https://en.wikipedia.org/wiki/List_of_diminutives_by_languag... lists many more that “could be seen as diminutives”, at least some of which were fairly recently used in forming new words (examples: disk ⇒ diskette, computer ⇒ minicomputer)
sa-code · 2 years ago
Or JPEGchen in high German
cout · 2 years ago
Interesting, I was expecting there to be some connection to the deblocking jpeg decoder knusperli.
JyrkiAlakuijala · 2 years ago
That would give additional savings.
jug · 2 years ago
Their claims about Jpegli seem to make WebP obsolete for lossy encoding? The compression gains cited are similar to those of WebP over JPEG.

Hell, I question if AVIF is even worth it with Jpegli.

It's obviously "better" (higher compression) but wait! It's 1) a crappy, limited image format for anything but basic use with obvious video keyframe roots and 2) terribly slow to encode AND 3) decode due to not having any streaming decoders. To decode, you first need to download the entire AVIF to even begin decoding it, which makes it worse than even JPEG/MozJPEG in many cases despite their larger sizes. Yes, this has been benchmarked.

JPEG XL would still have been worth it, though, because it covers so much more ground than JPEG/Jpegli, and it has a streaming decoder like a sensible format geared for Internet use, as well as progressive decoding support for mobile networks.

But without that one? Why not just stick with JPEGs, then?

lonjil · 2 years ago
> Their claims about Jpegli seem to make WebP obsolete for lossy encoding? The compression gains cited are similar to those of WebP over JPEG.

I believe Jpegli beats WebP for medium to high quality compression. I would guess that more than half of all WebP images on the net would definitely be smaller as Jpegli-encoded JPEGs of similar quality. And note that Jpegli is actually worse than MozJPEG and libjpeg-turbo at medium-low qualities. Something like libjpeg-turbo q75 is the crossover point I believe.

> Hell, I question if AVIF is even worth it with Jpegli.

According to another test [1], for large (like 10+ Mpix) photographs compressed with high quality, Jpegli wins over AVIF. But AVIF seems to win for "web size" images. Though, as for point 2 in your next paragraph, Jpegli is indeed much faster than AVIF.

> JPEG XL would still have been worth it, though, because it covers so much more ground than JPEG/Jpegli, and it has a streaming decoder like a sensible format geared for Internet use, as well as progressive decoding support for mobile networks.

Indeed. At a minimum, JXL gives you another 20% size reduction just from the better entropy coding.

[1] https://cloudinary.com/blog/jpeg-xl-and-the-pareto-front

ksec · 2 years ago
> I would guess that more than half of all WebP images on the net would definitely be smaller as Jpegli-encoded JPEGs of similar quality.

That was what I expected a long time ago, but it turns out to be a false assumption. According to Google, based on data from Chrome, 80%+ of images on the web are encoded at 1.0+ bits per pixel (bpp).
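
For reference, bits per pixel is just compressed size over pixel count; a minimal sketch (the file name is purely illustrative):

    import os
    from PIL import Image

    # Illustrative helper: bits per pixel = compressed bytes * 8 / pixel count.
    def bits_per_pixel(path: str) -> float:
        with Image.open(path) as im:
            width, height = im.size
        return os.path.getsize(path) * 8 / (width * height)

    print(bits_per_pixel("photo.jpg"))  # per the Chrome data, most web images land at 1.0+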

miragecraft · 2 years ago
> And note that Jpegli is actually worse than MozJPEG and libjpeg-turbo at medium-low qualities. Something like libjpeg-turbo q75 is the crossover point I believe.

May I ask how you came to this conclusion?

The Cloudinary article appears to show jpegli beating mozjpeg and libjpeg-turbo even at the "Medium" setting (fewer bits per pixel).

ksec · 2 years ago
I share a similar view. I'd even go as far as to say jpegli (and its potential with an XYB ICC profile) makes JPEG XL just not quite good enough to be worth the effort.

The good thing is that the author of XL (Jyrki) claims there is potential for 20-30% bitrate savings at the low end. So I hope the JPEG XL encoder continues to improve.

JyrkiAlakuijala · 2 years ago
You can always use JPEG XL's lossless JPEG1 recompression to get some savings at the high-quality end, too, if you trust the quality-decision heuristics in jpegli/guetzli/another JPEG encoder more than the JPEG XL encoder itself.
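
For example, such a recompression round trip can be driven with the reference libjxl tools; a rough sketch, assuming cjxl and djxl are installed (by default cjxl transcodes a JPEG losslessly, so the original bitstream remains reconstructible):

    import subprocess

    # Recompress an existing JPEG into JPEG XL without any quality loss.
    subprocess.run(["cjxl", "photo.jpg", "photo.jxl"], check=True)

    # Rebuild the original JPEG bitstream from the JPEG XL container.
    subprocess.run(["djxl", "photo.jxl", "photo_reconstructed.jpg"], check=True)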

We also provide libjxl-tiny, a ~7000-line encoder that is more similar to JPEG encoders in complexity and coding approach, and a great starting point for building a hardware encoder.

themerone · 2 years ago
It's not just about the compression ratio. JPEG XL's improvements in generation loss are reason enough for it to be the default format for the web.
jug · 2 years ago
Yes, I agree, and I think there is a hurdle in mucking with file formats alone because it always affects interoperability somewhere in the end. That also needs to be accounted for: the advantages have to outweigh this downside, because it is a downside. I still kind of want JPEG XL, but I'm starting to question how much of that is simply me being a geek who wants tech to be as good as possible rather than taking a pragmatic view, and I didn't question this as much before Jpegli.
bArray · 2 years ago
> In order to quantify Jpegli's image quality improvement we enlisted the help of crowdsourcing raters to compare pairs of images from Cloudinary Image Dataset '22, encoded using three codecs: Jpegli, libjpeg-turbo and MozJPEG, at several bitrates.

Looking further [1]:

> It consists in requiring a choice between two different distortions of the same image, and computes an Elo ranking (an estimate of the probability of each method being considered higher quality by the raters) of distortions based on that. Compared to traditional Opinion Score methods, it avoids requiring test subjects to calibrate their scores.

This seems like a bad way to evaluate image quality. Humans tend to prefer more highly saturated colours, which would be a distortion of the original image. If I had a simple kernel that turned any image into a GIF cartoon and then had it rated by cartoon enthusiasts, I'm sure I could prove GIF is better than JPEG.

I think that to produce something more fair, it would need to be "Given the following raw image, which of the following two images appears to better represent the above image?" The allowed answers should be "A", "B" and "unsure".

Elo would likely be less appropriate. I would also like to see an analysis of which images were most influential in deciding which approach is better, and why. Is it colour related, artefact related, information-frequency related? I'm sure they could gain some deeper insight into why one method is favoured over the other.

[1] https://github.com/google-research/google-research/blob/mast...

zond · 2 years ago
The next sentence says "The test subject is able to flip between the two distortions, and has the original image available on the side for comparison at all times.", which indicates that the subjects weren't shown only the distortions.
bArray · 2 years ago
The change was literally just made: https://github.com/google-research/google-research/commit/4a...

It appears this was in response to Hacker News comments.

Permik · 2 years ago
One other thing to control for is the subpixel layout of the raters' displays, which is almost always forgotten in these studies.
bArray · 2 years ago
I did think about this - but then I thought the variation in displays/monitors and people would enhance the experiment.
geraldhh · 2 years ago
> Humans tend to prefer more highly saturated colours, which would be a distortion of the original image.

Android with Google Photos did/does this, whereas Apple went with enhanced contrast.

As far as I can tell, they're both wrong, but one mostly notices the 'distortion' when used to the other.

Waterluvian · 2 years ago
This is the kind of realm I'm fascinated by: taking an existing chunk of something, respecting the established interfaces (i.e. not asking everyone to support yet another format), and seeing if you can squeeze out an objectively better implementation. It's such a delicious kind of performance gain because it's pretty much a "free lunch" with a very comfortable upgrade story.
terrelln · 2 years ago
I agree, this is a very exciting direction. We shouldn’t let existing formats stifle innovation, but there is a lot of value in back porting modern techniques to existing encoders.
aendruk · 2 years ago
Looks like it’s not very competitive at low bitrates. I have a project that currently encodes images with MozJPEG at quality 60 and just tried switching it to Jpegli. When tuned to produce comparable file sizes (--distance=4.0) the Jpegli images are consistently worse.
ruuda · 2 years ago
What is your use case for degrading image quality that much? At quality level 80 the artifacts are already significant.
aendruk · 2 years ago
Thumbnails at a high pixel density. I just want them up fast. Any quality that can be squeezed out of it is a bonus.
egorfine · 2 years ago
Thumbnails. I typically serve them at 2x resolution but extremely heavily compressed. Still looks good enough in browser when scaled down.
Brian_K_White · 2 years ago
I apologize that this will seem like (well, it IS, frankly) more reaction than is really justified, sorry for that. But this question is an example of a thing people commonly do that I think is not good, and I want to point it out once in a while when I see it:

There are infinite use-cases for everything besides one's own tiny personal experience and imagination. It's not remarkable that someone tested for the best version of something you personally don't have a use for.

Pretend they hadn't answered the question. The answer is it doesn't matter.

They stated a goal of x, and compared present-x against testing-x and found present-x was the better-x.

"Why do they want x when I only care about y?" is irrelevant.

I mean you may be idly curious and that's not illegal, but you also stated a reason for the question which makes the question not idle but a challenge (the "when I only care about y" part).

What I mean by "doesn't matter" is, whatever their use-case is, it's automatically always valid, and so it doesn't change anything, and so it doesn't matter.

Their answer happened to be something you probably agree is a valid use-case, but that's just happenstance. They don't have to have a use-case you happen to approve of or even understand.

JyrkiAlakuijala · 2 years ago
I believe they should be roughly the same density on a photography corpus at quality 60. Consider filing an issue if some image is worse with jpegli.
mgraczyk · 2 years ago
> Jpegli can be encoded with 10+ bits per component.

How are the extra bits encoded?

Is this the JPEG_R/"Ultra HDR" format, or has Google come up with yet another metadata solution? Something else altogether?

Ultra HDR: https://developer.android.com/media/platform/hdr-image-forma...

lonjil · 2 years ago
It's regular old JPEG1. I don't know the details, but it turns out that "8 bit" JPEG actually has enough precision in the format to squeeze out another 2.5 bits, as long as both the encoder and the decoder use high precision math.
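
A toy numeric sketch of where that headroom comes from (my own illustration, not Jpegli code): the DC coefficient of an 8x8 block is eight times the block mean, so with a quantization step of 1 the mean is stored at 1/8-of-a-level granularity, which a high-precision decoder can keep instead of rounding away.

    # Toy illustration: the JPEG DC coefficient equals 8x the block mean
    # (after the usual -128 level shift), so a quantization step of 1 stores
    # the mean of a flat block at 1/8-grey-level granularity.
    block_mean = 100.3                   # a tone between two 8-bit levels
    dc = 8.0 * (block_mean - 128.0)      # DCT of a flat block: only the DC term
    stored = round(dc)                   # quantization table entry = 1
    decoded_mean = stored / 8.0 + 128.0  # a high-precision decoder keeps the fraction
    print(decoded_mean)                  # 100.25, i.e. error 0.05, well under one 8-bit step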
actionfromafar · 2 years ago
Wow, this is the first time I've heard about that. I wonder if Lightroom uses high-precision math.
donatzsky · 2 years ago
This has nothing to do with Ultra HDR. It's "simply" a better JPEG encoder.

Ultra HDR is a standard SDR JPEG + a gain map that allows the construction of an HDR version. Specifically, it's an implementation of Adobe's Gain Map specification, with some extra (seemingly pointless) Google bits. Adobe Gain Map: https://helpx.adobe.com/camera-raw/using/gain-map.html
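
Conceptually, applying a gain map is just a per-pixel exposure boost of the SDR base image; a simplified sketch (the real Adobe/Ultra HDR math adds offsets and a gamma on the map, so the names and formula here are only illustrative):

    import numpy as np

    # Simplified gain-map application (illustrative, not the exact Adobe math).
    # gain_map holds values in [0, 1] selecting a per-pixel boost between
    # 2**min_log2_boost and 2**max_log2_boost, applied in linear light.
    def apply_gain_map(sdr_linear, gain_map, min_log2_boost, max_log2_boost):
        log2_boost = min_log2_boost + gain_map * (max_log2_boost - min_log2_boost)
        return sdr_linear * np.exp2(log2_boost)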

mgraczyk · 2 years ago
Thanks, I was on the team that did Ultra HDR at Google so I was curious if it was being used here. Didn't see anything in the code though so that makes sense.
JyrkiAlakuijala · 2 years ago
Ultra HDR can have two jpegs inside, one for the usual image and another for the gain-map.

Hypothetically, both jpegs can be created with jpegli.

Hypothetically, both Ultra HDR jpegs can be decoded with jpegli.

In theory, jpegli would remove the 8-bit striping (banding) that would otherwise be present in Ultra HDR.

I am not aware of jpegli-based Ultra HDR implementations.

My personal preference would be a single Jpegli JPEG and very fast, high-quality local tone mapping (HDR source, tone-mapped to SDR). Some industry experts are excited about Ultra HDR, but I consider it likely too complicated to get right in editing software and automated image-processing pipelines.

Zardoz84 · 2 years ago
What is the point of that complexity if JPEG XL can store HDR images?
simonw · 2 years ago
> High quality results. When images are compressed or decompressed through Jpegli, more precise and psychovisually effective computations are performed and images will look clearer and have fewer observable artifacts.

Does anyone have a link to any example images that illustrate this improvement? I guess the examples would need to be encoded in some other lossless image format so I can reliably view them on my computer.

n2d4 · 2 years ago
You can find them in the mucped23.zip file linked here (encoded as PNG): https://github.com/google-research/google-research/tree/mast...
simonw · 2 years ago
Thanks - I downloaded that zip file (460MB!) and extracted one of the examples into a Gist: https://gist.github.com/simonw/5a8054f18f9ea3c560b628b16b00f...

Here's an original: https://gist.githubusercontent.com/simonw/5a8054f18f9ea3c560...

And the jpegli-q95- version: https://gist.githubusercontent.com/simonw/5a8054f18f9ea3c560...

And the same thing with mozjpeg-a95 https://gist.githubusercontent.com/simonw/5a8054f18f9ea3c560...

masfuerte · 2 years ago
You said linked but like a fool I went looking for a zip in the repository. This is the link:

https://cloudinary.com/labs/cid22/mucped23.zip (460MB)

JyrkiAlakuijala · 2 years ago
https://twitter.com/jyzg/status/1622890389068718080

Some earlier results. Perhaps these were with the XYB color space; I don't remember...

andrewla · 2 years ago
As an aside, jpeg is lossless on decode -- once encoded, all decoders will render the same pixels. Since this library produces a valid jpeg file, it should be possible to directly compare the two jpegs.
nigeltao · 2 years ago
> all decoders will render the same pixels

Not true. Even just within libjpeg, there are three different IDCT implementations (jidctflt.c, jidctfst.c, jidctint.c) and they produce different pixels (it's a classic speed vs quality trade-off). It's spec-compliant to choose any of those.

A few years ago, in libjpeg-turbo, they changed the smoothing kernel used for decoding (incomplete) progressive JPEGs, from a 3x3 window to 5x5. This meant the decoder produced different pixels, but again, that's still valid:

https://github.com/libjpeg-turbo/libjpeg-turbo/commit/6d91e9...
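
An easy way to see this for yourself is to decode the same file with two independent decoders and diff the pixels; a small sketch assuming Pillow and OpenCV are installed (they may agree exactly if both wrap the same libjpeg build, but the spec only bounds IDCT accuracy rather than requiring bit-exact output):

    import numpy as np
    import cv2
    from PIL import Image

    # Decode one JPEG with two independent decoders and compare the pixels.
    a = np.asarray(Image.open("photo.jpg").convert("RGB")).astype(int)
    b = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB).astype(int)

    print("max per-channel difference:", np.abs(a - b).max())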

JyrkiAlakuijala · 2 years ago
It is approximately correct. The rendering is standards-compliant without being pixel-perfect, and most decoders make different compromises and render slightly different pixels.
Mr_Minderbinder · 2 years ago
It would help if the authors explained how exactly they used the Elo rating system to evaluate quality, since this seems like a non-standard and rather unusual use case for it. I am guessing that if an image is rated better than another, that counts as a "win"?
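
For what it's worth, the obvious formulation is the generic pairwise Elo update, with each "A looks better than B" vote counted as a win for whatever produced A; a sketch of my guess at the setup, not the authors' actual code:

    # Generic Elo update from one pairwise preference (a guess at the setup,
    # not the study's code): a vote for image A is a "win" for its encoder.
    def elo_update(rating_a, rating_b, a_won, k=32.0):
        expected_a = 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))
        score_a = 1.0 if a_won else 0.0
        return (rating_a + k * (score_a - expected_a),
                rating_b + k * ((1.0 - score_a) - (1.0 - expected_a)))

    # e.g. start every encoder at 1500 and fold in each rater's choice:
    r_jpegli, r_mozjpeg = elo_update(1500.0, 1500.0, a_won=True)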

Finally, writing "ELO" instead of "Elo" is incorrect (this is just one of my pet peeves but indulge me nevertheless). This is some guy's name not an abbreviation, nor a prog rock band from the 70's! You would not write "ELO's" rating system for the same reason you wouldn't write "DIJKSTRA's" algorithm.

summerlight · 2 years ago