Articles about the merits of JPEG XL come up with some regularity on Hacker News, as if to ask, "why aren't we all using this yet?"
This one has a section on animation and cinemagraphs, saying that video formats like AV1 and HEVC are better suited, which makes sense. Here's my somewhat off-topic question: is there a video format that requires support for looping, like GIFs? GIF is a pretty shoddy format for video compared to a modern video codec, but if a GIF loops, you can expect it to loop seamlessly in any decent viewer.
With videos it seems you have to hope that the video player has an option to loop, and oftentimes there's a brief delay at the end of the video before playback resumes at the beginning. It would be nice if there were a video format that included seamless looping as part of the spec -- but as far as I can tell, there isn't one. Why not? Is it just assumed that anyone who wants looping video will configure their player to do it?
Besides looping, video players also deal kinda badly with low-framerate videos. Meanwhile, (AFAIK) GIFs can have arbitrary frame durations and it generally works fine.
> GIFs can have arbitrary frame durations and it generally works fine.
But we shouldn't be using animated GIFs in 2024.
The valid replacement for the animated GIF is an animated lossless compressed WebP. File sizes are much more controlled and there is no generational loss when it propagates the internets as a viral loop (if we all settled on it and did not recompress it in a lossy format).
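If you want to try it, here's a minimal sketch of converting a GIF to a lossless animated WebP with Pillow; the file names are placeholders and the per-frame delay handling is simplified (Pillow can also take a list of per-frame durations):

    from PIL import Image, ImageSequence

    # Sketch: animated GIF -> losslessly compressed animated WebP.
    # "input.gif" / "output.webp" are placeholder names.
    gif = Image.open("input.gif")
    frames = [frame.convert("RGBA") for frame in ImageSequence.Iterator(gif)]
    frames[0].save(
        "output.webp",
        save_all=True,
        append_images=frames[1:],
        duration=gif.info.get("duration", 100),  # frame delay in ms
        loop=0,                                   # 0 = loop forever, like a GIF
        lossless=True,                            # no generational loss on re-shares
    )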
Most modern video container formats support arbitrary frame durations, using a 'presentation timestamp' on each frame. After all, loads of things these days use streaming video, where you need to handle dropped frames gracefully.
Of course, not every video player supports them well. Which is kinda understandable: I can see how expecting a constant 30 frames per second from a 30fps video would make things a lot simpler and work right 99.9% of the time.
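As a toy illustration of how per-frame timestamps carry arbitrary durations (assuming a millisecond timebase, which is an arbitrary choice here):

    # Sketch: per-frame durations become presentation timestamps (PTS).
    durations_ms = [40, 40, 1500, 40, 3000]   # frames can last however long they like
    pts = [0]
    for d in durations_ms[:-1]:
        pts.append(pts[-1] + d)
    print(pts)  # [0, 40, 80, 1580, 1620] -- frame i is shown until the next PTS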
> With videos it seems you have to hope that the video player has an option to loop
<video playsinline muted loop> should be nearly as reliable as a GIF in that regard.
The one exception that I've found is that some devices will prevent videos from autoplaying if the user has their battery-saver on, leading to some frustrating bug reports.
The same thing stood out to me. With the popularity of animated GIFs, it's disappointing and ridiculous for a new Web-friendly image format to omit at least a simple multi-image/looping facility.
As for your question about video looping: Nothing prevents that, although I don't know of a container format that has a flag to indicate that it should be looped. Players could eliminate the delay on looping by caching the first few frames.
There is a lossless ultra-packer for existing JPEG files. It's completely reversible, you can get byte-for-byte identical JPEGs back.
Then there is "VarDCT" mode, which acts like JPEG, lossy Webp, or video codecs.
Then there is "Modular Mode", a completely different kind of codec that has different kinds of compression artifacts than JPEG-like codecs. The compression artifacts you see tend to be more like sections becoming more pixelated, or slight color differences. Strong edges don't have ringing artifacts. Modular mode mainly is used for lossless compression, but also allows lossy compression.
Technically it also had a fourth :^) [0], but it was spun out into a separate project of its own, jpegli [1]: JPEG, but using some tricks from JPEG XL. These include spatially adaptive quantization, quantization matrices that better preserve psychovisual detail, more efficient color spaces, and HDR (10+ bit depth) support [2].

[0] https://github.com/libjxl/libjxl/tree/main/lib/jpegli
[1] https://github.com/google/jpegli
[2] https://opensource.googleblog.com/2024/04/introducing-jpegli...
Pretty good news! I imagine it'll take a while before libjxl and jpegli stop both supplying a cjpegli binary, so that'll be mildly annoying at the start. But hopefully this way it'll be adopted more quickly, accept more input formats, and image software will switch over to native jpegli export instead of using the libjpeg-compatible controls.
It's really excellent software. Its default output quality is good enough for long-term storage, while the file size is acceptable for mobile data and cloud storage of pictures in most countries. The fact that it produces progressive pictures by default also helps when quickly swiping through a whole album of vacation pictures stored on cloud storage, and its progressive output actually reduces size rather than adding to it. And it's compatible with everything, so now I just throw everything lossy I produce through its default settings until JXL becomes natively supported in Chrome and Windows.
The lossless JPEG recompression is a combination of VarDCT and some additional metadata. In fact, VarDCT should be considered a (very large) superset of JPEG1 compression. The distinction between VarDCT and Modular in JPEG XL is relatively clear, but in reality VarDCT still uses modular encoding for various data anyway, so it is hard to consider one without the other. (Compare with Opus, which also uses two main mechanisms but mixes them so well that they can't really be separated.)
It’s supported, but unfortunately no 3rd party APIs yet. It’s a bit surprising they wouldn’t ship them on launch to encourage adoption.
I make photography/camera apps and would like to support JPEG XL natively (without having to rely on 3rd party code) so I hope it’s something they add soon!
> It’s a bit surprising they wouldn’t ship them on launch to encourage adoption.
Because Apple is all in on HEIC/HEVC. The "high efficiency" codecs that require up to a second on an M1 Pro to render an image. A comparable image in PNG renders instantly.
This article is from 2020. I think so far it has aged reasonably well.
But you might be interested in my more recent articles too. You can find those here: https://cloudinary.com/blog/author/jon_sneyers
> HEIC and AVIF can handle larger [than 35MP, 8MP respectively] images but not directly in a single code stream. You must decompose the image into a grid of independently encoded tiles, which could cause discontinuities at the grid boundaries. [demo image follows].
The newest Fujifilm X cameras have HEIC support but also added 40MP sensors--does this mean they are having to split their HEIC outputs into two encoding grids?
It seems like the iPhone avoided this, as 48MP output is only available as a "ProRAW" i.e. RAW+JPEG, which previously used regular JPEG and now JPEG-XL, but never HEIC.
I recently wrote a script that encodes an image to fall within a size range [0]. After toying with it, I noticed that smaller AVIF files are completely fine for web use, but identically sized JPEG XL files are not. Given ubiquitous browser support for AVIF [1], unless JPEG XL gets much better at smaller sizes, I reluctantly agree that Chrome's call to drop JPEG XL is the right one.

[0] https://gist.github.com/theandrewbailey/4e05e20a229ef2f2c1f9...
[1] https://caniuse.com/avif
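Not the linked script, but a minimal sketch of the general idea: binary-search the quality setting until the output lands in a byte range. Pillow's JPEG encoder stands in for the AVIF/JXL encoders here, and the file name is a placeholder:

    from io import BytesIO
    from PIL import Image

    def encode_within(img, lo=40_000, hi=60_000, q_min=1, q_max=95):
        """Binary-search JPEG quality until the encoded size falls in [lo, hi] bytes."""
        last = None
        while q_min <= q_max:
            q = (q_min + q_max) // 2
            buf = BytesIO()
            img.save(buf, format="JPEG", quality=q)
            size = buf.tell()
            last = (buf.getvalue(), q)
            if size < lo:
                q_min = q + 1      # too small: room for more quality
            elif size > hi:
                q_max = q - 1      # too big: reduce quality
            else:
                return last        # within the target range
        return last                # last attempt if no quality hits the range

    data, q = encode_within(Image.open("photo.png").convert("RGB"))
    print(f"quality={q}, {len(data)} bytes")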
In what environment do you work where you need such low quality images though? In my web environment I only want the highest quality I can get at a reasonable size and I've never been interested in slightly less awful looking tiny images. In another comment I wrote about using jpegli at its default distance of 1 for everything and being happy with that size, so maybe I work in a completely different environment to you.
JPEG XL is supposed to have a progressive mode. Can you read a lower resolution from the file by reading only part of the file, as you can with JPEG 2000? Is there a header which tells you how much of the file to read for the desired resolution?
You have to first read the image header, an optional ICC profile and finally a portion of the first frame. This first frame might actually be a preview generated by an encoder, but should be fine for our purpose and it's not hard to seek to subsequent frames anyway. The frame itself contains its own header and all offsets to per-frame sections ("TOC"), while there is always one LfGlobal section that contains the heavily downscaled---8x or more---image in the modular bitstream, even when the frame itself uses VarDCT.
Any higher resolution would require some support from the encoder. The prime mechanism relevant here is a version of the modified Haar transform named Squeeze, which generates two half-sized images from one source image. As the two output images are placed in distinct sections, only one of them is needed for low-fidelity decoding. If the encoder didn't do any transformation, however (often the case for VarDCT images), then all sections would be required regardless of the target resolution.
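To give a flavour of the idea, here's a generic Haar-style split, not the exact Squeeze transform from the spec:

    import numpy as np

    # One squeeze step on a 1-D signal: a half-size "average" image plus a
    # half-size "residual" image. A preview decoder can stop after the averages.
    def squeeze(x):
        a, b = x[0::2].astype(float), x[1::2].astype(float)
        return (a + b) / 2, a - b          # averages, residuals

    def unsqueeze(avg, res):
        a, b = avg + res / 2, avg - res / 2
        out = np.empty(avg.size * 2)
        out[0::2], out[1::2] = a, b
        return out

    x = np.array([10, 12, 200, 202, 50, 54, 7, 9])
    avg, res = squeeze(x)
    print(avg)                                  # [ 11. 201.  52.   8.] -- a 2x downscale
    assert np.allclose(unsqueeze(avg, res), x)  # residuals restore full resolution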
Therefore it is technically possible, and in fact libjxl does support partial decoding by rendering a partial bitstream, but anything more than that would be surprisingly complex. For example, how many bytes are needed to ensure that we have at least an 8x downscaled image? This generally needs the TOC, and yet a pathological encoder could put the LfGlobal section at the very end of the frame to mess with decoders (though no such encoder is known at the moment). Any transformation, not just Squeeze, also has to be accounted for to ensure that all of them will produce the wanted resolution once combined. Since the ICC profile and TOC already require most of the entropy coding machinery except for meta-adaptive trees, even the calculation of the number of required bytes already needs about a third to a half of a full decoder, in my estimate from building J40.
That said, I'm not very sure this complexity could have been radically reduced without introducing inefficiency in the first place. In fact I've just described what I wanted when I started to build J40! I think there was an informal agreement that the ICC profile could have been made skippable, but you still need all the same machinery for decoding the TOC anyway. Transformation is a vital part of compression and can't be easily removed or replaced. So any such tool would definitely be possible, but necessarily complicated, to build.
Something seems wrong in this article.
The side-by-side comparison shows 4 formats:

· Original PNG image (2.6 MB)
  · actual file: "high_fidelity.png" (298,840 bytes, format JPEG)
· JPEG XL (default settings, 53 KB): indistinguishable from the original
  · actual file: "high_fidelity.png.jxl.png" (3,801,830 bytes, format PNG)
· WebP (53 KB): some mild but noticeable color banding along with blurry text
  · actual file: "high_fidelity_webp.png" (289,605 bytes, format PNG)
· JPEG (53 KB): strong color banding, halos around the text, small text hard to read
  · actual file: "jpeg_high_fidelity.jpg" (52,911 bytes, format JPEG)
The comparison does not make any sense; everything is just wrong. Also, when encoding the large original PNG image to AVIF, the result is only 20,341 bytes with no visual change, see: http://intercity-vpn.de/files/2024-10-27/upload/
I guess that is because the noise from the lossy encoding creates more entropy, which then has to be losslessly encoded as PNG, pushing the file size above the original?
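A quick way to see that effect, with zlib standing in for PNG's lossless compression (the sizes are illustrative, not measurements of the files above):

    import random
    import zlib

    # High-entropy data compresses far worse losslessly than smooth data.
    smooth = bytes(i % 256 for i in range(100_000))
    noisy = bytes(random.randrange(256) for _ in range(100_000))
    print(len(zlib.compress(smooth, 9)), len(zlib.compress(noisy, 9)))
    # The smooth signal shrinks dramatically; the noise barely compresses at all.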
> I don't know of a container format that has a flag to indicate that it should be looped

HEVC does not have such a flag. Quite unfortunate.
I guess TIFF is still the hot mess it always has been too... lol =3
edit: previous discussion about it https://news.ycombinator.com/item?id=41598170
Apple supporting it surely has to be a signal to begin wider adoption!?
If it becomes used in Firefox, maybe there's a chance that Google would see the benefit in picking it up?
[1] https://github.com/mozilla/standards-positions/pull/1064
> potentially introduce memory safety vulnerabilities across the myriad of applications that would eventually need to support it
Right, new code, new vulnerabilities.
> Can you read a lower resolution from the file by reading only part of the file, as you can with JPEG 2000?

It's a standard feature of JPEG 2000, but JPEG 2000 decoders are rare, and either expensive or slow and buggy.
> actual file: "high_fidelity.png.jxl.png" (3,801,830 bytes, format PNG)

That means the original was

- lossily converted to JXL, measured as 53 KB
- then losslessly converted to PNG to be displayed on the website

And your AVIF is certainly not without visual changes. The colours are off and there is visible ringing.