I've been building a WordPress plugin that converts JPEG to AVIF on the local server (since everyone else seems to want to sell a cloud conversion service).
As a photographer who wants high-quality photos on my portfolio website, I love how AVIF respects color more than WebP does.
JPEG XL is superior in file size, image quality, and the computation required for equivalent-quality encoding; it has no arbitrary resolution caps; its progressive decoding also lets you create 'thumbnails' or resizes just by cutting the byte stream; and it can losslessly recompress legacy JPEG files so they benefit from the newer compression. The only benchmark where AVIF bests it is at abysmally low quality levels that no one genuinely uses.
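To illustrate the "cutting the byte stream" idea: with a progressive codec, a truncated file still decodes into a lower-detail preview, so a smaller variant is just a byte-prefix slice, with no re-encode. A minimal sketch (the stand-in bytes and sizes here are made up; actually rendering the partial stream requires a progressive-capable decoder, which is outside this sketch):

```python
# Sketch: derive a smaller "preview" variant of a progressively coded image
# by keeping only a prefix of its byte stream. A progressive-capable decoder
# can render such a partial file at reduced detail; no re-encode is needed.

def truncate_stream(data: bytes, fraction: float) -> bytes:
    """Keep roughly the first `fraction` of the coded byte stream."""
    keep = max(1, int(len(data) * fraction))
    return data[:keep]

# Stand-in bytes for demonstration (a real use would read e.g. a .jxl file):
full = bytes(range(256)) * 40        # 10240 "coded" bytes
preview = truncate_stream(full, 0.25)
print(len(full), len(preview))       # 10240 2560
```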
#!/bin/sh
# originally from https://jpegxl.info/images/precision-machinery-shapes-golden-substance-with-robotic-exactitude.jpg
URL1="http://intercity-vpn.de/files/2025-10-04/upload/precision-machinery-shapes-golden-substance-with-robotic-exactitude.png"
URL2="http://intercity-vpn.de/files/2025-10-04/upload/image-png-all-pngquant-q13.png"
curl "$URL1" -so test.png
curl "$URL2" -so distorted.png
# https://github.com/cloudinary/ssimulacra2/tree/main
ssimulacra2 test.png distorted.png
5.90462597
# https://github.com/gianni-rosato/fssimu2
fssimu2 test.png distorted.png
2.17616860

If you run the `validate.py` script available in the repo, you should see correlation numbers similar to what I've pre-tested and made available in the README: fssimu2 achieves 99.97% linear correlation with the reference implementation's scores.
fssimu2 is still missing some functionality (like ICC profile reading), but the goal was to produce a production-oriented implementation that is just as useful while being much faster; for example, the lower memory footprint and speed improvements make fssimu2 far more practical inside a target-quality loop. For research-oriented use cases where the exact SSIMULACRA2 score is required, the reference implementation is a better choice. It is worth evaluating which case applies to you: an implementation that is 99.97% accurate is likely just as useful if you are doing quality benchmarks, target-quality encoding, or anything else where SSIMULACRA2's correlation to subjective human ratings matters more than exactness to the reference.
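As a concrete example of the target-quality use case: bisect the encoder's quality setting, re-scoring each candidate with a metric such as fssimu2. Everything here is a sketch; `encode_and_score` is a hypothetical stand-in (a real loop would invoke an encoder and the metric via subprocess), not fssimu2's actual interface:

```python
# Sketch of a target-quality loop: bisect the encoder quality setting until
# the metric score reaches the target. `encode_and_score` is a stand-in; a
# real implementation would shell out to an encoder and to fssimu2.

def find_quality(encode_and_score, target: float,
                 lo: int = 1, hi: int = 100) -> int:
    """Assumes the score is non-decreasing in quality. Returns the lowest
    quality whose score meets the target (or hi if none does)."""
    while lo < hi:
        mid = (lo + hi) // 2
        if encode_and_score(mid) >= target:
            hi = mid          # good enough: try a lower quality
        else:
            lo = mid + 1      # too lossy: raise the quality
    return lo

# Stand-in scoring model, monotone in quality, for demonstration only:
fake_score = lambda q: 40 + 0.55 * q
print(find_quality(fake_score, target=75.0))  # 64
```

Each probe costs one encode plus one metric run, so a fast metric implementation directly shortens the loop: roughly seven probes cover a 1-100 quality range.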
> The error will be much smaller than the error between ssimu2 and actual subjective quality, so I wouldn't worry about it.