There are tons of other examples like this. It’s very easy to get tricked by Google ads if you aren’t suspecting a scam.
This bit from the article made me laugh ruefully though: "it's as simple as buying some black paper and a white gel pen." You can get some beautiful effects with white ink on black paper but it is notoriously difficult to get looking good. White ink is tricky stuff. But that's part of the fun!
In a sentence, color science is a science.
The words you are using have technical meanings.
When we say "Oklab isn't a perceptually accurate color system", we are not saying "it is bad" - we are saying "it is a singular matmul that is meant to imitate a perceptually accurate color system" -- and that really matters, really -- Google doesn't launch Material 3 dynamic color if we just went in on that.
The goal was singular matmul. Not perceptual accuracy.
Let me give you another tell that something is really off, one you'll understand intuitively.
People love quoting back the Oklab blog post, you'll also see in a sibling comment something about gradients and CAM16-UCS.
The author took two colors across the color wheel, blue and yellow, then claimed that because the CAM16-UCS gradient has gray in it, Oklab is better.
That's an absolutely insane claim.
Blue and yellow are across the color wheel from each other.
Therefore, a linear gradient between the two has to pass through the center of the color wheel.
Therefore a gradient, i.e. a lerp, will have gray in it -- if it didn't, that would be really weird and indicate some sort of fundamental issue with the color modeling.
So of course, Oklab doesn't have gray in the blue-yellow gradient, and this is written up as a good quality.
If they had known what they were talking about at the time, they wouldn't have done the CAM16-UCS gradient as a lerp; they would have used the standard CSS gradient technique of "rotating" to the new point.
Because that's how you avoid gray.
Not by making up a new color space, writing it up with a ton of misinfo, then leaving it up without clarification, so otherwise-smart people end up completely confused for years, repeating either the blog post or "nothing's perfect" ad nauseam as an excuse to never engage with anything past it. They walk away with the mistaken understanding that a singular matmul somehow magically blew up 50 years of color science.
I just hope this era passes within my lifetime. HSL was a tragedy. This will be worse if it leaves actual color science filed away in people's heads as some fringe, slow thing.
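To make the lerp-vs-rotate distinction concrete, here is a minimal TypeScript sketch of both strategies in an opponent-axis space. The Lab type is a stand-in for any Lab-like space (CIELAB, CAM16-UCS, Oklab) rather than a specific library API; the geometry, not the model, is the point.

    // Two ways to interpolate between colors in a Lab-like opponent space.

    type Lab = { L: number; a: number; b: number };

    // Straight lerp: for near-complementary colors, the (a, b) vector passes
    // close to the origin, i.e. through gray. That is correct behavior for a
    // perceptual space, not a flaw.
    function lerpLab(c0: Lab, c1: Lab, t: number): Lab {
      return {
        L: c0.L + (c1.L - c0.L) * t,
        a: c0.a + (c1.a - c0.a) * t,
        b: c0.b + (c1.b - c0.b) * t,
      };
    }

    // Polar ("rotate the hue") interpolation: convert (a, b) to chroma + hue,
    // interpolate those, and convert back. Chroma never collapses toward zero,
    // so the gradient stays colorful -- this is how you actually avoid gray.
    function rotateLab(c0: Lab, c1: Lab, t: number): Lab {
      const chroma0 = Math.hypot(c0.a, c0.b);
      const chroma1 = Math.hypot(c1.a, c1.b);
      let h0 = Math.atan2(c0.b, c0.a);
      let h1 = Math.atan2(c1.b, c1.a);
      // Take the shorter arc around the hue circle.
      if (h1 - h0 > Math.PI) h1 -= 2 * Math.PI;
      if (h0 - h1 > Math.PI) h0 -= 2 * Math.PI;
      const L = c0.L + (c1.L - c0.L) * t;
      const chroma = chroma0 + (chroma1 - chroma0) * t;
      const h = h0 + (h1 - h0) * t;
      return { L, a: chroma * Math.cos(h), b: chroma * Math.sin(h) };
    }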
And I think you might be misremembering Ottosson's original blog post; he demonstrates a gradient between white and blue, not blue and yellow.
[1] https://opg.optica.org/oe/fulltext.cfm?uri=oe-32-3-3100
[2] https://github.com/texel-org/color/blob/main/test/spaces/sim...
There is no “one true” UCS model - all of these are just approximations of various perception and color matching studies, and at some point CAM16-UCS will probably be made obsolete as well.
By what metric? If the target is parity with CAM16-UCS, OKLab comes closer than many color spaces also designed to be perceptually uniform.
Oklab is a nightmare in practice - it's not linked to any perceptual color space, but it has the sheen of one in colloquial discussion. It's a singular matmul that is supposed to emulate CAM16 as best it can.
It reminds me of the initial state of color extraction I walked into at Google, where they were using HSL -- that is more obviously wrong, but I submit both suffer from the same exact issue: the verbiage is close enough to the real terminology that it obfuscates discussion and prevents people from working with actual perceptual spaces, where all of a sudden a ton of problems just...go away.
</end rant>
In practice, quantizers are all slow enough at multimegapixel sizes that I downscale significantly -- IIRC to 96x96 or 112x112. IIRC you could convert all 16M RGB colors to CAM16 and L* in 6 seconds, in debug mode, in Dart transpiled to JavaScript, in 2021. So I try to advocate for doing things with a proper color space as much as possible; the perf just doesn't matter.
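For illustration, a minimal TypeScript sketch of the downscale-first step, assuming an RGBA byte buffer (e.g. canvas ImageData); the function name and the 112x112 default are illustrative, not from any particular codebase.

    // Nearest-neighbor downscale so the quantizer (and the conversion to a
    // proper perceptual space) only sees ~12k pixels instead of multimegapixel.
    function downscaleRgba(
      src: Uint8ClampedArray,
      srcW: number,
      srcH: number,
      dstW = 112,
      dstH = 112,
    ): Uint8ClampedArray {
      const dst = new Uint8ClampedArray(dstW * dstH * 4);
      for (let y = 0; y < dstH; y++) {
        const sy = Math.min(srcH - 1, Math.floor((y * srcH) / dstH));
        for (let x = 0; x < dstW; x++) {
          const sx = Math.min(srcW - 1, Math.floor((x * srcW) / dstW));
          const si = (sy * srcW + sx) * 4;
          const di = (y * dstW + x) * 4;
          dst[di] = src[si];
          dst[di + 1] = src[si + 1];
          dst[di + 2] = src[si + 2];
          dst[di + 3] = src[si + 3];
        }
      }
      return dst;
    }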
EDIT: Also, I should point out that my goal was to get a completely dynamic color system built, which required mathematically guaranteeing a given contrast ratio for two given lightness values, no matter the hue and chroma, so using a pseudo-perceptual lightness would have been enough to completely prevent that.
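To spell out why fixed lightness values can guarantee a contrast ratio: WCAG contrast depends only on relative luminance, and CIE L* is an invertible function of relative luminance, so pinning two L* (tone) values pins the ratio no matter what hue and chroma are chosen. A minimal TypeScript sketch using the standard CIE and WCAG 2.x formulas (not any particular library):

    // CIE L* (0..100) -> relative luminance Y (0..1), inverting the CIELAB
    // lightness function; 24389/27 ≈ 903.3 is the low-lightness linear slope.
    function lstarToY(lstar: number): number {
      const ft3 = Math.pow((lstar + 16) / 116, 3);
      return ft3 > 216 / 24389 ? ft3 : lstar / (24389 / 27);
    }

    // WCAG 2.x contrast ratio between two relative luminances.
    function contrastRatio(y1: number, y2: number): number {
      const lighter = Math.max(y1, y2);
      const darker = Math.min(y1, y2);
      return (lighter + 0.05) / (darker + 0.05);
    }

    // Any pair of colors with tones 40 and 100 has exactly this ratio,
    // regardless of hue and chroma: ≈ 6.46:1.
    console.log(contrastRatio(lstarToY(40), lstarToY(100)).toFixed(2));

A lightness that is not a pure function of luminance (i.e. a pseudo-perceptual one) breaks that argument, which is the point of the EDIT above.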
I do still think it's bad in general, i.e. even for people doing effects on images in realtime: a couple of weeks ago I finally got past what I had internally at Google and was able to use appearance modeling (i.e. the AM in CAM16) to build an exquisite UI whose colors change naturally based on the lighting. https://x.com/jpohhhh/status/1937698857879515450
I don’t know what you mean by “not being linked to any perceptual color space” - it is derived from CAM16 & CIEDE2000, pretty similar in ethos to other spaces like ITP and the more recently published sUCS.
There's also tons of discussion on w3c GitHub about OKLab, and it's evolved in many ways since the original blog post, such as improved matrices, a new lightness estimate, OKHSV/OKHSL, and very useful cusp & gamut approximations.
I have a hard time seeing how it’s a nightmare in practice!
- what works well for this image might not work well for other images! I learned this the hard way after lots of testing on this image, only to find that the results did not generalize well.
- parametrizing the AB plane weight is pretty useful for color quantization; I've found some images will be best with more weight given to colour, and other images need more weight given to tone. The OKLab creator suggests a factor of 2 in deltaEOK[1], but again this is something that should be adjustable IMHO (a sketch of this follows the footnotes).
- there's another interesting and efficient color space (poorly named), sUCS and sCAM[2], that boast impressive results in their paper for tasks like this, although I've found it not much better than OKLab for my needs in my brief tests[3] (and note, both color spaces are derived using CIEDE2000).
[1] https://github.com/color-js/color.js/blob/9d812464aa318a9b47...
[2] https://opg.optica.org/oe/fulltext.cfm?uri=oe-32-3-3100&id=5...
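As mentioned in the second bullet above, here is a minimal TypeScript sketch of the adjustable a/b-plane weighting. The function name and the default factor of 2 are illustrative (the factor simply mirrors the one mentioned in the bullet); verify the exact form against the deltaEOK code linked in [1].

    type Oklab = { L: number; a: number; b: number };

    // Weighted Euclidean distance in OKLab: abWeight scales the opponent axes
    // relative to lightness. Larger values favor preserving hue/colorfulness
    // differences during quantization; smaller values favor tonal structure.
    function weightedOklabDistance(c0: Oklab, c1: Oklab, abWeight = 2): number {
      const dL = c0.L - c1.L;
      const da = (c0.a - c1.a) * abWeight;
      const db = (c0.b - c1.b) * abWeight;
      return Math.sqrt(dL * dL + da * da + db * db);
    }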
It feeds the results from a box cutting quantizer (Wu) into K-Means, giving you deterministic initial clusters and deterministic results. It leverages CIELAB distance to avoid a bunch of computation. I used it for Material 3's dynamic color and it was awesome as it enabled higher cluster counts.
My own gripe with box cutting is that perceptual color spaces tend not to have cube-shaped volumes. But box-cutting algorithms are very fast.
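A minimal TypeScript sketch of the refinement half of that pipeline: k-means seeded with the box-cut quantizer's output, which is what makes the result deterministic. The Wu step is abstracted away as initialCentroids, and the distance-based shortcuts mentioned above are omitted for brevity; this is an illustration of the idea, not the Material Color Utilities implementation.

    type Lab = [number, number, number];

    function dist2(p: Lab, q: Lab): number {
      const d0 = p[0] - q[0], d1 = p[1] - q[1], d2 = p[2] - q[2];
      return d0 * d0 + d1 * d1 + d2 * d2;
    }

    // Standard Lloyd's k-means over points in a Lab-like space, with the
    // initial centroids supplied by a deterministic box-cut quantizer (Wu),
    // so repeated runs on the same image give the same clusters.
    function kMeansFromSeeds(points: Lab[], initialCentroids: Lab[], maxIters = 10): Lab[] {
      let centroids = initialCentroids.map((c): Lab => [c[0], c[1], c[2]]);
      const assignment = new Array<number>(points.length).fill(0);

      for (let iter = 0; iter < maxIters; iter++) {
        let moved = false;
        // Assign each point to its nearest centroid (squared Lab distance).
        for (let i = 0; i < points.length; i++) {
          let best = 0, bestD = Infinity;
          for (let k = 0; k < centroids.length; k++) {
            const d = dist2(points[i], centroids[k]);
            if (d < bestD) { bestD = d; best = k; }
          }
          if (assignment[i] !== best) { assignment[i] = best; moved = true; }
        }
        if (!moved && iter > 0) break;

        // Recompute each centroid as the mean of its assigned points.
        const sums = centroids.map(() => [0, 0, 0, 0]); // L, a, b, count
        for (let i = 0; i < points.length; i++) {
          const s = sums[assignment[i]];
          s[0] += points[i][0]; s[1] += points[i][1]; s[2] += points[i][2]; s[3]++;
        }
        centroids = centroids.map((c, k): Lab => {
          const s = sums[k];
          return s[3] > 0 ? [s[0] / s[3], s[1] / s[3], s[2] / s[3]] : c;
        });
      }
      return centroids;
    }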
There’s a surprising amount of stutter and lag on iOS, evident after the loading bar completes and the app freezes for 30 sec. Also during gameplay, quite a bit of stuttering. My guess is GPU texture uploads or shader compilations. Otherwise it was buttery smooth.