They seem like big numbers until you compare them with the scale of what we already do.
Is it, though? It's genuinely hard for me to tell.
Data sets are both serialized and deserialized in formats such as JSON that contain floating-point numbers, which implies formatting and parsing, respectively.
Source code (including unit tests etc.) with hard-coded floating-point values is compiled, linted, and automatically formatted again and again, which implies lots of parsing.
The code I usually work with ingests a lot of floating-point numbers, but whatever it calculates is seldom displayed as formatted strings; more often it gets plotted on graphs.
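To make the round trip concrete, here's a minimal Rust sketch of the formatting and parsing the comments above describe (the specific values are just an illustration, not from any benchmark):

```rust
fn main() {
    // Formatting: a float becomes a decimal string, as in JSON serialization.
    let x: f64 = 0.1 + 0.2;
    let s = x.to_string();

    // Parsing: the string becomes a float again, as in JSON deserialization.
    let y: f64 = s.parse().expect("valid float literal");

    // Rust's default float formatting emits the shortest string that
    // round-trips exactly, so the parsed value is bit-identical.
    assert_eq!(x.to_bits(), y.to_bits());
    println!("{s}"); // prints "0.30000000000000004"
}
```

Every serialize/deserialize cycle, and every compile or lint pass over hard-coded literals, pays for one side of this round trip.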
I think the closest thing that could get most of the way there is https://github.com/domferr/tilingshell/
I'm a long-time Xmonad user. I'm currently on Ubuntu 25.04, running Xmonad on two computers, and I've upgraded to each new non-LTS release every six months without running into any problems.
[1] https://github.com/Voultapher/sort-research-rs/blob/main/wri...
I wouldn't say your article is too technical; it does go fairly deep into details, but new concepts are explained well and at a level I found suitable for myself. Having said that, several times I felt the text was a bit verbose. More succinct phrasing requires, of course, a lot of additional effort, but… I guess that's a kind of optimization as well. :)
Nicholas Nethercote's "How to speed up the Rust compiler" writings[1] fall into this same category for me.
Any others?
Then you top it off with the `?` shortcut and the functional interface of Result, and suddenly error handling becomes fun and easy to deal with, rather than just "return false" with a "TODO: figure out error handling".
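A minimal sketch of what that looks like in practice (function names are hypothetical, just for illustration): `?` propagates the error to the caller, and `Result`'s combinators keep the happy path flat.

```rust
use std::num::ParseIntError;

// With `?`, a parse failure simply returns early with the error;
// no "return false" or out-of-band error flags needed.
fn parse_and_double(s: &str) -> Result<i32, ParseIntError> {
    let n: i32 = s.trim().parse()?; // error propagates to the caller
    Ok(n * 2)
}

fn main() {
    // The functional interface: chain map / unwrap_or instead of
    // sprinkling TODOs and sentinel values.
    assert_eq!(parse_and_double(" 21 "), Ok(42));
    assert!(parse_and_double("oops").is_err());
    assert_eq!(parse_and_double("oops").unwrap_or(0), 0);
}
```

The compiler forces the caller to acknowledge the `Result`, which is what makes deferring error handling ("TODO") so much harder to get away with.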
[1] https://doc.rust-lang.org/std/panic/fn.catch_unwind.html