I think this list should also include descriptors: it's another metaprogramming feature that lets you run code when accessing or setting class attributes, similar to @property but more powerful. (edit: never mind, I saw that they are covered in the proxy properties section!)
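For anyone unfamiliar, here's a minimal descriptor sketch (the Positive/Account names are made up for illustration, not from any real library):

    # Minimal descriptor sketch; names are made up for illustration.
    # Any class defining __get__/__set__ intercepts attribute access
    # on instances of the class that uses it.
    class Positive:
        def __set_name__(self, owner, name):
            self.name = name  # remember which attribute we manage

        def __get__(self, obj, objtype=None):
            if obj is None:  # accessed on the class, not an instance
                return self
            return obj.__dict__[self.name]

        def __set__(self, obj, value):
            if value <= 0:
                raise ValueError(f"{self.name} must be positive")
            obj.__dict__[self.name] = value

    class Account:
        balance = Positive()  # every assignment runs Positive.__set__

    acct = Account()
    acct.balance = 100   # calls Positive.__set__; fine
    print(acct.balance)  # calls Positive.__get__; prints 100
    # acct.balance = -5  # would raise ValueError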
I think the type system is quite good actually, even if you end up having to sidestep it when doing this kind of metaprogramming. The errors I do get are generally the library's fault (old versions of SQLAlchemy make it impossible to assign types anywhere...) and there are a few gotchas (like mutable collections being invariant, so if you take a list as an argument you may have to type it as `Sequence` or you'll get type errors), but it's functional and makes the language usable for me.
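To illustrate the invariance gotcha (Animal/Dog are just example names):

    # list is invariant, so a list[Dog] is not a list[Animal]; the
    # read-only Sequence is covariant, so Sequence[Animal] accepts it.
    from collections.abc import Sequence

    class Animal: ...
    class Dog(Animal): ...

    def feed_bad(animals: list[Animal]) -> None: ...
    def feed_good(animals: Sequence[Animal]) -> None: ...

    dogs: list[Dog] = [Dog(), Dog()]
    feed_bad(dogs)   # type checker rejects this: list is invariant
    feed_good(dogs)  # OK: a Sequence can't be mutated, so it's safe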
I stopped using Ruby because upstream would not commit to type checking (yes, I know you have a few choices if you want typing, but they're a bit too much overhead for what I usually use Ruby for, which is writing scripts), and I'm glad Python is committing here.
> The truth is that any reasonably complicated software system created by humans will have bugs, regardless of what technology was used to create it.
"Drivers wearing seatbelts still die in car accidents and in some cases seatbelts prevent drivers from getting out of the wreckage so we're better off without them." This is cope.
> Using a stricter language helps with reducing some classes of bugs, at the cost of reduced flexibility in expressing a solution and increased effort creating the software.
...and a much smaller effort debugging the software. A logic error is much easier to reason about than memory corruption or a race condition on shared memory. The time you spend designing your system and handling errors upfront pays dividends later when the inevitable bugs show up.
I'm not saying that all software should be rewritten in memory-safe languages, but I'd rather those who choose to use the one language where this kind of error regularly happens be honest about it.
The Netflix post is sort of bizarre. They claim to be optimizing for minimum mean-squared error (MSE) given a conventional (bicubic) upscaling process [0], but... that should have an analytic closed-form solution, as this post states? You definitely do not need multiple layers of neural networks to achieve it. Then they present VMAF results, but VMAF is very much not equivalent to MSE, so you have no idea if they even improved the metric they optimized for. Subjective results are similarly unpersuasive: it isn't clear if "the deep downscaler was preferred by 77% of test subjects" means they thought it was closer to the original or simply "better" than Lanczos[1]. Netflix may not care: longer watch times are longer watch times. But as an engineer you might want to know if that is because you actually achieved the thing you were optimizing for, or it is due to an artifact of the process that might go away the next time you change something to actually improve what you were optimizing for. You can famously make people prefer one audio track over another by making it louder, and video has similar things around sharpness and contrast (and now, thanks to ML, hallucinated detail).
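To spell out why a closed form exists (my notation, not Netflix's): bicubic upscaling is a fixed linear operator, so minimizing MSE after upscaling is ordinary linear least squares. In LaTeX:

    % x in R^N: source image, y in R^n: downscaled image,
    % U in R^{N x n}: the linear bicubic upscaling operator.
    \hat{y} = \arg\min_{y} \lVert U y - x \rVert_2^2
            = (U^\top U)^{-1} U^\top x = U^{+} x
    % i.e. the Moore--Penrose pseudoinverse of the upscaler applied
    % to the source -- no neural network required.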
I agree that you can do better than Lanczos for large downscale factors [2]: you need to do something area-based, like you suggest (I have not looked at OpenCV's implementation, but it could be fine). The easiest thing to get wrong is gamma handling, and the right thing to do depends on whether you intend to display the result at the downscaled size or upscaled back to the original size as seems to be assumed here (and whether or not your upscaler is gamma-aware, which it probably isn't).
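A rough sketch of what I mean by gamma-aware area averaging, with NumPy (assumes an sRGB float image in [0, 1] and an integer scale factor; this is my illustration, not OpenCV's INTER_AREA):

    import numpy as np

    def srgb_to_linear(x):
        return np.where(x <= 0.04045, x / 12.92, ((x + 0.055) / 1.055) ** 2.4)

    def linear_to_srgb(x):
        return np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1 / 2.4) - 0.055)

    def area_downscale(img, factor):
        # Average factor x factor blocks in linear light, not gamma space.
        lin = srgb_to_linear(np.asarray(img, dtype=np.float64))
        h = lin.shape[0] - lin.shape[0] % factor  # crop to a multiple
        w = lin.shape[1] - lin.shape[1] % factor  # of the scale factor
        blocks = lin[:h, :w].reshape(h // factor, factor, w // factor, factor, -1)
        return linear_to_srgb(blocks.mean(axis=(1, 3)))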
As an aside to those struggling to see the visual differences, make sure you are looking at the image in its original (1874x1596) resolution: https://redvice.org/assets/images/netflix-downscaler-compare... (or right-click, View Image on the original page). Otherwise you also have your browser's resampling algorithm in the mix. To my eyes on my display there is also a pretty big color shift in the featureless pink background of the painting on the right wall, but when I look at the actual pixel values, that appears to be an optical illusion. Subjective comparison is hard!
[0] Unlike the post, I think this is reasonable. In the past, we did experiments that showed that optimizing for bilinear when nearest-neighbor is used (for chroma) is worse than optimizing for nearest-neighbor when bilinear is used. I suspect something similar will be true for bicubic and bilinear, but these days it may be safe to assume that you will get at least bicubic upscaling (for luma), because bilinear luma looks really bad. I haven't done a recent survey of actual playback devices, though.
[1] It's also not how you report subjective test results: what was the statistical significance? There are standard protocols for these kinds of tests and it would have been helpful to cite which one was used.
[2] Nobody ever says what downscaling factors are being tested here. The example graph shows 1080p to 342p, or ~3.16x, but Netflix goes as low as 144p (from, e.g., 2160p), so the factors can get pretty large (15x) in practice. A 6-tap filter is not going to cut it.
If you're looking for examples of ringing and hallucinated details, they're really obvious in the framed picture on the right, on the character's shirt and the frame respectively.
I genuinely want to hear from people who use anything other than the very basic git commands: checkout, fetch, push, merge, rebase, stash, status, commit, reset.
What are you doing?
I think I have used cherry-pick once? Generally I open two copies of the same repo, copy-paste the code I need, and commit it as a new commit; the former invariably ends up as a mess that takes longer than just copy-pasting the “feature” I wanted to retrieve.
This is a genuine question, not rage bait. I have been in the game for 15 years. Surely there are mega-complex things using git internals / obscure commands to run analytics in CI systems or hosted git solutions. But I want to know what you are doing in your workflow that requires “more” of git.
Forgot to add something to a previous commit? Run "jj squash -i" to move the lines you select to whatever commit you want. Or you can run "jj edit" to check out that commit and edit it directly.
Want to split a commit into two separate commits? Run "jj split".
Need to reorder commits? Run "jj rebase", and if you hit a conflict you can "jj edit" the commits that are marked as conflicted and fix them later, unlike Git, where you have to run through a lengthy process of fixing conflicts on commits you don't remember and then review the changes afterwards to see whether they still make sense.
If you want to keep a messy working copy of your repo, that's very easy to do. The workflow would mostly involve:
- Develop the feature
- "jj split" to pick out the stuff you need into a separate commit, which will appear between master and the working copy commit
- "jj describe" to add a commit message
- "jj bookmark set feature-branch" on the commit containing the stuff you want to push
- "jj git push" to push it
- "jj edit" to return to the commit containing the working copy.
You'd end up with a tree that looks kind of like this:
@ ptswumyk 2025-02-12 13:16:36 de46f8c1
│ messy working copy
○ slwozrlr 2025-02-12 13:16:22 feature-branch@origin d3d246a1
│ feature implementation
◆ tssssuzr 2025-02-12 12:34:28 master* 8a9bab0f
│ generate flake registry from inputs
~
So it's not that I really need more features than Git offers, just a better UX, which is what jj provides.
I've had my rM2 since 2020 and have enjoyed the hacking community a lot. I've since lost track: at some point I updated the firmware because I wanted the automatic shapes feature from upstream, and after that I couldn't use the framebuffer anymore.
You've summed up a lot of findings that I've made again and again while trying to pick up where I left off, but it's become very confusing.
Looking forward to your next update! No pressure, though :)
I've just remembered: check out KOReader if you haven't. I think it doesn't rely on Qt, and it runs on rM2 tablets with recent firmware if you launch it via ssh after stopping xochitl.
I have taken a (short) look at KOReader and saw that there's a page on its wiki with instructions for installing it on the rM2; it still uses rm2fb, but it suggests using timower's version, which works on newer versions of the OS. What I should've made clearer in the post is that there are options; they're just less convenient to use because Toltec doesn't work.