Readit News
dimatura commented on PYX: The next step in Python packaging   astral.sh/blog/introducin... · Posted by u/the_mitsuhiko
rob · 18 days ago
Is there a big enough commercial market for private Python package registries to support an entire company and its staff? Looks like they're hiring for $250k engineers, starting a $26k/year OSS fund, etc. Expenses seem a bit high if this is their first project unless they plan on being acquired?
dimatura · 18 days ago
Just one data point, but if it's as nice to use as their open source tools and not outrageously expensive, I'd be a customer. Current offerings for private Python package registries are kind of meh. I've always wondered why GitHub doesn't offer this.
dimatura commented on All-In on Omarchy at 37signals   world.hey.com/dhh/all-in-... · Posted by u/dotcoma
j3s · 19 days ago
going all-in on Linux is one thing, but going all-in on a specific window manager? with specific keybinds? idk, individual workflows are too specific to be prescribed like this imo.
dimatura · 19 days ago
I had a similar thought, but at the same time, if people were mandated to use Windows or macOS, that would also pretty much lock you into their respective window managers. I guess it feels more restrictive partly because it's more common to pick and choose WMs on Linux. (And partly because, yeah, the setup seems to go way beyond just a distro + WM.)
dimatura commented on How Hyper built a 1M-accurate indoor GPS   andrewhart.me/hyper/... · Posted by u/AndrewHart
dimatura · a month ago
The few examples they show do look pretty good for a WiFi-based method, although who knows how cherry-picked they are. I wonder how much the "SLAM" part is contributing and how sensitive that is to the sensor quality of the phone. I would've assumed they'd be using vision, which seems to be the method of choice for other companies like Niantic. The ground-truth data part for vision would certainly be more onerous, though.
dimatura commented on ReproZip – reproducible experiments from command-line executions   github.com/VIDA-NYU/repro... · Posted by u/mihau
dimatura · a month ago
This sounds pretty similar to CDE, which I see they cite in the paper. Back in the pre-docker days I remember using CDE a few times to package some C++ code to run on some servers that didn't have the libraries I needed. Pretty cool tool.
dimatura commented on Tao on “blue team” vs. “red team” LLMs   mathstodon.xyz/@tao/11491... · Posted by u/qsort
jeron · a month ago
so we've reinvented GAN but with LLMs
dimatura · a month ago
I was going to mention this sounds like the idea behind adversarial approaches, which I guess go all the way back to game theory and algorithms like minimax. They're definitely used in the control literature ("adversarial disturbances"). And of course GANs.
dimatura commented on Python classes aren’t always the best solution   adamgrant.micro.blog/2025... · Posted by u/hidelooktropic
braza · a month ago
It never gets old: "Stop Writing Classes" [1]

[1] - https://www.youtube.com/watch?v=o9pEzgHorH0

dimatura · a month ago
Yeah, was going to post this - great talk that I've recommended to incoming devs on our team.
dimatura commented on Tiny Code Reader: a $7 QR code sensor   excamera.substack.com/p/t... · Posted by u/jamesbowman
mananaysiempre · a month ago
That’s not what I’m asking about: once you’ve found the QR code in the bag of pixels you got from your camera and converted it to a boolean array of module colours, then yes, all you have left is a bit error-correction math and some amusingly archaic Japanese character encoding schemes—definitely some work, but ultimately just some work. (For that matter, the Wikipedia article on QR codes contains enough detail to do this.)

What has thus far remained a mystery to me is going from a bag of noisy pixels with a blurry photo of a tattoo on a hairy arm surrounded by random desk clutter to array of booleans. I meant “by hand” as in “without libraries”, not “using a human”, as in the latter case the human’s visual cortex does the interesting part! And the open-source Android apps that I’ve looked at just wrap ZXing, which is huge (so a sibling commenter’s suggestion of looking at a different, QR-code-specific library is helpful).

dimatura · a month ago
You can examine the code of zxing-cpp (which is fairly nice IMO) for a simple, "classical computer vision" approach to this. It's not the most robust implementation but it is pretty functional.

But in general, you can divide the problem more or less like this (not necessarily in this order):

1. Find the rough spatial region of the barcode; crop that out and focus only on it.

2. Correct ("rectify") any rotation or perspective skew of the barcode, turning it into a frontoparallel version of the barcode.

3. Binarize the image from RGB or grayscale into pure black and white.

4. Normalize the size so that each pixel is the smallest spatial unit of the barcode.
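The steps above can be sketched end-to-end in plain numpy on a toy, axis-aligned image. This is only a minimal sketch under strong assumptions: the code is unrotated (so step 2, rectification, is skipped entirely), the lighting is clean, and the barcode's side is an exact multiple of the module size; `extract_modules` and `module_px` are hypothetical names, not any library's API.

```python
import numpy as np

def extract_modules(gray, module_px):
    # Toy QR "frontend" sketch, assuming an axis-aligned, unrotated code
    # whose side is an exact multiple of module_px (rectification skipped).
    # Binarize with a crude global threshold: midpoint of the value range.
    thresh = (int(gray.min()) + int(gray.max())) / 2.0
    binary = gray < thresh  # True = dark pixel
    # Rough region: bounding box of dark pixels (the quiet zone is all light).
    ys, xs = np.nonzero(binary)
    crop = binary[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    # Normalize size: average each module_px x module_px cell and round,
    # so one output pixel corresponds to one module of the code.
    h, w = crop.shape
    cells = crop.reshape(h // module_px, module_px, w // module_px, module_px)
    return cells.mean(axis=(1, 3)) > 0.5
```

On a real photo, the first two steps are the hard part: you'd locate the three finder patterns and use them to estimate a homography before binarizing.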

dimatura commented on Tiny Code Reader: a $7 QR code sensor   excamera.substack.com/p/t... · Posted by u/jamesbowman
mananaysiempre · a month ago
Somewhat incidentally, is there an actual description of how a low-tech QR code reader would work? I’ve looked for this a few years ago and all solutions I could find were of two flavours: (1) use ZXing (“Zebra Crossing”, a now-unmaintained library[1] for every 1D and 2D barcode under the sun); (2) use OpenCV. Nowhere could I find any discussion of how one would actually deal with the image-processing part by hand. And yet QR codes are 1994 tech, so they should hardly require fancy computer-vision stuff to process.

[1] https://github.com/zxing/zxing

dimatura · a month ago
You can roughly divide barcode reading into a "frontend" and a "backend". The backend is the best understood (but not necessarily trivial) part: you take a binary image, with each pixel corresponding to one little square in the QR code, and decode its payload. It doesn't need computer vision. The "frontend" is the part that takes the raw image containing the barcode, tries to find the barcode, and converts the barcode it finds into a nice, clean binary image for the backend.

This is a computer vision problem, and you can get arbitrarily fancy, up to and including the latest trends in ML vision models. However, this isn't necessarily needed in most cases; after all, barcodes are designed to be easy for machines to read. With a large, sufficiently well focused and well exposed image of a barcode you can get away with simple classical computer vision algorithms like histogram-based binarization and some heuristics to identify the spatial extent of the barcode (for example, most barcode symbologies mandate a "quiet zone" (blank space) around the barcode, and have start and stop markers; QR codes have those prominent concentric squares on the corners).
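As a concrete example of the histogram-based binarization mentioned above, here's Otsu's method in plain numpy: pick the threshold that maximizes the between-class variance of the dark and light pixel populations. This is a minimal sketch of the classical algorithm, not any particular library's implementation; real readers often use adaptive (local) thresholds instead to handle uneven lighting.

```python
import numpy as np

def otsu_threshold(gray):
    # Histogram-based global threshold (Otsu's method): choose the cut
    # that maximizes between-class variance of dark vs. light pixels.
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w0 = 0.0   # weight (pixel count) of the "dark" class
    sum0 = 0.0  # sum of intensities in the "dark" class
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        if w0 == 0 or w0 == total:
            continue
        w1 = total - w0
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

Pixels at or below the returned threshold are treated as "dark"; everything above it as "light".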

As for implementation, zxing-cpp [1] is still maintained, and pretty good as far as open source options go. At this point I'm not sure how related it is to the original zxing, as it has undergone substantial development. It has Python bindings, which may be easier to use.

On mobile, Google's ML Kit and Apple's Vision framework also have barcode reading APIs; not open source, but otherwise "free" as in beer.

[1] https://github.com/zxing-cpp/zxing-cpp

dimatura commented on Log by time, not by count   johnscolaro.xyz/blog/log-... · Posted by u/JohnScolaro
dimatura · a month ago
I've found myself adopting this philosophy for a specific use case: monitoring ML training jobs. It's pretty common to see people output training metrics (loss, validation accuracy, etc.) every N batches, iterations, or epochs. That does make sense for a lot of reasons, and it's simple to do. But when you're exploring models with wildly varying inference latencies, or using different hardware, varying batch sizes, or differently sized datasets, those same settings might end up reporting too infrequently to get an idea of what's happening, or too frequently, just spamming the output.

Checkpointing the model every N iterations/epochs/batches has a similar problem - you may end up saving very few checkpoints and risk losing work or waste a lot of time/space with lots of checkpoints.

So I've often found myself implementing some kind of monitoring and checkpointing callbacks based on time, e.g., reporting every half an hour, checkpointing every two hours, etc.
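A time-based trigger like this can be a few lines of plain Python. A minimal sketch, with hypothetical names (`EveryNSeconds` isn't any framework's API); real frameworks often expose similar hooks as callbacks:

```python
import time

class EveryNSeconds:
    # Time-based trigger for logging/checkpointing: call it once per
    # iteration; it returns True only when `interval` seconds have elapsed
    # since the last time it fired. The injectable clock eases testing.
    def __init__(self, interval, clock=time.monotonic):
        self.interval = interval
        self.clock = clock
        self.last = clock()

    def __call__(self):
        now = self.clock()
        if now - self.last >= self.interval:
            self.last = now
            return True
        return False

# Usage in a training loop (hypothetical names):
#   log_timer = EveryNSeconds(30 * 60)       # report every half hour
#   ckpt_timer = EveryNSeconds(2 * 60 * 60)  # checkpoint every two hours
#   for batch in loader:
#       ...
#       if log_timer():
#           print(metrics)
#       if ckpt_timer():
#           save_checkpoint(model)
```

Using `time.monotonic` rather than `time.time` avoids spurious triggers if the wall clock is adjusted mid-run.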

dimatura commented on Adding a feature because ChatGPT incorrectly thinks it exists   holovaty.com/writing/chat... · Posted by u/adrianh
momojo · 2 months ago
A light-weight anecdote:

Many many python image-processing libraries have an `imread()` function. I didn't know about this when designing our own bespoke image-lib at work, and went with an esoteric `image_get()` that I never bothered to refactor.

When I ask ChatGPT for help writing one-off scripts using the internal library I often forget to give it more context than just `import mylib` at the top, and it almost always defaults to `mylib.imread()`.

dimatura · 2 months ago
I don't know if there's an earlier source, but I'm guessing MATLAB originally popularized the `imread` name, and that OpenCV (along with its Python wrapper) took it from there; same for scipy. Scikit-image then followed along, presumably.

u/dimatura · Karma: 885 · Cake day: May 22, 2010