Readit News
chrash commented on Ask HN: What are your current programming pet peeves?    · Posted by u/alexathrowawa9
ajkjk · a month ago
I can't believe that in 2025 it is still hard to find documentation for basic functionality in languages as ubiquitous as, say, Python.

If I google something simple I get 100 junk sites of garbage (GeeksForGeeks, W3Schools, etc), a bunch of AI-generated crap, a bunch of archaic out-of-date stuff, the official documentation which is a wall of dense text, some tutorials that mention the basics, a bunch of shitty bootcamp blogspam (they make you write blog posts about crap to get your brand out there, you know), some stackexchange posts with twenty different answers from 2010, etc. And I'm thinking of, like, Python here, cause that's what was pissing me off this week. God help you if what you're looking for is made by Apple.

Sure we're using AI to look up documentation now but that should never have been necessary. Just like now we use AI to google things because google is so shitty. It's not that AI is great, it's that search engines and documentation are more dogshit than ever and AI is a bandaid.

also it is amazing to me that shells still exist in more-or-less the same form. dear god can I just get a terminal that runs something like Python directly?

chrash · a month ago
> also it is amazing to me that shells still exist in more-or-less the same form. dear god can I just get a terminal that runs something like Python directly?

i’ve been maining `nushell` for about 1.5 years now, and it’s basically this. a real programming language that is designed as a command runner and output parser. i have a ton of scripts that parse output from common commands like `git` and facilitate a ton of shell-based workflows: https://github.com/covercash2/dotfiles/tree/main/nuenv
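the kind of output parsing those helpers do can be sketched in a few lines; here it's shown in Python rather than nushell for familiarity, with a hard-coded sample of `git status --porcelain` output standing in for a live repo:

```python
# Rough sketch (Python, not nushell) of the structured parsing a shell
# helper can do over `git status --porcelain` output. The sample output
# below is hard-coded for illustration.
sample = "M  src/main.rs\n?? notes.txt\n"

entries = []
for line in sample.splitlines():
    # porcelain format: two status characters, a space, then the path
    status, path = line[:2].strip(), line[3:]
    entries.append({"status": status, "path": path})

print(entries)
```

the value of doing this in the shell itself is that `entries` stays a structured table you can filter and sort, instead of a blob of text.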

chrash commented on Yash: Yet Another Shell   github.com/magicant/yash... · Posted by u/InitEnabler
pyinstallwoes · 7 months ago
What do you love about it? How would you demonstrate to a friend its value?
chrash · 7 months ago
well, it’s a few things. a lot of it comes down to reading structured data and scripting. i will often stream logs from k8s, or from a service running locally that outputs JSON, and nushell can parse the logs for readability or to find a particular field. i mean, that’s nothing jq couldn’t do, but having an integrated experience built around readability makes things nice.

also, when i say scripting i mean actual real functions. you can define a command complete with parsed, documented arguments as if you’re using argparse or clap, and it also supports completions. so when i go to sign into the company VPN, i have a lil function that grabs my credentials and takes an argument that is one of several providers my company uses, which i can autocomplete with tab because i’ve written a simple completion helper (literally a function that returns a list).

it’s documentation as code, and i push all these helpers up to my company git repo, so when someone asks me how to do something i have workable, readable examples. if you’ve ever wanted a real programming language (like Python) as your shell, i think this is worth a shot.
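since argparse is the comparison made above, here is roughly what that VPN helper would look like written against it; the command name and provider names are invented for the sketch:

```python
import argparse

# Hypothetical Python analogue of the nushell helper described above:
# a documented command whose `provider` argument is restricted to a
# known list (these provider names are made up).
PROVIDERS = ["alpha-vpn", "beta-vpn", "gamma-vpn"]

parser = argparse.ArgumentParser(
    prog="vpn-login",
    description="grab credentials and sign in to the company VPN",
)
parser.add_argument(
    "provider",
    choices=PROVIDERS,  # argparse rejects anything outside this list
    help="which VPN provider to use",
)

args = parser.parse_args(["beta-vpn"])
print(args.provider)  # beta-vpn
```

the nushell version gets you the same validation plus tab completion, because the list of choices doubles as the completion source.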
chrash commented on Yash: Yet Another Shell   github.com/magicant/yash... · Posted by u/InitEnabler
UncleOxidant · 7 months ago
I kind of wish there were a shell with the option for histories localized to a directory, so I could go into a directory and see what I was doing when I was last there. For example, the complicated command line I used to compile and link a source file against a bunch of libraries the last time I was in that directory. There'd still be the 'history' command for everything, and then maybe 'dhistory' for the commands I ran in that specific directory.
chrash · 7 months ago
nushell does this by default, sort of. it will show you autocompletes that are relevant to the current directory and fall back to history more generally.

i’m a huge nushell fan. if you can stand a non-POSIX shell, it’s great for working with any kind of structured data and has a sane, mostly functional scripting language
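the structured-data workflow mentioned above looks roughly like this; it's shown in Python for familiarity, and the log lines and field names are invented:

```python
import json

# Rough Python equivalent of a nushell pipeline along the lines of
# `lines | each { from json } | where level == "error"`.
# The log lines and field names below are made up for illustration.
log_lines = [
    '{"level": "info", "msg": "service started"}',
    '{"level": "error", "msg": "connection refused"}',
    '{"level": "info", "msg": "retrying"}',
]

records = [json.loads(line) for line in log_lines]
errors = [r["msg"] for r in records if r["level"] == "error"]
print(errors)  # ['connection refused']
```

this is the jq use case, but done with the same language and data model you use for everything else in the shell.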

chrash commented on Google dropping continuous scroll in search results   searchengineland.com/goog... · Posted by u/elsewhen
ralusek · a year ago
I love infinite scroll, and hate pagination. All lists should be some combination of these: infinite scroll, filterable, sortable. Pagination should never be involved. If I want to get to something on page 2, I scroll. For anything else, I’m going to sort and filter.

If the thing I want is on page 17, and I see 1, 2, 3…79…159, 160, 161, I’m still just going to want to filter. The absolute best pagination is set up in a way that lets you binary search drill down for what you’re looking for, but even that is something I’d almost never prefer over filtering

chrash · a year ago
hard disagree. i had an experience just this morning looking for some pictures from an event i participated in, and the infinite scrolling was absolutely infuriating. they didn't have an index to filter on, and when i clicked on a picture to download it and navigated back, it took me to the top of the page. i had to scroll through about a dozen loading indicators to get back to where i was. sure, this was a bad implementation, but adding it to every single list of results on the web is asinine trend chasing and bad UX.
chrash commented on Amazon ditches 'just walk out' checkouts at its grocery stores   gizmodo.com/amazon-report... · Posted by u/walterbell
LudwigNagasena · a year ago
GenAI means Generative AI, not General AI.
chrash · a year ago
right i was kinda being cheeky about how large models are all called GenAI by product/business types heh
chrash commented on Amazon ditches 'just walk out' checkouts at its grocery stores   gizmodo.com/amazon-report... · Posted by u/walterbell
wiricon · a year ago
How well does simulated data work in this space? My first stab at doing this scalably would be as follows. Given a new product, physically obtain a single instance of it (or ideally a 3D model, though that seems like a big ask from manufacturers at this stage). Capture images of it from every conceivable angle under a variety of lighting conditions; you could automate this capture pretty well with a robotic arm to rotate the object and some kind of lighting rig. Get an instance mask for each image (using a human annotator, a 3D reconstruction method, or a FG-BG segmentation model), paste those instances onto random background images (e.g. from any large image dataset), add distractor objects and other augmentations, and finally train a model on the resulting dataset.

It helps that many grocery items are relatively rigid (boxes, bottles, etc.). Then again, this would only work for things like boxes and bottles, which always look the same; you'd need a lot more variety for fruit and veg, which are non-rigid and vary a lot in appearance, and you'd need to account for changing packaging as well.
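The paste step in this pipeline can be sketched in a few lines of Python; toy nested lists stand in for grayscale images, and the mask and paste location are invented for illustration:

```python
# Toy sketch of the cut-and-paste augmentation step described above,
# using nested lists as stand-in grayscale images (no real image I/O).
W = 8
background = [[0] * W for _ in range(W)]  # stand-in for a random background image
instance = [[9] * 4 for _ in range(4)]    # stand-in for the captured product crop
# pretend this mask came from an annotator or a FG-BG segmentation model
mask = [[1 <= r <= 2 and 1 <= c <= 2 for c in range(4)] for r in range(4)]

y0, x0 = 2, 3  # randomly chosen paste location
for r in range(4):
    for c in range(4):
        if mask[r][c]:
            background[y0 + r][x0 + c] = instance[r][c]

pasted = sum(px == 9 for row in background for px in row)
print(pasted)  # 4 masked pixels copied in
```

Repeating this with random backgrounds, locations, and augmentations is what turns a handful of captures into a training set.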
chrash · a year ago
as mentioned in another comment, "scale" is not just horizontal, it's vertical as well. with millions of products (UPCs) across different visual tolerances, it's hard to generalize. your annotation method is indeed more efficient than a multistep "go take a bunch of pictures and upload them to our servers for annotators" flow, but it's still costly in terms of stakeholder buy-in, R&D, hardware, and indeed labor. if you can scope your verticals such that you only have, say, 1000 products, the problem becomes feasible, but once you start to scale to an actual grocery store or bodega with ever-shifting visual data requirements, it doesn't scale well. add in the fact that every store moves merchandise at different rates or carries localized merchandise, and the problem becomes even more complex.

the simulated data also becomes an issue of cost. you have to produce a digital twin that is realistic (at least according to the model) and doesn't clash too much with real data, and measuring that gap is critical when the distinction you care about is Lay's vs. Lay's Low Sodium.

i'm not saying it's unsolvable. it's just a difficult problem

chrash commented on Amazon ditches 'just walk out' checkouts at its grocery stores   gizmodo.com/amazon-report... · Posted by u/walterbell
jrpt · a year ago
Come on, it isn’t anything to do with being a “marketing stunt.” Often products like this are expected to lose money at first, but they hope with enough R&D and scale that they can make it successful eventually.

For example, you are pointing out that annotating is costly, but that’s an expense that scales independently of the number of stores. So with enough scale it wouldn’t be as big a deal. Or if they figured out some R&D that could improve it too.

chrash · a year ago
right, that's how it starts. but the improvements in methodology simply aren't there, as the ML sector has been laser-focused on generality in modeling (GenAI, as it's affectionately known). "at scale" doesn't just mean more stores; it means more products and thus more annotation. how many UPCs do you figure there are in a given Target or Whole Foods? i assure you it's in the millions.

one advantage of the Amazon Go initiative is its smaller scope of products.

chrash commented on Amazon ditches 'just walk out' checkouts at its grocery stores   gizmodo.com/amazon-report... · Posted by u/walterbell
hackernewds · a year ago
the biggest cost is not annotators at the scale you're imagining. it is labor costs.

Amazon bet that the federal govt would raise labor costs to $20/hr and all their competitors (besides themselves with this tech) would get wiped out. They even publicly campaigned and lobbied. That didn't come to fruition as the election promises turned to fluff, and the populists simply chose to empower unions instead.

chrash · a year ago
i mean, labor cost (as in in-store labor) is the target of this cost optimization. unfortunately, for the time being, that labor cost is not as significant as the costs associated with annotation and dataset curation. technology costs aren't really significant if this can be pulled off at scale.

in-store employees know where things are supposed to be and whether (and why) items are "misplaced" relative to the modular design

chrash commented on Amazon ditches 'just walk out' checkouts at its grocery stores   gizmodo.com/amazon-report... · Posted by u/walterbell
shay_ker · a year ago
are larger image/video models unable to catch things like Christmas branding?
chrash · a year ago
a big problem in the space is that products that look very similar will be clustered in the same section. large models are very good at generalizing, so they may be more attuned to "this is a Christmas thing", but they won't know that it should be classified as the same UPC as the thing that was in that spot yesterday without you specifically telling them. how would a model know it's not a misplaced product or a random piece of trash? (you won't believe the things you find on store shelves once you start looking.) you can definitely speed up your annotation with something like SAM[1], but it will never know, without training or context, that it's the same product but Christmas-branded (i.e. it resolves to the same UPC).

[1]: https://github.com/facebookresearch/segment-anything
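to make the UPC-resolution point concrete, here's a toy sketch: mapping a segmented crop to a UPC is a nearest-neighbor lookup against a catalog of known products, and the Christmas variant only resolves correctly if someone put it in the catalog. every vector and name below is invented:

```python
import math

# Toy illustration of the UPC-resolution step: even a perfect segmenter
# only yields a crop; mapping crop -> UPC needs an embedding plus a
# catalog of known products. All vectors and names here are made up.
catalog = {
    "UPC-0001": [1.0, 0.0, 0.0],        # regular packaging
    "UPC-0001-xmas": [0.9, 0.4, 0.0],   # same product, Christmas branding
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

crop_embedding = [0.95, 0.3, 0.0]  # pretend output of an image encoder
best = max(catalog, key=lambda upc: cosine(crop_embedding, catalog[upc]))
print(best)  # UPC-0001-xmas
```

drop the "-xmas" entry from the catalog and the crop still resolves to whatever looks closest, which is exactly the failure mode described above.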
