But looking back at it with an AI-trained nose, it does have a ton of AI-slop feel to it. LinkedIn slop is kind of interesting to read, but the repetition is pretty obvious. This reads like a 10th-grade English class essay trying to fit a very specific structure, like every section had to check off a list of requirements, which it did, because it's AI.
Not trying to start a flame war, but I can imagine you get quite a lot of range in the US if you live in one of those cardboard-inner-walls houses.
In the 30 cm thick solid-wall apartment I live in, my Pebble loses connection one room over; I almost need line of sight for it to work. Working at my desk, I get up, walk 5 meters to the bathroom, and the watch loses connection.
Maybe my smartphone has a weaker Bluetooth radio than other models, who knows...
I charge my Fenix 7 Solar maybe once a month. My use case is about 10 hours a week of activity tracking, usually trail runs. That goes up to about 20 hours a week in the summer, but I don't recharge much more often. I use Garmin Pay and occasionally listen to podcasts on my watch while running. I also use the on-watch maps quite a bit on my trail runs.
To each their own, but it sounds like your wife just couldn't get into the "happy path" routine of an Apple Watch user.
I've been using an Apple Watch since Series 5 introduced the always-on display. I wear it for roughly 23 hours a day, and charge it whenever I'm in the bathroom. I'm fine with this routine 99% of the time, but I'm also not someone who'd camp or stay outdoors for more than a night.
Before that, I was using an Amazfit Bip and was really proud of its 30+ day battery life. I very much prefer the features the Apple Watch has.
Even after a few years of battery degradation, I rarely recharge my watch more than once every 2-3 weeks.
It's kind of wild to me that folks would recharge a watch daily.
* No longer any pressure to contribute upstream
* No longer any need to use a library at all
* Verbose PRs created with LLMs that are resume-padding
* False issues filed by unsophisticated users leaning on LLM detection
Overall, we've lost the single meeting place that an open-source library provides, where everyone gathers to build a better commons. That part is true. It will be interesting to see what follows from this.
I know that for very many small tools, I much prefer to just "write my own" (read: have Claude Code write me something). A friend showed me a worktree manager project on GitHub, and instead of learning to use it, I just had Claude Code create one that was highly idiosyncratic to my needs: iterative fuzzy search, single-keybinding nav, and so on. These kinds of things have low ongoing maintenance, and when I want a change I don't need to consult anyone or anything like that.
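To give a flavor of what such a throwaway tool looks like, here's a minimal sketch of the core of a worktree picker with iterative fuzzy search. This is not the actual project; the helper names (`list_worktrees`, `fuzzy_match`, `pick`) are my own assumptions, and the fuzzy matching is a simple in-order subsequence check rather than anything clever.

```python
import subprocess

def fuzzy_match(query: str, candidate: str) -> bool:
    """Subsequence match: every query character appears in order in candidate."""
    chars = iter(candidate.lower())
    return all(ch in chars for ch in query.lower())

def list_worktrees() -> list[str]:
    """Parse `git worktree list --porcelain`, keeping only the worktree paths."""
    out = subprocess.run(
        ["git", "worktree", "list", "--porcelain"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.split(" ", 1)[1]
            for line in out.splitlines()
            if line.startswith("worktree ")]

def pick(query: str, paths: list[str]) -> list[str]:
    """Narrow the candidate list as the query grows (one call per keystroke)."""
    return [p for p in paths if fuzzy_match(query, p)]
```

The point isn't the code; it's that maintaining 20-odd lines like this yourself is cheaper than tracking someone else's tool and its opinions.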
But we're not at the point where I'd like to run my own Linux-compatible kernel or where I'd even think of writing a Ghostty. So perhaps what's happened is that the baseline for an open-source project being worthwhile to others has increased.
For the moment, for a lot of small projects, I much prefer their feature list and README to their code. An amusing inversion.
AI coding is kind of similar: you tell it what you want and it just sort of pukes it out. You run it, then mostly forget about it.
I think AI coding will hit a ceiling at some point, maybe, I don't know, but it'll become an essential part of "getting stuff done quickly".