The big question is whether a species can eventually reach some point of collective enlightenment where it leaves these primitive impulses behind. But based on the current state of humanity, I'm not too optimistic.
AMD makes CPUs (and other silicon products). The more people who can take the software required to initialise those CPUs and put it to new and novel uses (some of which will be open source), the more CPUs AMD can sell. One imagines there would also be benefits in just getting your code and documentation out in the open: fewer NDAs and less support legwork to slog through with customers who buy a lot of your silicon. If the code is open, they can probably just use it.
Btrfs is a better choice for SQLite; I haven't seen that issue there.
What you're describing sounds like a bug specific to whichever OS you're using that has a port of ZFS.
By that logic, Elixir is even better for agents.
Also, the link at the bottom of the page is pretty much why I ditched Go: https://go.dev/blog/error-syntax
The AI landscape moves fast, and this conservative, backward-looking mindset from the new Go dev team doesn't match the forward-looking LLM engineering mindset.
I have used (and continue using) many tools and solutions that are not popular. Some of them make me happy, some do not. All of them bring practical advantages.
Not every good tool will become popular, and many popular tools are not good.
I think the corollary is bunk, though: that people who also look at the data or at constraints are acting irrationally, or that it's sophistry, or whatever. Plenty of people do actually engage in some amount of evaluation, and it's intellectually lazy to disregard that just because they may also have made an aesthetic or otherwise personal decision on some level.
I use a bunch of niche software, and it's difficult or impossible to completely separate my aesthetic preferences from the concrete reasons I also have for doing so -- my aesthetic preferences and my joy in using certain things actually stem from those concrete benefits in some cases!
For example, a call center might use AI as an excuse to fire a bunch of people. They would have liked to arbitrarily fire people a few years ago, but if they had, people would have noticed the reduction in quality and perhaps realized it was done out of self-serving greed (executives get bigger bonuses, look better, etc.). With the AI excuse, their service might be worse, perhaps inexcusably so, but no one will scrutinize it that closely, because there is a palatable justification for why it was done.
This is certainly the kind of effect I suspect underlies every story of AI-driven firing I've heard.