Because the vast majority of development is done by people with a very narrow skill focus on an extreme deadline. Are you actually comfortable with compression, networking, encryption, IO, and all the other taken-for-granted libraries that wind up daisy-chained together?
Because if you are, great, but at the same time, that's not the job description for like 90% of coding jobs. I don't expect my frontend guy to need to know encryption so he can review the form library he's using.
Relying on feature flags is a pie-in-the-sky solution, and realistically developers shouldn't have to be concerned with such environmental issues. Dependency declarations should be reliable 100% of the time, whether they're specified as version numbers or checksums. Since they aren't reliable in practice, vendoring build and runtime dependencies is the only foolproof method.
This isn't to say that larger teams shouldn't support specific distros directly, but my point is that smaller teams simply don't have the resources to do so.
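To make the checksum half of that concrete, here's a minimal Python sketch of the idea, assuming a hypothetical vendor.lock.json that maps each vendored archive to a pinned sha256; the build fails loudly if any dependency drifts from what was declared:

```python
import hashlib
import json
from pathlib import Path

# Hypothetical lockfile mapping vendored archives to pinned sha256 digests,
# e.g. {"vendor/libfoo-1.2.3.tar.gz": "ab34..."}.
LOCKFILE = Path("vendor.lock.json")


def sha256_of(path: Path) -> str:
    """Compute the sha256 digest of a file, streaming so large archives are fine."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_vendored_deps() -> None:
    """Raise if any vendored dependency no longer matches its pinned checksum."""
    pins = json.loads(LOCKFILE.read_text())
    for relpath, expected in pins.items():
        actual = sha256_of(Path(relpath))
        if actual != expected:
            raise RuntimeError(f"{relpath}: expected sha256 {expected}, got {actual}")


if __name__ == "__main__":
    verify_vendored_deps()
    print("all vendored dependencies match their pinned checksums")
```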
Maybe my laptop is running Alpine, and I patched some libraries to support musl, so now some methods are no-ops. As the developer, why does it matter to you?
Would you want me to set up some chroot or container with a glibc-based system installed, just so you can have consistent behavior on every computer that happens to run your code? Even the ones you do not own?
This is also much easier for the user, since they only need to download and run a single self-contained artifact that was previously (hopefully) tested to work as intended.
This has its own problems, of course, but it is the equivalent of vendoring build-time dependencies.
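For an interpreted language, a rough equivalent is bundling the application and its vendored dependencies into one archive. A sketch with Python's stdlib zipapp module (the build/ layout and myapp.cli:main entry point are made-up names):

```python
import zipapp

# Pack an application tree (the package plus any dependencies copied into the
# same build directory beforehand) into a single executable .pyz file that the
# user just downloads and runs.
zipapp.create_archive(
    source="build",                      # directory holding myapp/ and its vendored deps
    target="myapp.pyz",                  # the one self-contained artifact to ship
    interpreter="/usr/bin/env python3",  # shebang so the archive can be executed directly
    main="myapp.cli:main",               # entry point: build/myapp/cli.py must define main()
)
```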
The last part of my previous comment was specifically about the practice of distros carrying build-time libraries. This might've been acceptable for C/C++, which have historically lacked a dependency manager, but modern languages don't have this problem. It's a burden that distro maintainers shouldn't have to worry about.
No developer is being asked to support every distro. You just need to provide the code and the list of requirements. But some developers make the latter overly restrictive and tailor the project to support only one release process.
> This is also much easier for the user, since they only need to download and run a single self-contained artifact that was previously (hopefully) tested to work as intended
`apt install` is way easier than the alternative and more secure.
> It's a burden that distro maintainers shouldn't have to worry about.
There's no burden, because no one does it. You have dev versions of libraries because you need them to build the software that is being packaged. No one packages a library that isn't used by software available in the distro. It's a software repository, not a library repository.
What happens is that distro developers spend their time patching the upstream so it works with the set of libraries included in the distro. This has some arguable benefits for any user who wants to rebuild their software, at the cost of random problems added by that patching that fly under the radar of the upstream developers.
Instead, the GP's proposal of vendoring the dependencies solves that problem without breaking the compilation, and adds another set of issues that may or may not matter. I do argue that it's a good option to keep in mind and apply when necessary.
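As a sketch of what that looks like in practice (assuming a Python project that copies its pinned dependencies into a vendor/ directory, a layout I'm making up for illustration), the project can make its own tested copies win over whatever the distro patched:

```python
import sys
from pathlib import Path

# Prefer the project's vendored copies of its dependencies over whatever
# version the distro happens to ship (patched or otherwise).
VENDOR_DIR = Path(__file__).resolve().parent / "vendor"

if VENDOR_DIR.is_dir():
    # Putting the vendored tree first on sys.path means imports resolve to the
    # pinned copies that shipped (and were tested) with the code.
    sys.path.insert(0, str(VENDOR_DIR))
```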
That is not what's being asked.
As a developer, you just need to provide the code and the list of requirements, and maybe a guide on how to build and run the tests. You shouldn't need to care about where I find those dependencies (maybe I'm running your code as PID 1).
But a lot of developers want to be maintainers as well, and they want to enforce what can be installed on the user's system. (And no, I don't want Docker and multiple versions of nginx.)
But your use case is why GNOME has extensions: to alter the defaults and add stuff that they don't care about but you do. On macOS, you basically have to reverse engineer things and use private APIs.
I see this issue as well. A CLI setup with Emacs/Vim doing C/C++ development is very stable, because that's how the majority of Linux devs interact with Linux.
What puts a bad taste in my mouth is that when you mention issues outside of that setup, the usual response isn't "oh, this is an issue we need to fix", it's "well, your setup sucks, stop using VSCode/GNOME/Chrome/etc."
It is a massive moral failure, though. It shows that after two decades of work, the Linux community has been unable to build a simple, sane, functional, stable development environment better than Win32.
I prefer Incus, because you can't do ad hoc patching with Docker. Instead you have to rebuild the images, and that becomes a hassle quickly in a homelab setting. Incus has a VM feel while keeping Docker's management UX.