Readit News
peppermint_gum · 2 years ago
>not even considering some hostile emails that I recently received from the upstream developer or his public rants on lkml and reddit

It feels like whenever the author of bcachefs comes up, it's always because of some drama.

Just the other day he clashed with Linus Torvalds: https://lore.kernel.org/lkml/CAHk-=wj1Oo9-g-yuwWuHQZU8v=VAsB...

My reading is that he's very passionate, so he wants to "move fast and break things" and doesn't get why the others aren't necessarily very happy about it.

ants_everywhere · 2 years ago
Hey at least it's not the worst behavior we've seen from a Linux file system creator...

I thought Carl Thompson's response was very good and constructive: https://lore.kernel.org/lkml/1816164937.417.1724473375169@ma...

What I don't understand is that IIUC Kent has his development git history well broken up into small tight commits. But he seems to be sending the Linux maintainers patches that are much larger than they want. I don't get why he doesn't take the feedback and work with them to send smaller patches.

EDIT: The culture at Google (where Kent used to work) was small patches, although that did vary by team. At Google you have fleet-wide control and can roll back changes that looked good in testing but worked out poorly in production. You can't do that across all organizations or people who have installed bcachefs. Carl pointed out that Kent seemed to be missing some social aspects, but I feel like he's also not fully appreciating the technical aspects behind why the process is the way it is.

koverstreet · 2 years ago
Honestly, I think I just presented that pull request badly.

I included the rcu_pending and vfs inode rhashtable conversion because I was getting user reports that it fixed issues that were seriously affecting system usability, and because they were algorithmically simple and well tested.

Back in the day, on multiple occasions Linus and others were rewriting core mm code in RC kernels; bcachefs is still experimental, so stuff like this should still be somewhat expected.

complaintdept · 2 years ago
> Hey at least it's not the worst behavior we've seen from a Linux file system creator...

I think that dubious distinction would go to Hans Reiser.

rixed · 2 years ago
It's not about the size of each individual patch but about the large amount of changes in total *during the freeze*.

ralferoo · 2 years ago
It's very clear from that thread that he doesn't understand the purpose of the stable branch. It doesn't mean "stable" as in "the best possible experience", it means it as in "this code has been tested for a long period of time with no serious defects found" so that when the stable branch is promoted to release, everything has undergone a long testing period by a broad user base.

If a defect is found, the change to a stable branch should literally be the minimal code change that fixes the reported issue. Ideally, if it's a newly introduced issue (i.e. introduced since being on the stable branch), the problematic code should be reverted and a different fix for the original defect applied instead (or left alone if it's deemed less of an issue than taking another speculative fix). Anything that requires a reorganisation of code is, by definition, not a minimal fix. Maybe it's the correct long-term solution, but that work belongs on the unstable branch; for the stable branch, the best fix is the simplest workaround. If there isn't a simple workaround, the best fix is to revert everything back to the previous stable version and keep iterating on the unstable branch.
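
The revert-first workflow described above can be sketched in a throwaway repo (the file name and commit messages here are made up purely for illustration):

```shell
# Minimal sketch of the revert-first stable workflow in a scratch repo.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git config user.email "demo@example.com" && git config user.name "demo"

git commit -q --allow-empty -m "stable baseline"

# A speculative fix lands on the stable branch...
echo "speculative rework" > fix.c
git add fix.c
git commit -q -m "speculative fix: rework allocator"

# ...and turns out to regress stable. Revert it wholesale rather than
# patching on top of it; the rework can keep iterating on unstable.
git revert --no-edit HEAD
git log --oneline
```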

The guy even admits it as well with his repeated "please don't actually use this in production" style messages - it's hard to give a greater indication than this that the code isn't yet ready for stable.

I can understand why, from his perspective, he wants his changes in the hands of users as soon as possible - it's something he's poured his heart and soul into, and he strongly believes it will improve his users' experience. It's also the case that he is happy running the very latest code and probably has more confidence in it than in an older version. The rational choice from his perspective is to always use the latest code. But, discounting the extremely unlikely situation that his code is entirely bug free, that just means he hasn't yet found the next serious bug. If a big code change is rushed out into the stable branch, it just increases the likelihood that any serious bug won't have the time it needs in testing to give confidence that the branch is suitable for promotion to release.

qalmakka · 2 years ago
> The guy even admits it as well with his repeated "please don't actually use this in production" style messages - it's hard to give a greater indication than this that the code isn't yet ready for stable.

True that, and yet the kernel has zero issues keeping Btrfs around even though it's been eating people's data since 2010. Kent Overstreet sure is naive at times, but I just can't help sneering at the irony that an experimental filesystem is arguably better than a 15-year-old one that's been in the Linux kernel for more than a decade.

_ph_ · 2 years ago
It seems to be a difficult situation: he has bug fixes against the version in the stable kernel for bugs which haven't been reported. I can see both perspectives: on stable you don't want to do development, but also you want all bugfixes you can get. I can also see the point of Linus, who wants just to add bug fixes and to minimize the risk of introducing new bugs.

Considering that Kent himself warns against general use right now, I don't quite see the urgency to get the bug fixes out - in my understanding Linus would happily merge them in the next development kernel. And whoever is set to run bcachefs right now might also be happy to run a dev kernel.

Arch-TK · 2 years ago
He is not submitting changes for stable. He is submitting non-regression fixes after the merge window. It's clear he understands the rules and the reasons for them, but feels that his own internal development process is equally effective at reducing the chance of major regressions being introduced in such a PR, such that he can simply persuade Linus to let things through anyway.

Whether this internal process gives him a pass for getting his non-regression fixes in after the merge window is at the end of the day for Linus to decide. And Linus is finally erring on the side of "Please just do what everyone else is doing" rather than "Okay, fine Kent, just this once".

I would say it's ironic to start a comment saying: "It's very clear from that thread that he doesn't understand the purpose of the stable branch" when it's "very clear" from your opening paragraph that you don't understand the thread.

2OEH8eoCRo0 · 2 years ago
I thought stable means "doesn't change"?
rubiquity · 2 years ago
Let’s not misrepresent Kent over a single incident of sending too much after a merge window. He’s extremely helpful and nice in every interaction I’ve ever read.
qalmakka · 2 years ago
My 2 cents: these are the types of people that actually get the job done. All good software in my experience starts thanks to overachieving embodiments of Pareto's principle - people who can do alone in months what a team of average-skilled developers does in years.

In this industry it's very, very easy to run in circles, piling stuff on top of stuff without realising you're in a bikeshedding loop, overengineering, or simply wasting time. We need people who just want to push forward and take responsibility for anything that breaks; otherwise I'm sure that in 30 years we'd all still be using the same stuff we've always used, because it's just human nature to stick with what we know works, quirks and all.

ajb · 2 years ago
> he wants to "move fast and break things"

That's not how I read that thread. This is just about where the diligence happens, not whether it can be avoided, and about exactly how small a fix has to be to be mergeable in the fixes phase of kernel development.

I don't see that thread as being particularly angry either. There have been ones where both of them have definitely lost their cool; here they are having a (for them) calm disagreement. Linus is just not going to merge this one until the next development phase, which is fine.

There have been arguments involving this developer that do raise questions; I just don't see this as one of them.

htpart · 2 years ago
That's a normal LKML conversation. Nowhere do I see actual bugs pointed out; in fact, the only testimony, from Carl, was that bcachefs has been quite stable so far.

This is just about following procedure. There are people who follow procedure and introduce many bugs, and people who don't but write perfect software.

The bcachefs author cautiously marks bcachefs as unstable, which is normal for file systems. The only issue here is that the patch touched other areas, but in the kernel development model Linus is free not to pull it.

jt2190 · 2 years ago
Is bcachefs-tools going into the mainline distros or into something that’s meant to be less stable and experimental? Linus makes it sound like there’s a more appropriate place for this work.

Edit: Reading through the thread, it seems like there is a claim of rigorous but effectively private testing. Without the ability to audit those results easily it’s causing a lot of worry.

ralferoo · 2 years ago
I don't think Linus is particularly concerned about bcachefs-tools, and whether a particular distributions ships with it or not isn't a concern for the kernel. Presumably though, distributions that don't ship the tools may also want to disable it in the kernel, although I'd imagine they'd leave it alone if it was previously supported.

Linus' complaint was about adding feature work into a supposed minor bug fix, especially because (going from Linus' comments) they were essentially refactors of major systems that impacted other areas of the kernel.

ris · 2 years ago
Great and now the top thread on the HN discussion is about that drama only tangentially referenced in the article.

JonChesterfield · 2 years ago
This would be a good one for Rust enthusiasts to weigh in on.

The issue raised is that some program written in rust insists on a very specific version of various dependencies. If other people change the metadata, it builds and seems to run ok with different versions. (Developer on reddit clarifies that it builds and does the wrong thing, and recommends dropping Debian as a solution).

Linux likes to pack a finite set of versions of libraries (ideally just one version at a time) and use that dependency for multiple programs, totally at odds with the vendoring specific versions strategy.

I'm not clear what a solution is to this one. In the specific case, sure, drop it from Debian. But this isn't the only program choosing to vendor dependencies instead of use them from the environment.

rlpb · 2 years ago
Maintenance is much more practical if everything in your dependency tree uses the same version of every dependency. Achieving this in practice is much of the work distribution maintainers have to do.

Doing this has a couple of key advantages:

1. when a security vulnerability needs patching in a stable distribution release, the security team only have to patch one version instead of a dozen or more different versions bundled all over the place

2. a library that depends on dependencies A and B, both of which depend on X, can actually work properly if it needs to pass API objects created by X across between A and B, since X is of the same version

In an ecosystem where it's considered acceptable to "pin" versions of dependencies and also call any system that doesn't use the pinned versions "unsupported", both of the above two cases become impractical.

Whether you use shared libraries or static libraries, the above matters still exist.
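
Point 2 can be sketched with a hypothetical Cargo manifest (the crate names `a`, `b`, and `x` are invented for illustration):

```toml
# Hypothetical top-level application manifest.
[dependencies]
a = "1.0"   # a's own manifest declares: x = "1"
b = "1.0"   # b's own manifest declares: x = "2"
# Cargo resolves this by compiling x twice, once per major version.
# That builds fine, but the two copies of x define distinct types, so
# an object created by a's x cannot be handed to b's x: exactly the
# "same version of X" property described above.
```
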

orra · 2 years ago
> 2. a library that depends on dependencies A and B, both of which depend on X, can actually work properly if it needs to pass API objects created by X across between A and B, since X is of the same version

That feels like an advantage for the developer, not the distro maintainer.

I find that amusing, because you're saying it's easy if they can use one version of the library. Well, that's also true if they're writing an application, hence pinning.

rcxdude · 2 years ago
Generally speaking, libraries don't pin versions; applications do. (In Rust-land, a Cargo.lock is respected for the folder you're building, not for your dependencies. A dependency can specify an exact version if it wants, but an application can override that kind of thing, and it's generally not considered a good idea.) This makes 2) a non-issue for the ecosystem: if an application needs to pass objects from X between A and B, then it will need to pin a single version of X. 1) is more of a disadvantage, but it's unclear to me that the effort distro maintainers put into fixing on a single version actually results in a reduction of effort overall.
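
A rough illustration of these Cargo semantics, using `serde` as a stand-in (the versions are illustrative, not a recommendation):

```toml
# Library crates declare version *ranges*; a bare "1.0" has caret
# semantics, i.e. it matches >=1.0.0, <2.0.0, so Cargo is free to
# unify everyone on, say, the newest 1.x it can find.
[dependencies]
serde = "1.0"

# An exact pin, discouraged for libraries, would look like:
# serde = "=1.0.100"

# Exact versions are frozen only by the top-level application's
# Cargo.lock; a dependency's own Cargo.lock is ignored when it is
# built as a dependency.
```
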
phlip9 · 2 years ago
Maintenance is much more practical when you use the versions upstream tests in their CI and not whatever mishmash of ancient/silently incompatible deps that each distro separately decides to combine together.
rfoo · 2 years ago
No.

1. Rust is as hostile to dynamic libraries as glibc to static.

2. With everything static you have to rebuild every dependent package on any security patch anyway. If you meant that with multiple versions maintainers have to backport patches to multiple versions: maybe don't backport at all? Some people appreciate having backports. Users of software written in Rust do NOT. So why bother doing backports for them?

3.

> a library that depends on dependencies A and B, both of which depend on X, can actually work properly

This will be detected during compile (since it's all static! sorry dynamic linking fans) and dealt with.

viraptor · 2 years ago
This isn't Rust specific. The same issue exists in all languages where the versions can be restricted at project level. It's an issue in Debian because they can't handle multiple concurrent versions (beyond renaming the package) and want every package built without internet access.

But that same issue affects Ruby, Python, etc. if the project specifies strict versions. And if you deal with filesystems, you really want strict versions - I understand why the author wants to keep it that way.

It's more of a self-inflicted Debian policy pain rather than anything to do with Rust. The author could be nice and vendor, but also he's completely right to refuse. The request is basically "we put handcuffs on ourselves, so please you do the work instead".

AshamedCaptain · 2 years ago
In fact, I am tired of hitting the same issue, e.g. when evaluating machine learning tools with Python. It's as if everyone just says "dependency = ${exact_version_I_have}" and never considers anything else. It's not helped by the fact that most of these projects break ABI every other day, which I still think is just plain evil.

Please don't say "just use containers/virtualenvs/vendoring/whatever" because at some point you obviously want these "containers" to use your real hardware. And your GPU driver is only going to be at one version, no matter how many statically linked executables you have.

cogman10 · 2 years ago
It should be noted that, particularly for rust, it's a silly policy as rust does not support dynamic linking of rust libraries. All rust applications are statically linked.

So trying to force all projects using foo onto the same version of foo is just a huge headache with no real benefit to anything.

nubinetwork · 2 years ago
> This isn't Rust specific. The same issue exists in all languages where the versions can be restricted at project level

Can confirm, I occasionally use an ai program that has to hard code all of their python dependencies, because it's the only way to get it to compile, let alone run properly... and then they go and change the underlying package manager, and figure out they have to hard code even more... it's a bloody mess, but thankfully you can just run it as a docker container once it's all working...

JackSlateur · 2 years ago

> if you deal with filesystems, you really want strict versions

How is everybody else doing, then? What is specific about filesystems here?

This is just bad dependency management, nothing more. Vendoring is a PITA.

1oooqooq · 2 years ago
it is rust specific because rust is the first attempt to replace proper systems engineering languages with one that, while nicer on memory management and overall ergonomics, is worse in tooling. cargo brings many malpractices from java/javascript (maven, npm, etc) that were always shunned in systems engineering and, mark my words, will be a security nightmare for linux in the near future.
agwa · 2 years ago
The author would be fine with vendoring. The problem is that Debian doesn't allow it.
haileys · 2 years ago
I think this is the same shape of issue that I’ve experienced with Debian for as long as I’ve used it - close to 15 years now.

Debian is a great OS, but it targets stability and long term support for its releases. That just isn’t compatible with newer, faster moving software that’s still working towards stability. I remember it being an issue when I was playing around with Mono around 2010 ish, and it’s an issue now with bcachefs - a very new and fast moving technology.

For motivated users, the solution is to install bcachefs-tools direct from upstream, or from a third party packager (if one exists). When bcachefs stabilises, I’m sure it’ll find its way back into Debian official.

aragilar · 2 years ago
What I think is somewhat interesting in this case (if I've read the post correctly) is that half the changes were for newer versions of dependencies. If that's the case, then I'm inclined to start wondering what's going on with bcachefs upstream (and the whole dependency tree), if they're not keeping up with versions.
ectospheno · 2 years ago
I love Debian for this stability. It is a great host OS for all my VMs.
IshKebab · 2 years ago
I agree. Debian is a waterfall OS living in an agile world. There's a fundamental mismatch between Debian's philosophy and the reality of today's open source software ecosystem.
BoingBoomTschak · 2 years ago
Gentoo has SLOTs and they work well. But for Go and Rust, I think they chose the vendoring path because it's just too damn Don Quixotesque to try to mirror their NPM tier package repositories.
foobarqux · 2 years ago
As others have said this isn't just Rust, distro "release" packaging is untenable now for non-base-system packages: there are an ever increasing number of packages, they release with updates users need/want at a rate far faster than the distro release schedule and have too many conflicting dependencies.

To deal with this today you install a stable base system with an overlaid "user" package system, so Debian/Fedora + nixpkgs/guix/brew/snap/flatpak/mpr/pip/cargo/etc. Unfortunately, although you can get most packages on nixpkgs, you can't get all of them, so you'll need to install multiple package systems; it becomes a nightmare to maintain (how do you remember all the package managers you have to update during the xz vulnerability?) and extremely bloated due to duplicated dependencies (especially for GUI packages that require GNOME or KDE).

You do get pretty far for CLI-only packages by just adding nixpkgs though. Too bad it's so terrible to use.

MPR is also pretty interesting: You basically leverage the AUR for Debian and with a few changes you could probably make the dependency names translation automatic and transparent. It solves the bloat problem (since you'll mostly use system packages) but doesn't help with library versions.

ploxiln · 2 years ago
Shouldn't the filesystem utilities be base-system packages, though? Like e2fsprogs, xfsprogs ... I guess rust just isn't appropriate for base-system packages, because the dependencies move too fast, and so rust components need to depend on different specific versions of all dependencies, and are "horrifically out-of-date" in just a few months ... and rust developers wouldn't stoop so low as to recognize that e.g. bindgen is a particularly hairy and touchy dependency so they should just vendor the bindings output by bindgen, rather than the entire bindgen tool plus all other dependencies in full ...
lambda · 2 years ago
> Developer on reddit clarifies that it builds and does the wrong thing, and recommends dropping Debian as a solution

Can you link to the post on Reddit you are referring to?

pornel · 2 years ago
> rust insists on a very specific version of various dependencies

It only insists on semver-compatible versions (if a Rust/Cargo package specifies libfoo = "5.1", it will work with libfoo 5.9). It's one version per major version, not that different from Debian packaging both "libfoo5" and "libfoo7" when both are needed.

The difference is that Cargo unifies versions by updating them to the newest compatible release, while Debian unifies them by downgrading to old unsupported versions, ignoring compatibility, reintroducing bugs, and disabling required functionality.

psibi · 2 years ago
Related discussion in the bcachefs subreddit: https://www.reddit.com/r/bcachefs/comments/1f4erbg/debiansta...

Kent's reply in that thread has more details.

yshui · 2 years ago
This is pretty dumb on Debian's part. First of all, I don't understand why they insist crate dependencies must be pulled from their repository. They are just source code, not built binaries. AFAIK no other distro does this; what they do is download crates from crates.io (`cargo vendor` is a command that does this automatically) and build against that. Arch does this, Gentoo does this, NixOS does this, so why does Debian have to be different?
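
For reference, a sketch of what the `cargo vendor` route looks like: the command copies every dependency's source into `./vendor` and prints a snippet like the one below, which you add to `.cargo/config.toml` so builds use the local copies instead of the network:

```toml
# In .cargo/config.toml, as suggested by `cargo vendor`'s own output:
# redirect the crates.io source to the checked-in ./vendor directory,
# so the build needs no network access.
[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"
```
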

Secondly, even if they have to use crates from their repository, I don't understand what's so hard to just have multiple versions of the same crate? That will solve the problem too.

This is just all-around weird what Debian is doing.

(Full disclosure, I am the one who introduced the first piece of Rust code into bcachefs-tools)

acka · 2 years ago
Debian has a Social Contract[1] as well as guidelines (the DFSG)[2] regarding the commitment to only distribute free and open source software, which all package maintainers must adhere to. This means that package maintainers must check the licenses of source code and documentation files, clear up any ambiguities by talking to upstream, and (as a last resort) even excise code and/or documentation from Debian's copy of the codebase if it doesn't meet the requirements of the DFSG.

In practise, this means that Debian has to make its own copy of the source code available from a Debian-controlled repository, to ensure that no (accidental or otherwise) change to an upstream source archive can cause non-DFSG compliant Debian source or binary packages to be distributed.

[1] https://www.debian.org/social_contract

[2] https://wiki.debian.org/DebianFreeSoftwareGuidelines

koverstreet · 2 years ago
That justification doesn't work here - I provide vendored tarballs, so the source code availability argument is moot.
ouEight12 · 2 years ago
> Arch does this, Gentoo does this, NixOS does this, why does Debian has to be different?

I say this as someone who ran Gentoo for years and daily drives Arch today.

Because sometimes you don't want entire swaths of your server being rebuilt/tinkered with on a regular basis under the hood. "Move fast, break everything" is great in dev/test land, or a prod environment where the entire fleet is just containers treated like cattle, but contrary to what the SREs of the valley would believe, there's a whole ecosystem of 'stuff' out there that will never be containerized, where servers are still treated like pets, or rather, at least "cherished mules", that just do their job 24/7 and get the occasional required security updates/patches and then go right back to operating the same way they did last year.

cesarb · 2 years ago
> AFAIK there is no other distro that does this, what they do is that they would download crates from crates.io (`cargo vendor` is a command that does this automatically) and build against that.

AFAIK, most traditional distributions do that, not just Debian. They consider it important that software can be rebuilt, even in the far future, with nothing more than a copy of the distribution's binary and source packages. Doing anything which depends on network access during a build of the software is verboten (and AFAIK the automated build hosts block the network to enforce that requirement).

Keep also in mind that these distributions are from before our current hyper-connected time; it was common for a computer to be offline most of the time, and only dial up to the Internet when necessary. You can still download full CD or DVD sets containing all of the Debian binaries and source code, and these should be enough to rebuild anything from that distribution, even on an air-gapped computer.

> Secondly, even if they have to use crates from their repository, I don't understand what's so hard to just have multiple versions of the same crate? That will solve the problem too.

That is often done for C libraries; for instance, Debian stable has both libncurses5 and libncurses6 packages. But it's a lot of work, since for technical reasons, each version has to be an independent package with a separate name, and at least for Debian, each new package has to be reviewed by the small ftpmaster team before being added to the distribution. I don't know whether there's anything Rust-specific that makes this harder (for C libraries, the filenames within the library packages are different, and the -dev packages with the headers conflict with each other so only one can be installed at a time).

There's also the issue that having multiple versions means maintaining multiple versions (applying security fixes and so on).

kijin · 2 years ago
> There's also the issue that having multiple versions means maintaining multiple versions (applying security fixes and so on).

This is the most important part. Debian LTS maintains packages for 5 years. Canonical takes Debian sources, and offers to maintain their LTS for 10 years. Red Hat also promises 10 years of support. They don't want anything in the core part of their stable branches that they can't promise to maintain for the next 5-10 years, when they have no assurance that upstream will even exist that long.

If you want to move fast and break things, that's also fine. Just build and distribute your own .deb or .rpm. No need to bother distro maintainers who are already doing so much thankless work.

jonhohle · 2 years ago
I’d consider the issue to be the opposite. Why does every programming language now have a package manager and all of the infrastructure around package management rather than rely on the OS package manager? As a user I have to deal with apt, ports, pkg, opkg, ipkg, yum, flatpak, snap, docker, cpan, ctan, gems, pip, go modules, cargo, npm, swift packages, etc., etc., which all have different opinions of how and where to package files.

On packaged operating systems (Debian, FreeBSD) - you have the system’s package manager to deal with (apt, pkg respectively). I can have an offline snapshot of _all_ packages that can be mirrors from one place.

IMHO, every programming language having its own package system is the weird thing.

MobiusHorizons · 2 years ago
If you are a developer you almost always eventually need some dependencies that don’t ship with the os package manager, and once some of your dependencies are upstream source, you very quickly find that some dependencies of the sources you download rely on features from newer versions of libraries. If you have multiple clients, you may also need to support both old and new versions of the same dependencies depending on who the work is for. Package managers for a Linux distribution have incompatible goals to these (except maybe nix)
epage · 2 years ago
We want to make our software available to any system without every library maintainer being a packaging expert in every system.

The user experience is much better when working within these packaging systems.

You can control versions of software independent of the machine (or what distros ship).

Or in other words, the needs of software development and software distribution are different. You can squint and see similarities, but they fill different roles.

SkiFire13 · 2 years ago
Because OS packaging stuff sucks. It adds an enormous barrier to sharing and publishing stuff.

Imagine that I make a simple OS-agnostic library in some programming language and want to publish it to allow others to use it. Do I need to package for every possible distro? That's a lot of work, and might still not cover everyone. And consider that I might not even use Linux!

A programming language will never become successful if that is what it takes to build up a community.

Moreover, in the case of Rust, distros are not even forced to build using crates.io. The downside, however, is that they have to package every single required dependency version, and because publishing and updating crates is so easy, those versions have become quite numerous and change much more often than distros would like.

The funny thing is that in the C/C++ world it's common to reimplement functionality due to the difficulty of using some dependencies. The result is not really different from vendoring dependencies, except for the reduced testing of those components, and yet this is completely acceptable to distros while vendoring is not. It makes no sense!

pornel · 2 years ago
1. Because Windows/macOS/iOS/Android don't have a built-in package manager at the same granularity of individual libraries, but modern programming languages still want to have first-class support for all these OSes, not just smugly tell users their OS is inferior.

2. Because most Linux distros can only handle very primitive updates based on simple file overwrites, and keep calling everything impossible to secure if it can't be split and patched within limitations of C-oriented dynamic linker.

3. Because Linux distros have a very wide spread of library versions they support, and they often make arbitrary choices on which versions and which features are allowed, which is a burden for programmers who can't simply pick a library and use it, and need to deal with extra compatibility matrix of outdated buggy versions and disabled features.

From the developer's perspective with language-specific packages:

• Use 1 or 2 languages in the project, and you only need to deal with a couple of package repositories, which give the exact deps you want, and it works the same on every OS, including cross-compilation to mobile.

From the developer's perspective using OS package managers:

• Different names of packages on each Linux distro, installed differently with different commands. There's no way to specify deps in a universal way. Each distro has a range of LTS/stable/testing flavors, each with a different version of a library. Debian has versions so old they're worse than not having them at all, plus bugs reintroduced by the removal of vendored patches.

• macOS users may not have any package manager, may have an obscure one, and even if they have the popular Homebrew, there's no guarantee they have the libs you want installed and kept up-to-date. pkg-config will give you temporary paths to precise library version, and unless you work around that, your binary will break when the lib is updated.

• Windows users are screwed. There are several fragmented package managers, which almost nobody has installed. They have few packages, and there's a lot of fiddly work required to make anything build and install properly.

• Supporting mobile platforms means cross-compilation, and you can't use your OS's package manager.

OS-level packaging suuuuuuuucks. When people say that dependency management in C and C++ is a nightmare, they mean the OS-level package managers are a nightmare.

trueismywork · 2 years ago
Programming language package managers are more for development than deployment.
Twirrim · 2 years ago
>AFAIK there is no other distro that does this, what they do is that they would download crates from crates.io (`cargo vendor` is a command that does this automatically) and build against that.

Note that your examples are all bleeding-edge / rolling distributions. Debian and the non-bleeding-edge distributions go a different route and focus on reproducibility and security, among other things.

With the "get from crates.io" route, if someone compromises/hijacks a crate upstream, you're in trouble immediately. Requiring vendored sources means at least some level of manual action by maintainers before that compromised source gets into the Debian repositories and is distributed out to users. As you get in towards distributions like RHEL, they get even more cautious on this front.
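For reference, the `cargo vendor` workflow mentioned above copies every dependency's source tree into a local `vendor/` directory and prints a config snippet that redirects builds away from crates.io. A sketch of that emitted config (these are the default paths, not anything Debian-specific):

```toml
# Snippet printed by `cargo vendor`, to be placed in .cargo/config.toml.
# It tells Cargo to read all crates.io dependencies from the local
# ./vendor directory instead of downloading them at build time.

[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"
```

Once the vendored sources are committed, the build is hermetic: a compromised upstream crate can only enter via an explicit re-vendoring commit that a maintainer has a chance to review.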

doublepg23 · 2 years ago
NixOS is not a rolling distro.
Palomides · 2 years ago
debian is substantially older than all of those distros, and you named three that happen to have, in my view, been designed specifically in reaction against the debian style of maintenance (creating a harmonious, stable set of packages that's less vulnerable to upstream changes), so it's strange to say that debian is the odd one out

keeping a debian-hosted copy of all source used to build packages seems like a very reasonable defensive move to minimize external infrastructure dependencies and is key in the reproducible builds process

there's definitely a conflict with the modern style of volatile language-specific package management, and I don't think debian's approach is ideal, but there's a reason so many people use debian as a base system

also it seems like the idea of maintaining stable branches of software has fallen out of vogue in general

wojciii · 2 years ago
Perhaps they want to have builds they can reproduce years later?

When libraries are removed you have to update your build. This will not be the case if you use your own infrastructure (crates.io mirror).

cozzyd · 2 years ago
Part of a distros job is to audit all the packages (even if just to a minimal extent), and in many cases patch for various reasons. This is much harder if the source is external and there are N copies of everything.
tsimionescu · 2 years ago
Because Debian and similar distros have a goal of maintaining all of the software that users are expected to use. And this means they commit to fixing security issues in every single piece of software they distribute.

A consequence of this is that they need to fix security issues in every library version that any application they distribute uses, including any statically-linked library. So, if they allow 30 applications to each have their own version of the same library, even Rust crates, and those versions all have a security issue, then the Debian team needs to find or write patches for 30 different pieces of code. If instead they make sure that all 30 of those applications use the same version of a Rust crate, then they only need to patch one version. Maybe it's not 30 times less work, but it's definitely 10 times less work. At the size of the Debian repos, this is an extremely significant difference.

Now, it could be that this commitment from Debian is foolish and should be done away with. I certainly don't think my OS vendor should be the one that maintains all the apps I use - I don't even understand the attraction of that. I do want the OS maintainer to handle the packaging of the base system components, and patch those as needed for every supported version and so on - and so I understand this requirement for them. And I would view bcachefs-tools as a base system component, so this requirement seems sane for it.

Brian_K_White · 2 years ago
It's pretty dumb (your words if you don't like them) not to understand (your words if you don't like them) that, how, and why, different distributions "has to be different". Debian is not Arch or Nix. A tractor is not a race car is not a submarine, even though they are all vehicles.

This seems like a very basic concept for anyone purporting to do any sort of engineering or designing in any field.

If it were me, I would not be so eager to advertise my failure to understand such basics, let alone presume to call anyone else pretty dumb.

Brian_K_White · 2 years ago
The same bcachefs that Linus just ripped apart for inconsiderate and irresponsible PRs?

Sensing a pattern here, and it's not Debian and the kernel. I mean, can you even point at two more proven, solid, large, long-term projects that have shown how to do it right?

h2odragon · 2 years ago
this is the "static vs shared libraries" fight again. no one remembers the first iterations, and there are more layers of "stuff" in the way now, so no one sees the actual issue for what it is.

"shared libs" was the product of storage constraints; a bobble in the smooth pace of progress that shouldn't be needed now. Our faith in "the way things are done now is the right way" and our (justified) fear of messing with the foundations of running systems will make excising the notion take longer.

mananaysiempre · 2 years ago
It’s rather the “vendoring is hostile to distros” fight. It’s adjacent to the shared vs static one but not the same: it’s absolutely possible to have statically linked binaries tracking distro-wide dependency versions, provided the distro’s build system is robust and automated enough. Not all are.
crote · 2 years ago
"Vendoring is hostile to distros" in turn directly leads to "loose dependencies are hostile to upstream developers".

Distros want to be able to mix & match upstream code with whatever version of the dependency they happen to have lying around in a drawer somewhere. Understandable, as the alternative is having to support dozens of versions of the same library.

Upstream developers want to use a single fixed version of their dependencies for each release. Understandable, as the alternative is having to support a massive test matrix for every possible version of every dependency, just on the odd chance that some obscure distro wants to mix a brand-new libfoo with a 5-year-old libbar.

And all of this boils down to "versioning is actually really hard", because even with SemVer one's bug is another's feature[0] so there's still no unambiguous way to choose a version number, which in turn means upstream can't choose a version range for their dependencies and be reasonably sure that simply testing with the latest version won't result in breakage with other versions in that range.

If dependency management was easy, we would've solved it decades ago.

[0]: https://xkcd.com/1172/

mrweasel · 2 years ago
Shared libraries also mean that you can patch a large number of programs, simply by updating the shared library.

If you have a large number of statically linked programs, you need to recompile all of them, which means that when a bug shows up you need to know exactly which applications use the affected library, then recompile and redeploy all of those applications.

Without knowing, I imagine that's also partly why Debian wants all programs to use dependencies installed via APT: it makes it easier to keep track of which other packages need to be rebuilt.

Personally I'm a huge fan of having applications ship with all their dependencies, like statically compiled Go or Rust binaries, Java jar/war/ear files, or even containers. It's super easy to deploy, but you do need to be constantly rebuilding and keeping track of dependencies, and I'm not seeing a ton of people doing that.

For Python programs we use apt as a package manager for libraries, because that removes the burden of keeping on top of security issues, to some extent. We do sometimes need to back-port or build our own packages, but we try to push those upstream whenever possible.

viraptor · 2 years ago
This is unrelated to shared libraries. Those strict versions are for compile-time dependencies.
koverstreet · 2 years ago
I would rather something not be packaged at all than packaged badly; this whole experience reads like a lesson in what not to do.

What the author left out in his blog post is that I specifically explained why this was going to be an issue and what was going to happen when he and I first talked about Debian packaging; and why for a package that needs to work correctly for the system to boot (fsck, mount helper, encryption unlock), playing these kinds of games with swapping dependencies out was not a good idea.

But with Debian and Fedora people alike it's been like talking to a brick wall. "No, your concerns don't matter, this is our policy and we're going to do what we want with your code". They've made it very clear that I have no voice or say in how they package it.

So then he breaks the build by switching to the packaged version of bindgen, an old unsupported version.

And he makes no attempt to debug, doesn't even send me the build error until months later.

As a result, Debian users were stuck on a broken version of -tools that wasn't passing mount options correctly, which meant that when a drive died and they needed to mount in degraded mode, they weren't able to. This happened to multiple people - the ones that bothered to report it, at least. This was quickly fixed upstream, but as far as I know it's still broken in Debian; and Jonathan Carter wasn't fielding any of those bug reports, I was.

I'm tired, frustrated and demoralized by this drama and tired of constantly having to respond to it. This has taken up an enormous amount of my time.

I really wish this guy hadn't packaged -tools for Debian in the first place. I never asked for it. I have enough on my plate to do without a bunch of distro drama to deal with as well.

smashed · 2 years ago
> I would rather something not be packaged at all than packaged badly; this whole experience reads like a lesson in what not to do.

You might consider adding a big warning in your official documentation about unsupported distribution packages.

Add links to relevant issue tracker/bug reports or mailing list discussions, saying that until such-and-such issues are resolved, the official statement is that the distribution package is unsupported, not recommended, and deemed dangerous to use.

This is as much leverage as you can have with the distribution community. Then wait for them to upstream patches and attempt to fix the issues. Accept fixes as you consider appropriate or not.

It's also important to at least respect the distribution's opinions with regard to having only one version of a library's major release. Just respect and understanding; you don't have to agree.

Also, all the problematic libs cited by the packager are 0.x.x-numbered, which suggests a very young and immature ecosystem of dependencies. Of course this is bound to cause pain for packagers. I think it speaks volumes about the high level of interest in bcachefs that they actually tried to make it work instead of not bothering.

mahkoh · 2 years ago
0.2 and 0.4 are different "major releases" of rust crates as you say. The major release is determined by the first non-0 component in the version number. The issue is that debian appears to only allow one version even if there are multiple major versions.

If debian is fine with packaging versions 2.0 and 4.0 but not 0.2 and 0.4, then debian does not understand rust version numbers.
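The rule being described - that the "major" version of a crate is the leftmost non-zero component - can be sketched as a tiny compatibility check. This is only an illustration of Cargo's default (caret) semver semantics, not Cargo's actual code:

```rust
// Sketch of Cargo's default ("caret") compatibility rule: two versions are
// compatible if they agree in every component up to and including the
// leftmost non-zero one. Illustration only, not Cargo's real implementation.
fn parse(v: &str) -> Vec<u64> {
    v.split('.').map(|p| p.parse().unwrap()).collect()
}

fn compatible(a: &str, b: &str) -> bool {
    let (a, b) = (parse(a), parse(b));
    for i in 0..a.len().min(b.len()) {
        if a[i] != b[i] {
            return false; // differ before reaching a non-zero component
        }
        if a[i] != 0 {
            return true; // matched the leftmost non-zero component
        }
    }
    true
}

fn main() {
    assert!(compatible("1.2.3", "1.9.0"));  // same major: compatible
    assert!(compatible("0.2.1", "0.2.9"));  // 0.2.x: the "major" is the 2
    assert!(!compatible("0.2.0", "0.4.0")); // 0.2 vs 0.4: breaking change
    assert!(!compatible("1.0.0", "2.0.0")); // different majors
    println!("ok");
}
```

By this rule, packaging 0.2 and 0.4 side by side is exactly as necessary as packaging 2.0 and 4.0 side by side.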

koverstreet · 2 years ago
nah, I'm not taking the fight to Jörg Schilling levels :)
haileys · 2 years ago
Thanks for taking the time to comment here!

> So then he breaks the build by switching to the packaged version of bindgen, an old unsupported version.

Could bcachefs-tools solve this problem upstream of distros by releasing a source tarball with the bindings already generated, so that you can use whatever version of bindgen you want, and bindgen is not needed at all when building from this release tarball?

This would be similar in nature to the traditional autotools approach, where some amount of pre-generation is done to produce a release tarball from what actually lives in the source repository.

It's also what I do for some of my ffi crates. I have a script in the root of the repo which regenerates the bindings, but when published to crates.io, the bindings are already generated and bindgen is not part of the build downstream of my crate.
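The shape of that scheme could look something like the following build-time decision. The file path and the regeneration script are made up for illustration; this is not bcachefs-tools' actual build logic:

```rust
use std::path::Path;

// Hypothetical sketch: a release tarball ships pre-generated bindings, so
// downstream builds never invoke bindgen at all; only a git checkout
// regenerates them, using whatever bindgen version the developer pinned
// (e.g. via a ./regen-bindings.sh script in the repo root).
fn bindings_strategy(pregenerated: &Path) -> &'static str {
    if pregenerated.exists() {
        "use-pregenerated" // tarball build: compile the shipped bindings.rs
    } else {
        "run-bindgen" // git build: run the pinned bindgen to create it
    }
}

fn main() {
    // "src/bindings_generated.rs" is an assumed path for illustration.
    println!("{}", bindings_strategy(Path::new("src/bindings_generated.rs")));
}
```

The point of the split is that the bindgen version becomes a development-time tool rather than a build dependency that distros can swap out.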

koverstreet · 2 years ago
I already provide tarballs with vendored dependencies - I did this for Fedora, so that package builds wouldn't have to download anything.

I'm not going to special-case the bindgen bindings, because this really isn't a bindgen-specific issue - it could just as easily have been any other dependency.

eqvinox · 2 years ago
> this whole experience reads like a lesson in what not to do.

What lessons have you drawn from it?

suncore · 2 years ago
Splitting a piece of software into multiple pieces and shipping the pieces (dependencies) independently is sometimes a good idea, but it has its limits. Maybe the line should be drawn at dependencies which are very stable and used by many packages (libc, etc.). The hard-line policy enforced by Debian here obviously is not working. Happy to see other distros solve this better. This might become really problematic for Debian in the future.