Being an upstream maintainer is incredibly under-appreciated. It’s an unfathomably hard, and somewhat thankless, job (at least if you do it well). A friend of mine shared a cab with Ted Ts’o at a conference, and Ted was reviewing patches on his phone to keep up with the workload (or maybe he was just bored, who knows).
Despite incredible effort from maintainers, getting necessary changes into Linux can take forever. In the subsystem I depend on (and occasionally contribute to directly) it’s kinda assumed it will take at least a year (probably two) for any substantial project to get merged. This continuously disappoints PMs and Leadership. A lot of people, understandably, chafe against this lack of agility.
OTOH, I’ve been on the other side of kernel bugs. Most recently, a memory arithmetic bug was causing corruption, and it took my team at least an engineer-year to track down. This makes me quite sympathetic to maintainers’ demands for quality.
I’ve also been on the other side of the calibration discussions where Open Source work goes underappreciated. The irony never stops (“They won’t merge our patches!” “Are you having your engineers review theirs?”). That, and the raw pipeline issues for maintainers (it takes a lot of experience to be a maintainer, which implies spending a lot of a bright engineer’s time on reviewing and contributing upstream to things unrelated to immediate priorities).
A bug taking a year to track down is a negative indicator of the quality of project maintenance, not of the person who contributed the bug, whether that's due to the code itself or to the tooling and testing environments available to verify such important issues.
This isn't wrong per se, but rather, it lacks concrete recommendations for what should be done differently.
I would love to see Linux thoroughly and meaningfully tested. For some parts it's just... hard. (If anyone wants to get their start writing kernel code, have a crack at writing some self-tests for a component that looks complicated. The relevant maintainer will probably be excited to see literally anyone writing tests.)
For this particular bug, the cheapest spot to catch the issue would have been code review. In a normal code base, the next cheapest would have been unit testing, though, in this situation, that may not have caught it given that the underlying bug required someone to break the contract of a function (one part of Linux broke the contract of another. Why did it not BUG_ON for that...).
Eliminating the class of issue required fairly invasive forms of introspection on VMs running a custom module. Sure, we did that... eventually.
Finding it originally required stumbling on a distro of Linux that accidentally manifested the corruption visibly (about once per 50-ish 30-minute integration test runs, which is pretty frequent in the scheme of corruption bugs).
The factors at play are all obvious. The old-school guys want to keep things old-school. The new-school guys want to make things better in a new way. Has there been any new rewrite without a BDFL who is himself of the new school? The vim/neovim schism happened, and perhaps that's how it ends. I personally like Neovim and I'm glad to be in backers.md, even though it was a tremendously larger amount of work than just changing vim would have been. But c'est la vie.
Egcs vs. gcc was a big deal back in the day and in the end we ended up with gcc by the egcs guys and that was it. When you win, everyone forgets the 'drama' existed. When you lose, everyone remembers you as just the drama guy. RMS had the drama label for decades. Things are not even different. They're the same again. It's like when you'd buy those Chinese NES dupes and they'd have 999 levels of Mario but half the levels would be the same but with different colour bricks. Isomorphic to the original but distinct. That's this story.
> didn't the good egcs stuff get merged after all?
EGCS took over as mainline, and what good stuff there was in the old mainline got merged into it. But it took sustaining the fork for years to make that happen.
> The new-school guys want to make things better in a new way.
The new guys definitely want to make things different, but it seems there is a lot of debate over whether it will actually be better. Really, they should just write their own kernel. If Rust is really that much better, they will.
There are already kernels written in Rust. Telling them to go write new kernels in Rust is like telling people working on a new audio workstation that they should write a package manager instead. A new kernel that they cannot practically use does not suit their needs. The point is to use Rust where it can suit their needs.
This seems quite ironic to say when the whole drama started with Christoph not even looking at the patches long enough to see what directory they were in before rejecting them.
What happened here? Years ago Linus was talking about how he thought positively about Rust in the Kernel in the future if the kinks could be worked out. Now a group of people have built out a set of drivers which are working great, well tested and integrated, and one maintainer has decided they just don't want to merge it so the whole project is indefinitely stalled.
I'd be pretty upset if I was working on Asahi, since the Linux project has basically bait-and-switched them after an enormous amount of work has been invested.
> The old-school guys want to keep things old-school.
You are missing the whole point here. The kernel is a survival epic amongst millions of other failed projects. You don't get to tell the old captain and his lieutenants how to nail the planks and helm the ship when you've just sailed past the Titanic and Britannic wrecks because metal is so cool.
They're "old-school" because they have to be. Engineers will excrete their pet project and then leave, and now the maintainers have to support it. They are mean because that's the only "power" they have, as is explained in the post.
> This is not the world of the systems hacker. When you debug a distributed system or an OS kernel, you do it Texas-style. You gather some mean, stoic people, people who have seen things die, and you get some primitive tools, like a compass and a rucksack and a stick that’s pointed on one end, and you walk into the wilderness and you look for trouble, possibly while using chewing tobacco. As a systems hacker, you must be prepared to do savage things, unspeakable things, to kill runaway threads with your bare hands, to write directly to network ports using telnet and an old copy of an RFC that you found in the Vatican. When you debug systems code, there are no high-level debates about font choices and the best kind of turquoise, because this is the Old Testament, an angry and monochromatic world, and it doesn’t matter whether your Arial is Bold or Condensed when people are covered in boils and pestilence and Egyptian pharaoh oppression. HCI people discover bugs by receiving a concerned email from their therapist. Systems people discover bugs by waking up and discovering that their first-born children are missing and “ETIMEDOUT” has been written in blood on the wall.

> What is despair? I have known it—hear my song. Despair is when you’re debugging a kernel driver and you look at a memory dump and you see that a pointer has a value of 7. THERE IS NO HARDWARE ARCHITECTURE THAT IS ALIGNED ON 7. Furthermore, 7 IS TOO SMALL AND ONLY EVIL CODE WOULD TRY TO ACCESS SMALL NUMBER MEMORY. Misaligned, small-number memory accesses have stolen decades from my life. The only things worse than misaligned, small-number memory accesses are accesses with aligned buffer pointers, but impossibly large buffer lengths. Nothing ruins a Friday at 5 P.M. faster than taking one last pass through the log file and discovering a word-aligned buffer address, but a buffer length of NUMBER OF ELECTRONS IN THE UNIVERSE.
A fundamental problem not yet discussed directly here is how few maintainers there really are for a software project of this magnitude and importance.
Further, many of those maintainers are working purely on volunteer time.
Now, it is certainly somewhat the fault of the maintainers themselves for turning off thousands if not tens of thousands of eager, well-intentioned wannabe contributors over the decades, if not through their attitudes and lack of interpersonal skills, then through impenetrable build systems and hostility towards ergonomic changes.
But forget the eager amateurs - it is unconscionable that major technology companies & cloud providers don't each have damn near an army helping out with Linux and similar technologies - even the parts that do not directly benefit them! - instead of just shoving it into servers so they can target ads for cheap plastic crap 0.000000001% better than they did last week.
Is that really the right statistic? Seems like the relevant one would be the number of maintainers whose maintenance work is company-funded. (E.g., I'd imagine it would be quite bad if most contributions were from company-funded developers but had to be upstreamed by non-company-funded volunteers.)
> But what isn't appreciated is that it is precisely because people who are long-term members of the community are trusted to stick around and will support code that they have sponsored.
Ok. But if you make it too difficult for new developers to contribute, and the experience too stressful, then no one new will stick around to become long-time maintainers, and when the old ones retire or die, there won't be anyone to replace them.
> One of the things which gets very frustrating from the maintainer's perspective is development teams that are only interested in their pet feature, and we know, through very bitter experience, that 95+% of the time, once the code is accepted, the engineers which contribute the code will disappear, never to be seen again.
This was painful for me to read as someone who has seen how corporations think about “Open Source” - he isn’t wrong at all.
He's not wrong, but he hasn't addressed the problem:
Some maintainers are rejecting changes as a way to block a project they disagree with - i.e. there is no path forward for the contributor. I wouldn't assume this normally, but they're not hiding this fact; it's self-proclaimed.
On the other hand, Linus has been largely in favour of the R4L project, and gave the green light for it to go ahead.
The Linux kernel maintainers need to figure out among themselves whether they want to allow Rust to be introduced or not, and if so under what constraints. If they can't come to an agreement, they're wasting everyone's time.
Once that happens, either the project is canned, or people can stop arguing over if these changes should be upstreamed, and start arguing over how instead, and that's a lot more productive.
It's not the maintainers' job to make your project happen.
It's yours.
If you need someone to do free work but you can't convince them then you either get their boss to tell them to do it, you take over their job, or - the one thing that no one under 30 ever seems to do - fork and do the work yourself without anyone stopping you.
IIRC Linus has been in favor conditioned on the agreement of the respective subsystem maintainers. Meaning, he’s not deciding over their heads. And introduction of Rust can be handled differently from subsystem to subsystem.
That doesn't exactly strike me as a bad thing? Isn't that sort of the point of open source? Lots of people working on the things that interest them and through a diversity of interests arriving at a useful project?
> the engineers which contribute the code will disappear
From the other side, as a very occasional contributor, I'd actually want to deal with fixes or reviews around the code I contribute.
But it's usually edge cases on otherwise stable and mature libraries, so it probably won't happen more than, say, once in a decade. If I got a mention on a PR I'd reappear, but that doesn't sound like the standard way; I never got involved again on anything I submitted.
I feel like either the maintainers are doing an incredibly good job of vetting the PRs, or they've gotten used to dealing with the aftermath some other way and just don't need the original submitter to reappear most of the time?
I empathize, but we need to have people take over these initiatives and refactor them into something easier to maintain. Not saying introduce a new language or anything, but change how we fundamentally look at “The Kernel”. I think reducing scope and making it so hardware providers must maintain their drivers is a good start. If your toolchain doesn’t suffice, create a new tool.
If Rust is so much better, create a new kernel in Rust and force a paradigm shift. I think we can do better than bicker and fight over it and post email chains about it. Bring solutions. Tech debt is just OpEx to everyone else.
> create a new kernel in rust and force a paradigm shift.
And then what?
A kernel by itself isn't very useful. Even if your kernel is somehow superior in every way, you need software to target it, so to be a successful replacement for linux you basically have to be completely compatible with any software that currently runs on linux, which not only means a massive amount of work just to keep up with changes in linux, but constraints on your own design.
And then there is hardware support. If you are a fledgling project, how do you get hardware vendors to write drivers for you?
> Bring solutions. Tech debt is just OpEx to everyone else.
What if there was a Rust-to-C compiler that produced readable, compliant C code for the kernel? Then developers who want to work in Rust could publish their original Rust code to some third-party location, so the reviewers would have no idea whether the C code they receive was originally written in Rust or not.
And what happens if somebody starts modifying that C code? Even a one way translator like you describe could be a challenge, making it two way seems close to impossible.
It's a cool idea but it's licensed MIT instead of GPL or something copyleft so I won't contribute to it.
A Rust-only ecosystem would be pretty cool though. It may be worth just forking linux for the sake of compatibility, and keeping the license going. I don't see a future otherwise.
I'd also like to see better compiler diversity. Maybe once gccrs rolls around we will see different attitudes around rust emerge, compared to C/C++ which have more distributed development.
Resigning as Asahi Linux project lead - https://news.ycombinator.com/item?id=43036904 - Feb 2025 (826 comments)
Asahi Linux lead developer Hector Martin resigns from Linux kernel - https://news.ycombinator.com/item?id=42972062 - Feb 2025 (1015 comments)
Me stepping down as a nouveau kernel maintainer – https://lists.freedesktop.org/archives/nouveau/2025-February...
Edit: This seems like more of an indication of a culture that lacks effective conflict resolution than any kind of technical question.
The new-school guys greatly underappreciate the wisdom of why things are the way they are, and instead try to change things without first understanding them.
I'll leave you with "The Night Watch" by James Mickens (https://www.usenix.org/system/files/1311_05-08_mickens.pdf), quoted at length upthread.
https://www.youtube.com/watch?v=OvuEYtkOH88&t=367s
Greg pointed out in that email thread that:
> over 80% of the contributions come from company-funded developers. [1]
[1] https://lore.kernel.org/lkml/2025020738-observant-rocklike-7...
... they have, and they are selling that as premium. It's the classic "open core" model for the cloud era.
https://files.catbox.moe/u6rold.png
Am I missing some bigger part of it?
Try what you said even with closely aligned languages like TS and JS, and it would be hard to work with.
How do you propose that happens?