quartzic · 6 months ago
dang · 6 months ago
Thanks! Macroexpanded:

Resigning as Asahi Linux project lead - https://news.ycombinator.com/item?id=43036904 - Feb 2025 (826 comments)

Asahi Linux lead developer Hector Martin resigns from Linux kernel - https://news.ycombinator.com/item?id=42972062 - Feb 2025 (1015 comments)

oxnrtr · 6 months ago
Next resignation:

Me stepping down as a nouveau kernel maintainer - https://lists.freedesktop.org/archives/nouveau/2025-February...

strstr · 6 months ago
Being an upstream maintainer is incredibly under-appreciated. It’s an unfathomably hard, and somewhat thankless, job (at least if you do it well). A friend of mine shared a cab with Ted Ts’o at a conference, and Ts’o was reviewing patches on his phone to keep up with the workload (or maybe he was just bored, who knows).

Despite incredible effort from maintainers, getting necessary changes into Linux can take forever. In the subsystem I depend on (and occasionally contribute to directly) it’s kinda assumed it will take at least a year (probably two) for any substantial project to get merged. This continuously disappoints PMs and Leadership. A lot of people, understandably, chafe against this lack of agility.

OTOH, I’ve been on the other side of kernel bugs. Most recently, a memory arithmetic bug was causing corruption, and it took my team at least an engineer-year to track down. This makes me quite sympathetic to maintainers’ demands for quality.

I’ve also been on the other side of the calibration discussions where Open Source work goes underappreciated. The irony never stops (“They won’t merge our patches!” “Are you having your engineers review theirs?”). That, and the raw pipeline issues for maintainers (it takes a lot of experience to become a maintainer, which implies spending a lot of a bright engineer’s time on reviewing and contributing upstream to things unrelated to immediate priorities).

skrtskrt · 6 months ago
A bug taking a year to track down is a negative indicator of the quality of the project's maintenance, not of the person who contributed the bug, whether it's due to the code itself or to the tooling and testing environments available to verify such important issues.
strstr · 6 months ago
This isn't wrong per se, but rather, it lacks concrete recommendations for what should be done differently.

I would love to see Linux thoroughly and meaningfully tested. For some parts it's just... hard. (If anyone wants to get their start writing kernel code, have a crack at writing some self-tests for a component that looks complicated. The relevant maintainer will probably be excited to see literally anyone writing tests.)

For this particular bug, the cheapest spot to catch the issue would have been code review. In a normal code base, the next cheapest would have been unit testing, though, in this situation, that may not have caught it given that the underlying bug required someone to break the contract of a function (one part of Linux broke the contract of another. Why did it not BUG_ON for that...).

Eliminating the class of issue required fairly invasive forms of introspection on VMs running a custom module. Sure, we did that... eventually.

Finding it originally required stumbling on a distro of Linux that accidentally manifested the corruption visibly (about once per 50-ish 30-minute integration test runs, which is pretty frequent in the scheme of corruption bugs).

arjie · 6 months ago
The factors at play are all obvious. The old-school guys want to keep things old-school. The new-school guys want to make things better in a new way. Has there been any rewrite that succeeded without a BDFL who is himself of the new school? The vim/neovim schism happened and perhaps that's how it ends. I personally like Neovim and I'm glad to be in backers.md, even though it took a tremendously larger amount of work than just changing vim would have. But c'est la vie.

EGCS vs. GCC was a big deal back in the day, and in the end we ended up with GCC run by the EGCS guys, and that was it. When you win, everyone forgets the 'drama' existed. When you lose, everyone remembers you as just the drama guy. RMS had the drama label for decades. Things are not even different; they're the same again. It's like when you'd buy those Chinese NES dupes and they'd have 999 levels of Mario, but half the levels would be the same with different colour bricks. Isomorphic to the original but distinct. That's this story.

markhahn · 6 months ago
if your lens is old-bad vs new-good, you are blind to merits.

didn't the good egcs stuff get merged after all?

lmm · 6 months ago
> didn't the good egcs stuff get merged after all?

EGCS took over as mainline, and what good stuff there was in the old mainline got merged into it. But it took sustaining the fork for years to make that happen.

patrick451 · 6 months ago
> The new-school guys want to make things better in a new way.

The new guys definitely want to make things different, but there is a lot of debate over whether it will actually be better. Really, they should just write their own kernel. If Rust is really that much better, they will.

taurknaut · 6 months ago
Well I certainly don't use linux because it's written in C. Who would?

Edit: This seems like more of an indication of a culture that lacks effective conflict resolution than any kind of technical question.

dralley · 6 months ago
There are already kernels written in Rust. Telling them to go write new kernels in Rust is like telling people working on a new audio workstation that they should write a package manager instead. A new kernel that they cannot practically use does not suit their needs. The point is to use Rust where it can suit their needs.
exabrial · 6 months ago
>The old-school guys want to keep things old-school. The new-school guys want to make things better in a new way

The new-school guys greatly underappreciate the wisdom behind why things are the way they are, and instead try to change things without first understanding them.

dralley · 6 months ago
This seems quite ironic to say when the whole drama started with Christoph not even looking at the patches long enough to see what directory they were in before rejecting them.
Gigachad · 6 months ago
What happened here? Years ago Linus was talking about how he thought positively about Rust in the Kernel in the future if the kinks could be worked out. Now a group of people have built out a set of drivers which are working great, well tested and integrated, and one maintainer has decided they just don't want to merge it so the whole project is indefinitely stalled.

I'd be pretty upset if I were working on Asahi, since the Linux project has basically bait-and-switched them after an enormous amount of work has been invested.

TeeMassive · 6 months ago
> The old-school guys want to keep things old-school.

You are missing the whole point here. The kernel is a survival epic amongst millions of other failed projects. You don't get to tell the old captain and his lieutenants how to nail the planks and helm the ship when you just sailed past the Titanic and Britannic wrecks because metal is so cool.

They're "old-school" because they have to be. Engineers will excrete their pet project and then leave, and now the maintainers have to support it. They are mean because that's the only "power" they have, as is explained in the post.

I'll leave you with this quote from "The Night Watch" by James Mickens (https://www.usenix.org/system/files/1311_05-08_mickens.pdf)

> This is not the world of the systems hacker. When you debug a distributed system or an OS kernel, you do it Texas-style. You gather some mean, stoic people, people who have seen things die, and you get some primitive tools, like a compass and a rucksack and a stick that’s pointed on one end, and you walk into the wilderness and you look for trouble, possibly while using chewing tobacco. As a systems hacker, you must be prepared to do savage things, unspeakable things, to kill runaway threads with your bare hands, to write directly to network ports using telnet and an old copy of an RFC that you found in the Vatican. When you debug systems code, there are no high-level debates about font choices and the best kind of turquoise, because this is the Old Testament, an angry and monochromatic world, and it doesn’t matter whether your Arial is Bold or Condensed when people are covered in boils and pestilence and Egyptian pharaoh oppression. HCI people discover bugs by receiving a concerned email from their therapist. Systems people discover bugs by waking up and discovering that their first-born children are missing and “ETIMEDOUT” has been written in blood on the wall. What is despair? I have known it—hear my song. Despair is when you’re debugging a kernel driver and you look at a memory dump and you see that a pointer has a value of 7. THERE IS NO HARDWARE ARCHITECTURE THAT IS ALIGNED ON 7. Furthermore, 7 IS TOO SMALL AND ONLY EVIL CODE WOULD TRY TO ACCESS SMALL NUMBER MEMORY. Misaligned, small-number memory accesses have stolen decades from my life. The only things worse than misaligned, small-number memory accesses are accesses with aligned buffer pointers, but impossibly large buffer lengths. Nothing ruins a Friday at 5 P.M. faster than taking one last pass through the log file and discovering a word-aligned buffer address, but a buffer length of NUMBER OF ELECTRONS IN THE UNIVERSE.

dralley · 6 months ago
The reasoning Linus himself gives for greenlighting Rust is, among other things, to avoid stagnation. So OP's description seems more apt than yours.

https://www.youtube.com/watch?v=OvuEYtkOH88&t=367s

skrtskrt · 6 months ago
A fundamental problem not yet discussed directly here is how few maintainers there really are for a software project of this magnitude and importance. Worse, many of those maintainers are working purely on volunteer time.

Now, it is certainly somewhat the fault of the maintainers themselves for turning off thousands if not tens of thousands of eager, well-intentioned wannabe contributors over the decades, if not through their attitudes and lack of interpersonal skills, then through impenetrable build systems and hostility towards ergonomic changes.

But forget the eager amateurs - it is unconscionable that major technology companies & cloud providers don't each have damn near an army helping out with Linux and similar technologies - even the parts that do not directly benefit them! - instead of just shoving it into servers so they can target ads for cheap plastic crap 0.000000001% better than they did last week.

shkkmo · 6 months ago
> Further, the fact that so many of those maintainers are purely on volunteer time.

Greg pointed out in that email thread that:

> over 80% of the contributions come from company-funded developers. [1]

[1] https://lore.kernel.org/lkml/2025020738-observant-rocklike-7...

cle · 6 months ago
Is that really the right statistic? Seems like the relevant one would be the number of maintainers whose maintenance work is company-funded. (E.g., I'd imagine it would be quite bad if most contributions were from company-funded developers but had to be upstreamed by non-company-funded volunteers.)
pas · 6 months ago
> major technology companies & cloud providers don't each have ...

... they have, and they are selling it as premium. It's the classic "open core" model for the cloud era.

thayne · 6 months ago
> But what isn't appreciated, is that it is precisely because people who are long-term members of the community are trusted to stick around and will support code that they have sponsored.

Ok. But if you make it too difficult for new developers to contribute, and the experience too stressful, then no one new will stick around to become a long-term maintainer, and when the old ones retire or die, there won't be anyone to replace them.

shae · 6 months ago
I tried to help with the Linux kernel once in 2001 or so. I decided my calm was worth more than dealing with abrasive kernel devs.
polishdude20 · 6 months ago
Same thing happening with ham radio.
mouse_ · 6 months ago
The whole community shows symptoms of organizational sabotage.

https://files.catbox.moe/u6rold.png

ambicapter · 6 months ago
Sabotage or just the inevitable ways that organizations tend to decay (not that they have to, but if they do it tends to look a certain way).
jf · 6 months ago
> One of the things which gets very frustrating from the maintainer's perspective is development teams that are only interested in their pet feature, and we know, through very bitter experience, that 95+% of the time, once the code is accepted, the engineers which contribute the code will disappear, never to be seen again.

This was painful for me to read as someone who has seen how corporations think about “Open Source” - he isn’t wrong at all

Diggsey · 6 months ago
He's not wrong, but he hasn't addressed the problem:

Some maintainers are rejecting changes as a way to block a project they disagree with, i.e. there is no path forward for the contributor. I wouldn't assume this normally, but they're not hiding it; it's self-proclaimed.

On the other hand, Linus has been largely in favour of the R4L project, and gave the green light for it to go ahead.

The Linux kernel maintainers need to figure out among themselves whether they want to allow Rust to be introduced or not, and if so under what constraints. If they can't come to an agreement, they're wasting everyone's time.

Once that happens, either the project is canned, or people can stop arguing over whether these changes should be upstreamed and start arguing over how instead, and that's a lot more productive.

llm_trw · 6 months ago
It's not the maintainers' job to make your project happen.

It's yours.

If you need someone to do free work but you can't convince them then you either get their boss to tell them to do it, you take over their job, or - the one thing that no one under 30 ever seems to do - fork and do the work yourself without anyone stopping you.

layer8 · 6 months ago
IIRC Linus has been in favor conditioned on the agreement of the respective subsystem maintainers. Meaning, he’s not deciding over their heads. And introduction of Rust can be handled differently from subsystem to subsystem.
harimau777 · 6 months ago
That doesn't exactly strike me as a bad thing? Isn't that sort of the point of open source? Lots of people working on the things that interest them and through a diversity of interests arriving at a useful project?
makeitdouble · 6 months ago
> the engineers which contribute the code will disappear

From the other side, as a very occasional contributor, I'd actually want to deal with fixes or reviews around the code I contribute.

But it's usually edge cases on otherwise stable and mature libraries, so it probably won't happen more than, say, once in a decade. If I got a mention on a PR I'd reappear, but that doesn't seem to be the standard way; I never got involved again on anything I submitted.

I feel like either the maintainers are doing an incredibly good job of vetting the PRs, or they got used to dealing with the aftermath another way and just don't need the original code submitter to reappear most of the time?

Am I missing some bigger part of it?

reactordev · 6 months ago
The problem is aging code sours like milk.

I empathize, but we need to have people take over these initiatives and refactor them into something easier to maintain. Not saying introduce a new language or anything, but change how we fundamentally look at “The Kernel”. I think reducing scope and making it so hardware providers must maintain their drivers is a good start. If your toolchain doesn’t suffice, create a new tool.

If Rust is so much better, create a new kernel in Rust and force a paradigm shift. I think we can do better than bicker and fight over it and post email chains about it. Bring solutions. Tech debt is just OpEx to everyone else.

thayne · 6 months ago
> create a new kernel in rust and force a paradigm shift.

And then what?

A kernel by itself isn't very useful. Even if your kernel is somehow superior in every way, you need software to target it, so to be a successful replacement for linux you basically have to be completely compatible with any software that currently runs on linux, which not only means a massive amount of work just to keep up with changes in linux, but constraints on your own design.

And then there is hardware support. If you are a fledgling project, how do you get hardware vendors to write drivers for you?

abenga · 6 months ago
No part of this explains why Linux should be changed. It just means the new project has a lot of work to do.
beeflet · 6 months ago
>Bring solutions. Tech debt is just OpEx to everyone else.

What if there was a Rust-to-C compiler that produced readable, compliant C code for the kernel? Developers who want to work in Rust could then publish their original Rust code to some third-party location, so the reviewers would have no idea whether the C code they receive was originally written in Rust or not.

aqueueaqueue · 6 months ago
The output of a compiler is unidiomatic code. And if the C gets edited, you need to back-port the changes to Rust, which is more work and may be impossible.

Try what you said with even well-aligned languages like TS and JS and it would be hard to work with.

microtherion · 6 months ago
And what happens if somebody starts modifying that C code? Even the one-way translator you describe would be a challenge; making it two-way seems close to impossible.
pas · 6 months ago
The problem is that the C guys don't want to define certain semantics. Currently "it works", but Rust wants invariants.
numbsafari · 6 months ago
> making it so hardware providers must maintain their drivers is a good start

How do you propose that happens?

jfbfkdnxbdkdb · 6 months ago
There is redox os
beeflet · 6 months ago
It's a cool idea but it's licensed MIT instead of GPL or something copyleft so I won't contribute to it.

A Rust-only ecosystem would be pretty cool though. It may be worth forking linux for the sake of compatibility, and keeping the license going. I don't see a future otherwise.

I'd also like to see better compiler diversity. Maybe once gccrs rolls around we will see different attitudes around rust emerge, compared to C/C++ which have more distributed development.