I think these kinds of drivers, which take care of parsing in a place where I would personally say parsing should be avoided where possible, are an excellent place to move towards Rust. The language may not be great at things like low-level memory management, but parsing data and forwarding it to hardware seems like an excellent use case.
And yes, you can parse safely in C. Unfortunately, correctness in C has been proven to be rather difficult to achieve.
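To illustrate the kind of checked parsing being contrasted with C here, a minimal sketch (the format and function name are hypothetical, not from any real driver): parsing a length-prefixed payload from an untrusted byte slice, where every access is bounds-checked and a lying header becomes a `None` rather than an overread.

```rust
// Hypothetical illustration: parse a length-prefixed payload from an
// untrusted byte slice. All slicing is bounds-checked; a short buffer
// or a lying length field yields None instead of undefined behaviour.
fn parse_payload(buf: &[u8]) -> Option<&[u8]> {
    // First 4 bytes: big-endian payload length.
    let len_bytes: [u8; 4] = buf.get(..4)?.try_into().ok()?;
    let len = u32::from_be_bytes(len_bytes) as usize;
    // `get` returns None if the declared length overruns the buffer,
    // the class of error that becomes a heap overread in careless C.
    buf.get(4..4 + len)
}

fn main() {
    let ok = [0, 0, 0, 2, 0xAA, 0xBB];
    assert_eq!(parse_payload(&ok), Some(&[0xAA, 0xBB][..]));
    // Header claims 200 bytes but only 2 follow: rejected, not overread.
    let bad = [0, 0, 0, 200, 0xAA, 0xBB];
    assert_eq!(parse_payload(&bad), None);
}
```

The same logic in C compiles just as happily without the length check; here, forgetting it is impossible because `get` is the only non-panicking way to slice.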
The other main tradeoff is that explicit lifetime annotations make (big) refactorings or explorative coding harder, unless one uses a lot of copies or shared pointers (unacceptable for kernel code).
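A minimal sketch of that annotation burden (the types here are made up for illustration): a zero-copy struct that borrows from its input must carry a lifetime parameter, and the parameter propagates into every signature that touches it, which is exactly what makes large refactorings sticky.

```rust
// Hypothetical sketch: a struct that borrows instead of copying must
// carry a lifetime parameter, and so must everything that returns it.
struct Header<'a> {
    name: &'a str, // borrowed from the original buffer, zero-copy
}

// The annotation propagates: this signature must tie the returned
// Header to the lifetime of the input it borrows from.
fn first_header<'a>(input: &'a str) -> Header<'a> {
    Header { name: input.split(':').next().unwrap_or(input) }
}

fn main() {
    let raw = String::from("Content-Type: text/plain");
    let h = first_header(&raw);
    assert_eq!(h.name, "Content-Type");
    // The borrow checker now enforces that `raw` outlives `h`; dropping
    // `raw` first would be a compile error. Storing owned `String`
    // copies instead would erase the annotations, which is the
    // copy-heavy escape hatch called unacceptable for kernel code above.
}
```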
However, I am not familiar with how those two tradeoffs ought to be resolved in the kernel. I would expect Rust to be used for rarely-changing, high-level abstractions that are expected to never change fundamentally in design and/or are limited in scope.
I believe the reason is that it’s not safe to send arbitrary bitstreams directly to hardware decoders, and given their “stateless” nature, you need something trusted to run the full video encoder/decoder logic.
I had the same question. If your chip can do, say, the DCT in hardware, why not just expose that unit directly to userspace? And if userspace sends invalid data to that unit, surely the kernel can just handle the fault and return an error? I must be missing something.
At any rate, it's unfortunate that entire media file formats have to run in kernel space in order to implement hardware acceleration. There's no better way to do it?
The kernel has to interpose at least a little bit because some of these hardware devices can read and write anywhere, so you can't just let random users send them commands.
It's because the value proposition of Rust and the reality of its implementation don't match. So, they went through all this effort to put it in the kernel, but then realized it's not really useful for much outside of making "safe" drivers that no one really needs.
This seems like a case of two classic mistakes to me: #1, starting with a solution and then applying it to problems (imperfectly); #2, solving the problem at the wrong level (related to #1), in this case at too low a level, which can work but is a lot more work than a solution at the right level.
That is: why not sandboxing rather than a rewrite?
The kernel has an API in place to run code in userspace from within a kernel module (a "usermode driver"). That would, in theory, be one way something complex in the kernel could be isolated. The obvious downside is the additional context switches and round trips for the data, adding latency.
Here's a blog post that demonstrates embedding an entire Go program as a blob into a kernel module and running it in userspace from the module: https://www.sigma-star.at/blog/2023/07/embedded-go-prog/
Rust must be removed from the kernel. It is a huge time sink for smaller companies to deal with another language and frameworks. Effort should be focused on getting more hardware support mainlined such that device manufacturers have less work which will increase adoption.
Memory safety can be addressed via kernel tools or frameworks and should not be the job of a language IMHO.
This reads like a comment from twenty years ago, to be honest.
Memory safety could've been addressed through kernel tools and frameworks for decades, but it hasn't been. And it _should_ be part of the language: even low-level languages like C try not to clobber memory, and they specify undefined behaviour for the cases where you may accidentally end up doing it anyway.
There are good arguments for and against other Rust features such as the way panic!() works and the strictness of the borrow checker. However, "C and tooling can do everything your fancy pants new language does" has been said for longer than I've been alive and yet every month I see CVE reports about major projects caused by bugs that would never have passed the Rust compiler.
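A minimal sketch of the class of bug being referred to (illustrative only): in Rust, an out-of-bounds access is either a recoverable `None` or a deterministic panic, never a silent read of adjacent memory.

```rust
// Hypothetical illustration: out-of-bounds access in Rust is either a
// recoverable None (via `get`) or a deterministic panic (via indexing),
// never a silent read of whatever bytes happen to follow the buffer.
fn lookup(buf: &[u8], i: usize) -> Option<u8> {
    buf.get(i).copied() // None when i is out of range
}

fn main() {
    // Silence the default panic message so the demo output stays clean.
    std::panic::set_hook(Box::new(|_| {}));

    let buf = [10u8, 20, 30];
    assert_eq!(lookup(&buf, 1), Some(20));
    assert_eq!(lookup(&buf, 7), None);

    // Plain indexing past the end panics instead of corrupting memory,
    // the failure mode behind many of the CVEs mentioned above.
    let s: &[u8] = &buf;
    let i = 7;
    let panicked = std::panic::catch_unwind(|| s[i]).is_err();
    assert!(panicked);
}
```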
Out of every reason I can think of, a lack of hardware support seems like the least likely reason for a hardware manufacturer not to upstream hardware support. Look at companies like Qualcomm, with massive ranges of devices and working kernel drivers, hacked together because upstreaming doesn't benefit them. Look at companies like Apple, which doesn't care if its software works on Linux or not. The Linux kernel supports everything from 90s supercomputers to drones to smart toothbrushes; there's no lack of hardware support.
Remember Java? It forced OO onto users, a huge mistake. Developers should not be forced to use a paradigm. The same goes for memory safety. If developers don’t get it, find better developers and pay more. Essentially you are saying Rust is great because companies can hire clueless people and Rust will compensate for their inability.
It’s clear from your comment that you’ve never built a device yourself running Linux on an obscure SoC, dealing with patches that were never accepted.
I see these frankly crazy opinions fairly regularly and I'm genuinely curious how you come to these conclusions.
Have you written much C or C++? What kind of kernel tools or frameworks are you thinking of? Have you ever used Rust? Are you familiar with the different kinds of memory errors?
I really struggle to imagine how anyone who is actually familiar with all this stuff could say things like this but you aren't the first...
> raw pointer arithmetic and problematic memcpy() calls can be eliminated, array accesses can be checked at run time, and error paths can be greatly simplified. Complicated algorithms can be expressed more succinctly through the use of more modern abstractions such as iterators, ranges, generics, and the like.
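A minimal sketch of the succinctness the quote describes (the functions are hypothetical): replacing index arithmetic with an iterator chain, plus a generic version of the same logic, roughly what generics buy over a macro- or void-pointer-based C equivalent.

```rust
// Hypothetical illustration of iterators and generics replacing raw
// pointer arithmetic: sum the even values of a buffer. Bounds checks
// are implicit in the iterator; there is no index to get wrong.
fn sum_even(values: &[u32]) -> u32 {
    values.iter().filter(|&&v| v % 2 == 0).sum()
}

// The generic equivalent: the same algorithm over any numeric-like
// type satisfying the listed trait bounds, checked at compile time.
fn sum_even_generic<T>(values: &[T]) -> T
where
    T: Copy + std::ops::Rem<Output = T> + std::iter::Sum + PartialEq + From<u8>,
{
    values
        .iter()
        .copied()
        .filter(|&v| v % T::from(2) == T::from(0))
        .sum()
}

fn main() {
    let data = [1u32, 2, 3, 4, 5, 6];
    assert_eq!(sum_even(&data), 12);
    assert_eq!(sum_even_generic(&data), 12);
}
```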
I know why people choose Rust today, but we could have had all of those benefits decades ago if not for the recalcitrance of reflexive C++ haters.
It took C++ decades to get good, in my opinion. If the kernel team had switched to C++ decades ago, we probably would've ended up with a worse kernel code base. For me, C++11 and C++14 created the modern C++ that's actually usable, but compilers took a while to implement all those features (efficiently and correctly).
I find the C++ equivalent of the Rust iterators and such to be even harder to read (almost an accomplishment, given the density of Rust code); I don't think features like ranges would be expressed more succinctly using std::ranges the same way it can be done in Rust, for instance. I also find C++'s iterators' API design rather verbose, and I don't think there's much good to be said of implementing generics through C++ templates. There are good reasons for why the language was designed this way, but I get the impression succinctness didn't seem to be a primary objective designing them. Rather, I get the feeling that the language designers prioritised making the features accessible to people who already knew C++ and were used to the more complex side of C++.
Of course, but ergonomics, vibe, or look-and-feel are important. C++ doesn't have a package manager, the .hpp thing is a mess (modules took ages), SFINAE, preprocessor macros vs proc-macros, and so on.
That said, Linux is a very conservative and idiosyncratic project. It's really the bazaar. (No issue tracker, firehose of emails, etc.)
Not to mention: what C++ subset to use, and how to deal with the endless discussion-circlejerks around that one topic alone.
For instance most of the C++ stdlib is useless for kernel (or embedded) development, so you end up with a C++ that's not much more than a "C with namespaces". At least Rust brings a couple of actual language improvements to the table (not a big fan of Rust, but if the only other option is C++, then Rust is the clear winner).
They're arguably still not there yet: sure, the standard was ratified years ago, but the implementations are still a mess and uptake is almost nonexistent.
Crucially, the Rust for Linux people proposed to do the work to make Rust for Linux. Whenever we see this complaint from C++ people their idea is basically "LOL, Linus should rewrite kernel to be suitable for C++" and the answer is No. That's not going to happen.
Once you refuse their fantasy "Don't lift a finger" way to add C++, they lose interest.
C++ even today has plenty of footguns that will make code unsafe, and some of the safety comes at performance costs or uses features that are ill-suited for the kernel.
And Rust does seem to be increasing the rate and level of support small outfits can offer - see Asahi Linux for an example of that.
If your usage requires that, fine. Don’t place your requirements on other users.
Also, you are basically saying adopting Linux is only for big tech. I don’t think that is in the spirit of open source.
I don’t need or want Rust.
I don't reflexively hate C++, just all the implementations of it.