I'm opposed to this, and it's not because of some ideological kernel-should-be-pure-C thing. I'm opposed to it because the Rust compiler is slow, and the Rust compiler is written in Rust. Compiling Rust is a nightmare if you don't have a high-end PC.
I want to be able to actually compile my software if I wish. This is becoming increasingly difficult, and Rust is adding to the problem.
I'm not a fan of Rust either, not by a long shot. But the current kernel approach to memory safety is a complete, utter, demonstrated failure. Look at last week's CVE-2022-41674: https://seclists.org/oss-sec/2022/q4/23
This is a catastrophic bug that (after some work on developing an actual RCE) lets anybody within wifi range get root on your laptop (or phone, or access point). And all it took was this one line: https://git.kernel.org/pub/scm/linux/kernel/git/wireless/wir...
We've all been repeating the "1000 eyes - all bugs are shallow" mantra for far too long. This one was in the mainline for more than 3 years, and nobody noticed. How many more are lurking there?
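For illustration, here is a hedged sketch (hypothetical code, not the actual wifi parser) of how this class of bug plays out in Rust: slice accesses are bounds-checked, so an attacker-controlled length field yields a recoverable `None` instead of silent memory corruption.

```rust
// Hypothetical sketch, NOT the actual kernel wifi code: parse an
// element with an attacker-controlled length byte. In C, copying
// `len` bytes without a bounds check overruns the buffer; in Rust
// the equivalent slice access is checked and simply fails.
fn parse_element(packet: &[u8]) -> Option<&[u8]> {
    let len = *packet.first()? as usize; // untrusted length field
    packet.get(1..1 + len)               // bounds-checked: None if len lies
}

fn main() {
    let malicious = [200u8, 1, 2, 3]; // claims 200 bytes, carries 3
    assert_eq!(parse_element(&malicious), None); // out-of-bounds read refused

    let honest = [2u8, 0xaa, 0xbb];
    assert_eq!(parse_element(&honest), Some(&[0xaa, 0xbb][..]));
}
```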
Separately, I feel "with enough eyes, all bugs are shallow" fails to apply in codebases with many users, most of whom consume them as a black box without ever looking inside. Some issues that make it harder for me to read the code I use:
- The Linux kernel and shipped .so files lack a "view source" button on compiled binaries. Even checking out the matching source, building a replacement binary, and diffing your local changes against it is an arduous process: set up the build per program/library from a tarball/Git tag, wait for the computer to finish, install dependency .so files globally, ensure symbols are present, ensure you can set breakpoints on static functions...
- Dynamic dispatch and generic code might help maintainers and code extensibility but (in my experience) definitely impede external eyeballs from understanding code.
is this really that bad of a bug though? i'd need to be using that driver, and someone near me has to be actively and knowingly trying to inject... and it's a DoS? i'm sincerely asking btw
I'm not sure it's realistic to expect a safety focused compiler to compete with one that doesn't offer those checks.
We should aspire to make it as fast, but in the short term a slower compiler in exchange for fewer CVEs and random buffer-overrun crashes seems like a reasonable trade-off to me.
Distributions such as Fedora offer build infrastructure (https://copr.fedorainfracloud.org) that you can use to compile packages to use in your system for testing, if you feel your local hardware isn't powerful enough.
Actually, C/C++ compilers are shooting themselves in the foot performance-wise with the header mechanism, so writing a compiler that's faster than C/C++ is not that hard. I'm not sure what the reasons for the Rust compiler being (even more?) slow are; maybe the ability to easily interface with C code has something to do with it?
There are languages with similar type safety that compile faster, because their authors focused on providing several ways to compile the code.
- the Linux project is very likely to stick to simple, fast elements of Rust (based on the excellent approach of the Linux/Rust devs thus far)
- the more Rust is used, the more work will be done to improve its performance
- you can still build a kernel on a low-powered device... i've built kernels that took > 12 hours on, for example, PA-RISC boxes that were once regarded as beefy :-)
- most people don't (and shouldn't) compile their kernel, and by most I mean more than 99%
Could you expand on why people shouldn't compile their kernel?
I think it's fairly useful to compile their own to get a better understanding of what the kernel does and to better suit everyone's needs. For example, if I have little free space on my boot partition and I have my disk encrypted, I want my kernel to be as small as possibile, so I will deselect every driver I don't need. Or maybe the driver for my new device is not included in the kernel builds of my distribution.
Not only would I not say that most people shouldn't compile their kernel, I would say that most linux users* should do it at least once, so they can understand the power they have compared to closed-source operating systems.
*by linux users I mean users who use linux as their main operating system, not people who ssh once in a while or rarely boot their linux partition
I used to work on the Rust compiler itself on a Chromebook with a 1.1GHz dual core, 4G RAM and 32G of disk. That's about as far from a high-end PC as you can get. Most mid-range phones nowadays have more processing power and memory than that. And the Rust compiler has been sped up considerably since then. Even with a 4-year-old mid-tier PC you can get a complete Rust compile in half an hour. Roughly half of that is building LLVM.
So you can of course compile your Rust compiler. If you are used to compiling clang or gcc, it's not that much of a hassle. And the benefits have already been shown. If you only want to compile Rust code, and not develop it, mrustc might also be a good choice for you (it doesn't implement borrowck, just what's needed for codegen).
Finally, if you don't want to use Rust drivers, you can simply configure them out and don't need to build Rust. It'll be quite a long while until Rust will arrive in the kernel outside of drivers (which tend to benefit most from Rust anyway).
BTW, people assume it's slow because of the safety checks, but that's not the case. `cargo check` runs just the checks, and is pretty fast.
Majority of the time is spent in LLVM, because rustc throws a ton of code at it to clean up. This is being addressed by MIR optimizations (rustc's built-in optimizer working on higher-level code) to remove costly abstractions before they become a pile of low-level code to eliminate.
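As a small sketch of what "throws a ton of code at LLVM" means in practice (an illustrative example, not code from rustc itself): a generic iterator chain expands into many tiny closures and adapter functions that the backend must collapse back into the plain loop it is equivalent to.

```rust
// Both functions compute the same sum of squares. The iterator version
// is what idiomatic Rust hands to the backend: layers of generic
// closures and adapters that MIR/LLVM optimizations must inline and
// collapse into the simple loop below before final codegen.
fn sum_squares_iter(v: &[u32]) -> u32 {
    v.iter().map(|x| x * x).sum()
}

fn sum_squares_loop(v: &[u32]) -> u32 {
    let mut total = 0;
    for &x in v {
        total += x * x;
    }
    total
}

fn main() {
    let v = [1u32, 2, 3, 4];
    assert_eq!(sum_squares_iter(&v), sum_squares_loop(&v));
    assert_eq!(sum_squares_iter(&v), 30); // 1 + 4 + 9 + 16
}
```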
I remember the times when installing Gentoo was a matter of days. Compiling the kernel was a matter of hours. Sure, rustc is slower than gcc, but if you're not constantly compiling your software over and over and over again, then the time spent in the compiler is not your primary concern. Start the compile, go to bed, let it run.
Running Gentoo on 9900KS and 980 Pro SSD with 64GB of memory, kernel compiles in about 8 minutes, the longest project to build I've seen is Chromium and QT Web Engine at ~120m. It's OK but I'm tempted by the new Ryzen CPUs.
It's a fair remark, but your requirement is very niche. There are very few people who will value compilation time over runtime speed, safety, good abstractions, and expressiveness.
This ethos played out in the C++ community 5-ish years ago. It turned into a compiler with quadratic behavior, and eventually many people who didn't care at all about compiler performance cared a lot.
It turns out that compiler speed makes development faster, keeps people interested in the language, and ultimately allows more iterations before release (which can be better for speed and safety than throwing in a bunch of extra compile steps).
Rust could have been designed for faster compilation without sacrificing runtime speed, safety, good abstractions, expressiveness, or anything else. The people designing Rust either did not care or did not have the experience to do so. And it cannot be fixed anymore, because that would require too many breaking changes.
And yes, substantial time is also spent on things like the borrow checker, but not all of it.
And no, it is not a niche requirement. Short compile times are absolutely critical for developer productivity. One main reason Golang exists and got popular is that people got fed up with how slow C++ is to compile. Not to mention that most people on this earth are not privileged enough to have a beefy machine.
> I'm opposed to this. I want to be able to actually compile my software if I wish so.
So what you're saying is that you're opposed to millions of people having more secure software, and perhaps millions of dollars spared from breaches, because it makes your own occasional personal experience of compiling the software slower? Or did you mean something else?
I mean, slow compile times affect everyone who compiles the kernel. And being able to compile things yourself (possibly with patches) is one of the key features of open source...
Considering only compile time is a shallow approach to the idea of using Rust more widely. I would encourage you to think about the aggregate amount of time our industry spends finding, then fixing, and then repairing the damage done by classes of bugs which idiomatic Rust completely prevents. It does all this without impacting runtime (as GC languages often do).
Fast compute at this point is quite literally the least expensive part of the equation. Machines will get faster. Compilers will get optimized.
We've spent decades optimizing the developer experience (compile times) at the expense of the rigor, robustness, and quality of our resulting product. I've been doing this for 30 years, and I can categorically say that I've spent FAR more time chasing NPE, OBO, and race condition bugs than I would have ever added to my build time with a slightly slower compiler.
The idea here would be to compile in a way that just assumes everything is correct and either crashes catastrophically or produces invalid output otherwise. But in doing so, it should allow at least slightly faster compilation.
You could even remove the need for this "fast and loose" compiler to do any type inference by shipping pre-processed source with all types resolved. However I don't know if this addition would fit your needs if, e.g., your goal is to be able to compile from any given commit rather than only official releases.
At very least it could be an interesting experiment to discover what tradeoffs are possible.
While I lack personal experience with Rust, and I really appreciate fast compilers (that is why I am a Go user), across all features and characteristics Rust seems to be the best choice for safe kernel development. Other posters have described well how urgent it is to improve the security of kernel code. So just not doing anything about this doesn't seem to be a good option.
It seems there is a wide group of developers who think that Rust is the best candidate for a kernel development language. If you see issues with that choice, now would be the time to propose an alternative and try to build momentum in the developer community behind it. While I also lack practical experience there, from all I've heard, Ada could be one. But I don't know exactly how it compares to Rust and what the trade-offs are. So far, though, no one has pushed for Ada as a possible kernel implementation language.
Just as a data point, my desktop is a Xeon e3-1230 v5, which is the same silicon as the i7 from 2015. The CPU cost $250 new, the entire desktop was $1100 or so, not including the monitors.
I followed https://rustc-dev-guide.rust-lang.org/building/how-to-build-... and then ran the build step "time ./x.py build -j 8"
...
Build completed successfully in 0:36:53
real 36m53.398s
user 254m52.720s
sys 12m48.289s
Seems pretty reasonable considering it's not a particularly high end desktop from 2015. Seems like a cheap price to pay for increased reliability and security.
Absolutely, and my own experience is that it's possible to create a language with Rust's safety guarantees while making it quick to compile.
I know this because I've managed to add a form of RAII and borrow checking to portable C11, and C is known for being faster than Rust to compile. Imagine what would happen if we made a language with that stuff built in.
The funny thing is that C is also slower to compile than it could be because of headers.
Yeah, we should probably optimize for the 0.5% niche of people that care. Who needs built-in memory and thread safety on core server systems so long as I can compile the kernel on my 486? Oh, never mind, they are dropping 486 support soon too. Why can't it stay 1996 forever?
The Rust compiler does all of Rust's magic (of which the borrow checker is the biggest part). This magic is really important and helpful, and it's better to do it during compilation, not at runtime (for huge performance benefits).
I'm aware. I know how rust works and why people want it.
I haven't checked the times, but if the borrow checker is really the slowest part, maybe making rust skip it is a valid approach for end users. Sounds like an interesting experiment.
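That intuition is plausible because the borrow checker is analysis-only: it rejects programs but does not change what accepted programs do. A minimal hypothetical sketch of that compile-time-only nature:

```rust
// Ownership checks cost nothing at runtime: `consume` takes the vector
// by value, and the compiler statically forbids using `buf` afterwards.
// There is no runtime bookkeeping, so a checker-skipping compiler
// (like mrustc) would emit the same code for this already-valid program.
fn consume(data: Vec<u8>) -> usize {
    data.len() // buffer is freed here, when `data` goes out of scope
}

fn main() {
    let buf = vec![1u8, 2, 3];
    let n = consume(buf);
    // println!("{:?}", buf); // would not compile: `buf` was moved above
    assert_eq!(n, 3);
}
```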
It is expected that rust is compiled with a compiler written in rust (rust fanboys will improve it, probably).
Now, I wonder which is the more reasonable option: writing a naive and simple 'c11' compiler, or a naive and simple rust compiler.
I wonder if somebody has done a "syntax complexity diff" between 'c11' and rust.
I know that linux is written in "gcc C", not 'c11'... so...
On the other end of the software stack, we have servo, Mozilla's web engine written in rust. What's up there? Still a drop of rust in an ocean of c++ (SDK included)?
Because if, after years, it is still impossible to run servo without c++ code, that is a bad omen for kernel rust.
Rust as a language to write kernel drivers for <POPULAR ARCHITECTURE> is a great achievement, but what about all the other architectures that only have a C compiler and do not want to, or can't, depend on LLVM?
What about the billions poured into LLVM, which made writing an optimizing Rust compiler possible in reasonable time, and which is still written in C++?
Lastly, I hope to be wrong on this, but looking at Google's history, being backed by them is, unfortunately, a curse.
Therefore, for the Linux kernel specifically, I think the only concern is whether or not GCC remains supported in addition to LLVM - as GCC and Clang are, as far as I know, the only two compilers which actually can be used to build the kernel as we speak. There's work being done on both gcc-rs and rustc_codegen_gcc to allow using GCC as a Rust backend, meaning that all platforms currently supported by the Linux kernel should be eventually capable of being supported without porting a compiler backend.
Writing an LLVM backend for <LESS POPULAR ARCH> isn't impossible, it just hasn't been done yet. Writing GNU Rust compiler isn't impossible, it just hasn't been done yet. What better way to encourage it than to write more high quality and useful code in the language. Seems like an unfair criticism to levy.
Compilers are just short-running tools. They can hog as much memory as they like for the short period during compilation and die away after that. But a kernel is a critical piece of long-running software that can have CVEs. The critical thing is not to have the CVE in the first place.
I'd like to get involved with contributing to the Prossimo project as the blog suggested; however, there doesn't seem to be any more information anywhere on memorysafety.org on how to do so.
A bad rewrite from an unsafe to a safe language would mean safety issues traded for logical errors in most cases. Which sounds like a win if you ask me.
So even the "rewrite it in Rust, badly" approach has a decent ring to it tbh (although I'm not arguing there should be a rush to do so).
IMHO the entire Linux kernel should be rewritten as a microkernel in Rust. Another option would be to use the seL4 kernel and salvage parts of Linux to become device drivers and services.
This comment reminded me of Redox (the Rust microkernel), which I haven't looked at in a while. It's an impressive project, but it was funny to see that in their top news post (https://www.redox-os.org/news/drivers-and-kernel-7/), one of the items is about tracking down memory-corruption/use-after-free bugs:
> “After having thoroughly debugged the orbital/orblogin memory corruption bug with little success, I decided to go as far as phase out the old paging code (ActivePageTable/InactivePageTable/Mapper etc.) in favor of RMM (Redox Memory Manager). Surprisingly, this fixed the bug entirely in the process, and it turns out the issue was simply that parent page tables were not properly unmapped (causing use-after-free), most likely due to the coexistence of RMM and the old paging code, which did not agree on how the number of page table entries were counted.”
This project surely uses rust more thoroughly and idiomatically than the Linux kernel ever will. And yet here we are with memory corruption and use after free bugs. And the text indicates the bug was so hard to track down that they basically gave up and just replaced the old code.
Rust may prove to be beneficial to Linux, but there is too much over-promising hype at this point.
When you write an operating system you eventually will have to go down into the boiler room and write assembly and such. This is most likely where the bugs occur.
Also, you need unsafe {} blocks once in a while to interoperate with the extremely low-level assembly parts.
So I'm not in the least surprised there are still memory corruption errors in Redox.
Mind you the bug occurred in the memory paging code and was eventually fixed.
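A tiny sketch of that interop point: the compiler cannot reason about a raw address (say, a memory-mapped device register), so touching it requires an `unsafe` block, and correctness rests entirely on the programmer. (The "register" here is just a local variable standing in for real MMIO.)

```rust
// Kernel-style code must eventually poke raw addresses. Rust fences
// that off in `unsafe`: the volatile write below is exactly the kind
// of operation device drivers need, and its safety is the programmer's
// claim, not the compiler's proof.
fn main() {
    let mut fake_register: u32 = 0;
    let reg = &mut fake_register as *mut u32; // stand-in for an MMIO address

    unsafe {
        reg.write_volatile(0xDEAD_BEEF); // device registers need volatile access
    }

    assert_eq!(fake_register, 0xDEAD_BEEF);
}
```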
Go ahead, then, and RiiR. (Rewrite it in Rust.) It's not really a novel thought that X thing might be better leveraging Rust's features. The novel thing is actually doing it.
The whole point of eBPF is that it's a limited subset of an execution environment that not only provides a memory safe environment, but just as crucially all eBPF programs are guaranteed to terminate.
It's not a general purpose computation environment in a strict sense.
You need to realize that
(a) reality trumps fantasy
(b) kernel needs trump any Rust needs
And the reality is that there are no absolute guarantees. Ever. The "Rust is safe" is not some kind of absolute guarantee of code safety. Never has been. Anybody who believes that should probably re-take their kindergarten year, and stop believing in the Easter bunny and Santa Claus.
> The "Rust is safe" is not some kind of absolute guarantee of code safety
Exactly. Some people act like we don't have the whole branch of "formal proofs" in CS. Memory safety is just one aspect of program safety.
Like, IMO, programs written in Coq, F* or even C programs verified by Frama-C are much more "safe" than Rust programs that advertise their "safety" on the mere fact that they are written in Rust.
The reality is that people are adding critical code to the kernel and surrounding infrastructure (OpenSSL) on a Friday night after a long week's work, and never bother to look at it again.
We absolutely need something like Rust to cover our backs!
FWIW, Walter Bright has been proposing for years that the C standard fix the billion-dollar mistake in a backwards-compatible way. Stroustrup is the head of the C++ standards committee, just saying.
Many of the Rust "evangelists" think and behave as if Rust's guarantees were absolute and led to absolutely unbreakable software by default, and advocate that the language is the silver bullet combining the abilities of C and C++ (depending on the application) without any of their downsides.
When you hit a limitation and really need to implement something Rust won't allow, they say "Hey, there's unsafe{}, use that". They also claim that unsafe{} gives you the same programming freedom as C/C++, which again it does not.
When they're reminded that reality is not like that, they get upset and defensive. This comment is a nice flag to remind this reality.
I congratulate Rust for being what it is, but it's not a silver bullet and it's not the next C or C++. It's just a very nice programming language for various applications.
Being all shiny-eyed doesn't work in CS or programming in general, and also hardware doesn't work like that (a deterministic, perfectly good behaving, spec-obeying magic box with some included smoke for higher performance).
While I agree that bug is serious, that "some" is doing pretty heavy lifting here. Is there an RCE for this bug?
Ada, Delphi, OCaml, C#/F# (.NET Native / Native AOT), D, Nim,...
(Same reason why C with typedefs is slower to compile than plain C, why C++ is slower, etc)
(that and cargo dependencies, etc - also C compilers have some +30yrs of optimizations)
With every year, abandoning OSS and looking for non-computing hobbies gets more attractive.
All I really want is to be able to compile my stuff without waiting overnight (or more).
So to bootstrap rust this way, you'd need to go GCC 4.2 -> GCC 13+ -> Rust
If I had a choice between half the CVEs with 2-hour compiles, or 5-minute compiles, I'd take the former in a blink.
Does anyone know how much Google pays for this kind of stuff? And is it like a limited-in-time kind of contract or something similar?
I believe that was a feature.
Non-GCC compilers, like Clang+LLVM, are (were?) considered to be open source but not Free Software (TM).
> It’s awesome that we can already compile Rust for Linux with very few hacks: we’re very close to being able to compile it on the master branch.
https://blog.antoyo.xyz/rustc_codegen_gcc-progress-report-16
I think you mean it just isn't finished yet, but it's almost there.
I guess the most critical part of a Linux Kernel is being able to compile it in the first place for the architecture you're using.
Virtually every platform/architecture out there provides a C compiler.
Can't answer on the CVE part, we have no data to discuss the matter in a meaningful way.
There's certainly hope, it doesn't mean data will prove us right.
They either don't compile the rust code, which is fine, or they wait for gcc support.
> What about the billions poured into LLVM that made writing an optimizing Rust compiler possible in reasonable time, that is still written in C++?
I'm not sure what you're asking here.
> Lastly, I hope to be wrong on this, but watching at Google history being backed by them is, unfortunately, a course,
Also unclear on what this statement is about.
(OK, a bit sarky, but still, Rust isn't a magic bullet...)
Of course it won't fix all those half-assed logical bugs, but it will put an end to memory and thread related bugs, of which there are many.
https://www.destroyallsoftware.com/talks/the-birth-and-death...
Jokes aside, I wouldn't be surprised to see WASM in the kernel one day, perhaps to replace eBPF.
This may not be too crazy of an idea, it would enable fine-grained secure sandboxing like what Firefox is doing.
Securing Firefox with WebAssembly - https://hacks.mozilla.org/2020/02/securing-firefox-with-weba...
Practical third-party library sandboxing with RLBox - https://rlbox.dev/
It’s not that Linus is against Rust at all, just clearing up some disagreements.
What would you like elaboration on? The relationship seems pretty clear to me?