Rust in Linux will be fantastic except for compile time. Rust (and the world) needs a Manhattan Project to build a fast Rust compiler (where by "fast" I mean both efficient and scalably parallel when compiling a workspace with many crates and long dependency chains).
To put this in perspective, though: increasing the number of people paid to work on the Rust compiler by 10x would only mean hiring about 25 people. Compared to the size of the projects that are starting to depend on Rust, that's a rounding error.
It's important to note that D made run-time tradeoffs in the interest of fast compilation. To get optimized code from D source, you need to leverage the alternate implementations from GCC (GDC) or the LLVM-based LDC... which don't sport nearly as fast compiles as DMD, the reference D compiler.
The D language uses GC, so it's unlikely to be acceptable for the Linux kernel. (That's not to say that you can't write a kernel in D, just that the Linux kernel maintainers don't want to write a kernel that requires GC.)
Rust's ownership model, which allows you to write straightforward, non-leaky, non-use-after-free-filled code without a GC, is quite complex at compile time. It's definitely possible to make it faster, but it's definitely a thing Rust does that D doesn't do.
Nick Nethercote and others have done a lot of work with traditional profiling and optimization like that. They've done a great job, but for my project the fundamental problem seems to be limited parallelism, which I think requires more fundamental design changes (perhaps getting away from the crate-at-a-time approach of rustc).
Though TBH I'm not an expert on the compiler and I'm not confident I know the root problems or the solutions.
Slightly off topic: I see D brought up incredibly consistently when Rust is mentioned on HN. Is it just me? (It's usually by Walter himself, but I digress.)
D was dead by all practical measures since its inception. It was never revolutionary enough to warrant the huge costs of migrating there. This is where Rust comes in...
TypeScript allows compiling without type checking, at the sacrifice of correctness. If you have a slow computer or a huge code base, you could consider running the type checker only for tests, pre-checkin checks, or CI. (Not that TypeScript is slow, but it's a conceptual example.)
Would something similar be possible for Rust, so you still have the ability for correctness for CI and release builds, but allow for fast compilation when necessary or desired?
Typescript would have a far easier time in that scenario because much of its syntax is simply JavaScript with extra types attached. So, it could (I don't know the implementation details at all) 'just' strip away all the TypeScript syntax and give you your JavaScript files.
Rust has the issue that there's a lot of type checking done even in basic scenarios. When setting a variable with `let alpha = some_function(abc);`, the compiler has to do some basic type checking just to determine the type of `alpha` (since that affects how it's stored on the stack). The simple case of `fn some_function(abc: u32) -> i16;` would be 'easy', but more complicated scenarios like generics would make your life harder.
Though I'm sure there are parts of the checking that could eventually be written so that they can be turned off, I don't think it would provide nearly as much benefit as TypeScript's non-checked compilation.
Personally, I think it would be better to spend that time on just making the compiler faster. It is perhaps easier than I made it sound, but these are my assumptions as to why it wouldn't work that well.
Yes. The `mrustc` alternative implementation doesn't do things like borrow checking, it assumes the source is "correct".
> [skipping typechecking would] allow for fast compilation when necessary or desired?
The real expenses in a "release" compilation pipeline are in the optimisation passes and codegen, so the gains are mostly in avoiding optimisations (debug build) or avoiding codegen entirely (`cargo check`).
For instance on the latest ripgrep head on my machine:
* cargo check takes ~200s user
* cargo build (debug) takes ~300s user
* cargo build --release takes ~900s user
I’ve heard someone say that the Rust compiler will never allow compilation without its safety checks in place, because it would decrease faith in any Rust binary.
I know a lot of people will disagree with me, and I know that the rust compiler has room for improvement, but compile time isn't really that bad for rust when you consider what it actually does for you.
Assuming no unsafe blocks, rust's type system eliminates entire classes of memory safety errors. It fundamentally eliminates data races. It eliminates large swaths of logical errors, due to its use of algebraic data types, and strong type enforcement. No null pointers, explicit error handling, and no undefined behavior.
It boggles my mind that people will balk at compile times for rust, but then not even bat an eye at their C++ testkit that runs for 8+ hours, primarily checking for errors that rust would never let you make in the first place.
Part of what makes rustc's slow compile times so frustrating is that the slow compile times are mostly unrelated to all of the fancy compile-time analysis you listed. You can see this in action if you run `cargo check`. That will run all the special static analysis (typeck, borrowck, dropck, etc.) and complete in a very reasonable amount of time.
To a first approximation, rustc is slow because it generates terrible LLVM IR, and relies on a number of slow LLVM optimization passes to smooth that IR into something reasonable. As far as I can tell this is entirely a result of not having the necessary engineering resources to throw at the problem; Mozilla has a limited budget for Rust, and needs to spread that budget across not just the compiler toolchain, but the language and ecosystem.
Brian Anderson's been working on a blog post series that covers Rust's slow compile times in more detail, if you're curious: https://pingcap.com/blog/tag/Rust
I think your and OP's perspectives are not exclusive:
A. Yes, as you say, taken from a distance and considering the whole development lifecycle from idea to shipping, "compile time isn't really that bad for rust when you consider what it actually does for you".
B. However, -during- development, you want compile speed, regardless of all the benefits. I don't know about you, but I have a hard time rationalizing "it's okay, rustc is doing many things" at that moment. I just want it to be faster, and as OP mentioned, I hope there's a future where companies / sponsors get to "hiring about 25 people" to crunch and improve things noticeably (and I will do more than hope: I already donate money to Ferrous Systems to improve rust-analyzer, and will donate if a "make Rust faster" fund asks for it). Your argument sounds to me a bit like "who cares about making Python faster? It's just glue". To which I answer: maybe (though not always), but even then I'll take faster glue anytime :) .
To sum up: Rust being already awesome and being "fast given all the things it does" doesn't mean it shouldn't be faster, and doesn't mean speaking about it is futile.
> Rust in Linux will be fantastic except for compile time. Rust (and the world) needs a Manhattan Project to build a fast Rust compiler (where by "fast" I mean both efficient and scalably parallel when compiling a workspace with many crates and long dependency chains).
A basic technique to parallelize sections is to split a project into modules. Is this not an option for the linux kernel?
Rust has great support for modules (crates in Rust lingo). It can compile modules in parallel when they don't depend on each other.
The problem is that when you have module A depending on module B, rustc doesn't do a good job of compiling A and B in parallel. In contrast, in C/C++ you can handcraft your header files to ensure that A and B compile in parallel.
https://github.com/StanfordSNR/gg - should be almost the same as the GCC/LLVM thunk extractor. You have to pay for the borrow checker, but LLVM IR optimization passes should be the same complexity.
> increasing the number of people paid to work on the Rust compiler by 10x would only mean hiring about 25 people.
Citation needed?
I know at least 20 people being paid to work on the Rust compiler, which cranks your 10x number from 25 to 200. And those are only the people I can come up with off the top of my head.
Depends on what you mean by "research". If you mean "problems with a high degree of novelty" then quite a lot of research, I think. Rust is unusual: it requires a lot of tricky type inference and static analysis from the compiler, and doesn't make developers structure their code with header files etc to make separate compilation easier; BUT unlike most languages with those properties, people are writing large real-world projects in it.
When you’re trying to balance a bad budget, you can’t dismiss many of the cost centers and still succeed.
You can prioritize, but that makes the rest a matter of “when” not if.
It can be hard to tell from the outside if someone is avoiding a problem because they don’t want to solve it, or because they are hoping for a more inspired solution. Tackling a problem when all the options are bad may block another alternative when it does finally surface. I’d lump this in with irreversible decisions and the advice to delay them to the last responsible moment.
Who knows what internal plumbing has to be moved around to make the compiler more incremental, parallel, or both.
Yes, we should have a national politburo assign funds to Rust development. Although most of private industry (proles) disagrees, us in the Administration believe that Rust is superior. The proles--ahem I mean programmers with jobs--keep causing problems by using Unsafe technologies and being too dumb to use them. Rust should be funded by The People. It is a Public Good after all because using anything else would be an act of defiance against the will of The People.
> increasing the number of people paid to work on the Rust compiler by 10x would only mean hiring about 25 people. Compared to the size of the projects that are starting to depend on Rust, that's a rounding error.
And are you going to pay for the "rounding error", or are you just expecting someone else to pay millions a year for your idea?
As for who pays for it: that's a tricky issue! But if we get into a situation where (for example) 10,000 developers spend an hour a day waiting for builds, we know 50 developers could fix the issue in a year, but that doesn't happen because we can't figure out how to structure it economically --- that would be a failure of organizational imagination.
This is such a low-effort sneer comment. It's a dismissive way to attack and put down any idea that involves any amount of payment. It's the discussion equivalent of "whoever smelt it, dealt it": if someone mentions an idea which costs money, demand as a first response whether they are going to pay for it (assuming they aren't, instant dismissal), or accuse them of holding an entitled and unfair expectation of others, when neither need be the case.
"That volcano looks like it's becoming active, maybe some sensors could give us an early warning of problems" - Are YOU going to pay for your little "idea"??? Then WHO IS?
"That tree is getting dangerously high, it's at risk of falling down in a big storm this winter, maybe we could get it cut down before that" - Are YOU going to pay for it?!
"Dumping raw sewage in the river is making people sick, treating it first would take a small amount of space and a small fraction of the council's existing budget" - and you want to TAX ME for YOUR clean river, I suppose?!
As if OP is the only person who would benefit, as if automatically assuming the worst possible intent for who would pay for it, and as if "who would pay" is the first and only thing worth demanding an answer to, before even having a discussion about whether it's worth doing at all - and which of many ways it might be done.
I think OP meant that companies, like Amazon and MS, whose projects are starting to depend on Rust, should be putting in some resources. Although, to be fair, Microsoft and Amazon do pay for their infrastructure, if I am not wrong.
Business reason: When you're in the business of building software, people are your biggest expense. You want to waste their time as little as possible.
Dev reason: tight feedback loops are encouraging, and make it easier to build momentum. I hate being deep in a problem then having to "wait" for a compile.
But compile times have a massive effect on developer time, which has a massive effect on the productivity and the amount of work that can actually be done.
Comparing compile time to run time is rather strange, but compared as a percentage of a developer’s time, it’s very obvious why people are worried about it.
> increasing the number of people paid to work on the Rust compiler by 10x would only mean hiring about 25 people
I don't think there are 25 people qualified to do that work looking for jobs. It's not something you can just throw money at, you need really qualified people to do that kind of work. Those people are in very high demand.
Note that that would be increasing the number of people paid; you already have many people with the relevant qualifications, because they're already doing the work, just only in their spare time.
Second, the compiler team has put a ton of effort into mentorship infrastructure, and you can go from not knowing a ton to being a productive member of the team with their help. It "just" takes time and desire to do it.
There are certainly way more than 25 skilled programmers who are experts in compilers that could be hired easily by offering enough money (on the order of several hundred thousand dollars per year).
> a Manhattan Project to build a fast Rust compiler
Or work on getting what Rust gives you in a different language that isn't Rust (maybe entirely new, maybe not[1]).
One underdiscussed thing about Rust is that it's so clearly a byproduct of the C++ world. And the most disappointing thing about Rust is that it probably won't ever be seen as a prototype that should be thrown away.
1. http://mbeddr.com/
If you want to convince people that Rust should be thrown away in favour of mbeddr or something else, you need to make an argument based on specific design flaws of Rust, not just tar it by association with C++.
That argument would have to not just explain why an improved language is needed, but also why Rust can't evolve into that language via the edition system. It would also have to convince people that whatever benefits the improved language brings justify moving away from the existing crate ecosystem, tool ecosystem, and developer pool, and building new ones (or how you would reuse/extend/coopt them).
(In reality I think the influence of Haskell and even node.js were as important to Rust as C++.)
• Rust is an ML-family language dressed in a C-family syntax to look palatable to existing systems programmers. It's a byproduct of the C++ world only to the extent that it has to work on the CPU architectures and operating systems influenced by C and C++.
• Rust is mainly based on established research and existing languages from 80s and 90s. It's a distilled version of these, not a prototype.
Can you elaborate on the underdiscussion of Rust as a byproduct of the C++ world?
I'm into Rust, so I guess I assumed it was common knowledge that it was originally written to be like OCaml with better concurrency, and that one of the early goals was to replace C++ in Mozilla code.
But, language-wise, I don't even see that much similarity with C++ except for the philosophy of zero-cost abstraction and the trivial syntax-style of generics being <>.
Rust is actually a byproduct of OCaml and to a lesser extent Haskell. This becomes obvious the more you use it, especially if you’ve written substantial code in either of those languages before. To really drive this home, I’d wager that you have a better chance of translating OCaml or non-fancy Haskell straight into Rust than you do C++.
You said two different things about Rust but you did not substantiate anything. What do you mean by
"Rust is so clearly a byproduct of the C++ world"?
Furthermore, in which way is Rust defective to the point that it should be seen as a prototype that should be thrown away?
mbeddr seems really nice and interesting, but it is not just a language. It is a set of integrated tools around one.
Rust shares goals with C++, i.e. it wants to be a high-performance and systems language like C++, but I find the differences to be quite substantial and worthwhile.
These threads always devolve into "rust is too slow" written by developers (or enthusiasts) who have never written no_std code in production. I've written and shipped firmware for embedded devices written in rust, yes, still using cargo and external crates, and had zero issues with compile time, because the nature of the dependencies in the deps tree is different and very carefully curated.
Anyway, I really just wanted to point out that from the mailing list we have Linus and Greg endorsing this experiment/effort from the Linux side and a commitment from Josh on behalf of the rust team to grow the language itself with the needs of the kernel in mind. That's quite impressive and more than I could have hoped for.
I've actually played with writing kernel code in rust - for Windows/NT, however - and it's quite weird to be able to use such high-level type constructs in code where you typically manually chase pointers and wouldn't be surprised to see statically allocated global variables used to monitor reference counts.
Linus has commented on Rust twice (that I'm aware of).
First, back in 2016:
Q: What do you think of the projects currently underway to develop OS kernels in languages like Rust (touted for having built-in safeties that C does not)?
A: That's not a new phenomenon at all. We've had the system people who used Modula-2 or Ada, and I have to say Rust looks a lot better than either of those two disasters.
I'm not convinced about Rust for an OS kernel (there's a lot more to system programming than the kernel, though), but at the same time there is no question that C has a lot of limitations.
"People have been looking at that for years now. I’m convinced it’s going to happen one day. It might not be Rust, but it’s going to happen that we will have different models for writing these kinds of things." He acknowledges that right now it’s C or assembly, "but things are afoot." Though he also adds a word of caution. "These things take a long, long time. The kind of infrastructure you need to start integrating other languages into a kernel, and making people trust these other languages — that’s a big step."
I was once in the same room as him at a Linux Foundation event, and really wanted to ask him about it, but also didn't want to be a bother.
Note that the C++ opinion everyone cites is from 2007. I don't know how he feels about C++ today. It seems he's using it for some things at least. I know I've changed a lot since 2007, and so has C++.
I don't know Linus's reasons specifically, but our presentation at Linux Security Summit last year laid out why we think that Linus's past objections to C++ don't apply to Rust. See slides 19-21 of https://ldpreload.com/p/kernel-modules-in-rust-lssna2019.pdf .
His previous objections were:
In fact, in Linux we did try C++ once already, back in 1992. It sucks. Trust me - writing kernel code in C++ is a BLOODY STUPID IDEA.
The fact is, C++ compilers are not trustworthy. They were even worse in 1992, but some fundamental facts haven't changed:
- the whole C++ exception handling thing is fundamentally broken. It's _especially_ broken for kernels.
- any compiler or language that likes to hide things like memory allocations behind your back just isn't a good choice for a kernel.
- you can write object-oriented code (useful for filesystems etc) in C, _without_ the crap that is C++.
In brief, Rust does not rely on C++-style exception handling/unwinding, it does not do memory allocations behind your back, and its OO model is closer to the existing kernel OO implementation in C than it is to C++'s model. (There are other good safe languages besides Rust that I personally like in general but do not satisfy these constraints for this particular use case.)
The title is not very accurate, this is a thread about a discussion on this topic which will happen at the upcoming Linux Plumbers Conference in late August.
I see what you mean. I just posted it here with the same title that was used on /r/Linux, and that I found accurate (for the same reasons chrismorgan exposed in a sibling comment) but now I agree with you that it could cause some confusion.
Maybe a moderator could rename the post to “discussion about Linux kernel in-tree support” or something like that?
The title is perfectly accurate, it’s an email thread about Linux kernel in-tree Rust support. Sure, you could misconstrue such a title to be implying that the Linux kernel supports Rust in-tree already it if you wanted to, but half the titles on a site like this could be similarly misconstrued.
I don't see them discussing what I view as the biggest question of such conversion: is the converted code safer than C, or is it a direct translation with all the issues that we were trying to fix by using Rust? (This came up, IIRC, with automatic C to Go; it worked, but was only useful as a first step because it gave you unsafe unidiomatic Go code)
Except that one of Linus' most vocal objections to C++ was operator overloading: how basic, seemingly native things like + can actually do a lot of hidden work unknown to the programmer. He must have softened on this, since Rust offers the same facilities for operator overloading.
Could you give me a pointer to a discussion of type soundness in Rust?
I recently watched a video, perhaps three years old, about Rust where Simon Peyton-Jones asked about this, and Niko Matsakis answered that it was ongoing work back then.
I think that was after others took over the UI development. The back end of that program also was still C as far as I remember from their presentation and the move was mostly motivated by the GTK community, the documentation and different priorities on cross platform support.
Did he have an "attitude" about C++ in general? I thought he only commented on it with respect to operating system development. He did make much more general statements about Java, though.
Perhaps the possibility of rust improving kernel security/robustness makes the idea of rust integration seem like it carries its own weight, where C++ has more downsides (perceived or real) and fewer upsides.
Rust's history/origins - loosely, being designed to allow replacing Mozilla's C/C++ with safer Rust that performs well - feel like a good fit for kernel drivers even if the core kernel bits will always be C.
Yes, very much so. Not even only in the context of Rust but the insight, to fail fast, integrate early and do work in the open, instead of some hidden work, failing after a long time, when revealed.
So this surprises me, obviously this is early discussion about a potential topic, but the general consensus seemed to be more positive than I thought.
I thought I'd remembered reading something (maybe from Linus) that seemed very against having Rust in the kernel. Can anyone find a source for that? I searched a little and can't.
(Caveat: I obviously realise that Linus isn't endorsing Rust in the kernel, and is only saying something bounded: that, if we have it, it shouldn't be completely hidden behind some config options. But that doesn't match my memory.)
Shame I hadn't seen that thread. He's right that most of the rust projects "replacing" standard Unix utils are not feature equivalent and differ in intent, but that doesn't mean there aren't other efforts to do exactly what he is asking.
E.g. here's tac (admittedly not cat, but hey, ./goo | tac | tac) published a few months before his email: https://github.com/neosmart/tac
Just stating something obvious since I don't see it noted here: Linus is a smart guy with this idea of not wanting it to be some niche feature that nobody enables and hence nobody cares about and sees breakage from.
> To put this in perspective, though: increasing the number of people paid to work on the Rust compiler by 10x would only mean hiring about 25 people. Compared to the size of the projects that are starting to depend on Rust, that's a rounding error.
Dlang compiles quickly not because the language is simple, but because the compiler authors care about compilation speed:
* Lexer skips four spaces at once: https://github.com/dlang/dmd/pull/11095
* Optimize core logic to 7x the compilation speed of a language feature: https://github.com/dlang/dmd/pull/11303
* Cache rarely used types to improve memory locality: https://github.com/dlang/dmd/pull/11363
Is this meant to imply that the Rust compiler authors don't care about compilation speed?
1) Technical debt in rustc producing large amounts of LLVM IR and expecting LLVM to optimize it away
2) generic monomorphization producing vast amounts of IR
IIRC.
Outside the kernel, I don't think it's going to be true for Android or Windows.
Trying to get a program to quickly do the exact same thing over and over again is a colossal waste of resources.
For the lifetime of a binary, compilation time is like 0.000001% of its existence.
I’m more than happy to wait longer than with other languages (even zero time at all with scripting languages) while cargo does its thing.
Yes, most of them are employed. Some are under-employed. Some are well employed but might be keen to work on a project this important.
Or work on getting what Rust gives you in a different language that isn't Rust (maybe entirely new, maybe not[1]).
One underdiscussed thing about Rust is that it's so clearly a byproduct of the C++ world. And the most disappointing thing about Rust is that it probably won't ever be seen as a prototype that should be thrown away.
1. http://mbeddr.com/
That argument would have to not just explain why an improved language is needed, but also why Rust can't evolve into that language via the edition system. It would also have to convince people that whatever benefits the improved language brings justify moving away from the existing crate ecosystem, tool ecosystem, and developer pool, and building new ones (or how you would reuse/extend/coopt them).
(In reality I think the influence of Haskell and even node.js were as important to Rust as C++.)
• Rust is mainly based on established research and existing languages from the '80s and '90s. It's a distilled version of these, not a prototype.
I recommend checking out the first pitch deck for Rust: http://venge.net/graydon/talks/intro-talk-2.pdf
I'm into Rust, so I guess I assumed it was common knowledge that it was originally written to be like OCaml with better concurrency, and that one of the early goals was to replace C++ in Mozilla code.
But, language-wise, I don't even see that much similarity with C++ beyond the philosophy of zero-cost abstraction and the surface syntax of generics using <>.
Can that ever happen with a language that lots of people use?
"Rust is so clearly a byproduct of the C++ world"?
Furthermore, in which way is Rust defective to the point that it should be seen as a prototype that should be thrown away?
mbeddr seems really nice and interesting, but it is not just a language; it is a set of integrated tools built around one.
Rust shares goals with C++, i.e. it wants to be a high-performance and systems language like C++, but I find the differences to be quite substantial and worthwhile.
Anyway, I really just wanted to point out that from the mailing list we have Linus and Greg endorsing this experiment/effort from the Linux side and a commitment from Josh on behalf of the rust team to grow the language itself with the needs of the kernel in mind. That's quite impressive and more than I could have hoped for.
I've actually played with writing kernel code in Rust - for Windows/NT, however - and it's quite weird to be able to use such high-level type constructs in code where you typically chase pointers by hand and wouldn't be surprised to see statically allocated global variables used to monitor reference counts.
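That refcount contrast can be sketched in plain Rust (a toy example, not Windows/NT driver code; `Device` and `LIVE_OBJECTS` are hypothetical names): the "statically allocated global counter" pattern next to `Arc`, which ties the count to the data itself.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

// The C-style pattern: a static global that tracks live objects by hand.
static LIVE_OBJECTS: AtomicUsize = AtomicUsize::new(0);

struct Device;

impl Device {
    fn new() -> Device {
        LIVE_OBJECTS.fetch_add(1, Ordering::SeqCst);
        Device
    }
}

impl Drop for Device {
    fn drop(&mut self) {
        LIVE_OBJECTS.fetch_sub(1, Ordering::SeqCst);
    }
}

fn main() {
    // The high-level alternative: Arc keeps the count with the allocation.
    let a = Arc::new(Device::new());
    let b = Arc::clone(&a);
    assert_eq!(Arc::strong_count(&a), 2);
    drop(b);
    drop(a); // Device::drop runs exactly once; the manual counter hits zero.
    assert_eq!(LIVE_OBJECTS.load(Ordering::SeqCst), 0);
}
```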
Why is that? Did he ever give his reasons?
First, back in 2016:
Q: What do you think of the projects currently underway to develop OS kernels in languages like Rust (touted for having built-in safeties that C does not)?
A: That's not a new phenomenon at all. We've had the system people who used Modula-2 or Ada, and I have to say Rust looks a lot better than either of those two disasters.
I'm not convinced about Rust for an OS kernel (there's a lot more to system programming than the kernel, though), but at the same time there is no question that C has a lot of limitations.
https://www.infoworld.com/article/3109150/linux-at-25-linus-...
Then second, much more recently:
“People have been looking at that for years now. I’m convinced it’s going to happen one day. It might not be Rust, but it’s going to happen that we will have different models for writing these kinds of things.” He acknowledges that right now it’s C or assembly, “but things are afoot.” Though he also adds a word of caution. “These things take a long, long time. The kind of infrastructure you need to start integrating other languages into a kernel, and making people trust these other languages — that’s a big step.”
https://thenewstack.io/linus-torvalds-on-diversity-longevity...
I was once in the same room as him at a Linux Foundation event, and really wanted to ask him about it, but also didn't want to be a bother.
Note that the C++ opinion everyone cites is from 2007. I don't know how he feels about C++ today. It seems he's using it for some things at least. I know I've changed a lot since 2007, and so has C++.
His previous objections were:
In brief, Rust does not rely on C++-style exception handling/unwinding, it does not do memory allocations behind your back, and its OO model is closer to the existing kernel OO implementation in C than it is to C++'s model. (There are other good safe languages besides Rust that I personally like in general but do not satisfy these constraints for this particular use case.)

I didn't mean it with any negative connotation, btw.
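The OO-model point can be illustrated with a minimal sketch (`FileOps` and `RamFile` are hypothetical names, not kernel APIs): the kernel's C idiom of a struct of function pointers maps naturally onto a Rust trait, with no unwinding or hidden allocation involved in the dispatch.

```rust
// Hypothetical sketch: a kernel-style "ops table" expressed as a Rust trait.
// In C this would be a struct of function pointers (e.g. file_operations);
// in Rust the vtable is generated for you when you go through `&dyn FileOps`.
trait FileOps {
    fn read(&self, buf: &mut [u8]) -> usize;
}

struct RamFile {
    data: Vec<u8>,
}

impl FileOps for RamFile {
    fn read(&self, buf: &mut [u8]) -> usize {
        let n = self.data.len().min(buf.len());
        buf[..n].copy_from_slice(&self.data[..n]);
        n
    }
}

fn main() {
    let f = RamFile { data: b"hello".to_vec() };
    let ops: &dyn FileOps = &f; // dynamic dispatch, no allocation behind your back
    let mut buf = [0u8; 16];
    let n = ops.read(&mut buf);
    assert_eq!(&buf[..n], b"hello");
}
```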
Maybe a moderator could rename the post to “discussion about Linux kernel in-tree support” or something like that?
The title the person you are responding to is complaining about is not the title of a email thread but the title of a hacker news post.
[1] https://immunant.com/blog/2020/06/kernel_modules/
[2] https://github.com/immunant/c2rust
They are also interested in "unsafe to safe refactoring tools" in my understanding, but they're not there yet.
https://doc.rust-lang.org/stable/rust-by-example/trait/ops.h...
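The linked page covers operator overloading via the `std::ops` traits; a minimal sketch (the `Meters` newtype is made up for illustration):

```rust
// Implementing std::ops::Add for a newtype gives it the `+` operator.
use std::ops::Add;

#[derive(Debug, Clone, Copy, PartialEq)]
struct Meters(f64);

impl Add for Meters {
    type Output = Meters;
    fn add(self, rhs: Meters) -> Meters {
        Meters(self.0 + rhs.0)
    }
}

fn main() {
    let total = Meters(1.5) + Meters(2.5);
    assert_eq!(total, Meters(4.0));
}
```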
Couldn't find anything proper by googling.
He started with C and GTK+ and later migrated to C++ and the Qt framework.
[1] https://subsurface-divelog.org/
Rust's history/origins - loosely, being designed to allow replacing Mozilla's C/C++ with safer Rust that performs well - feel like a good fit for kernel drivers even if the core kernel bits will always be C.
I thought I remembered reading something (maybe from Linus) that seemed very against having Rust in the kernel. Can anyone find a source for that? I searched a little and can't.
(Caveat: I obviously realise that Linus isn't endorsing Rust in the kernel here, and is only saying something bounded: that if we do have it, it shouldn't be completely hidden behind some config options. But it doesn't match my memory.)
https://marc.info/?l=openbsd-misc&m=151233345723889&w=2
E.g. here's tac (admittedly not cat, but hey, ./goo | tac | tac), published a few months before his email: https://github.com/neosmart/tac
Thanks for finding it for me.
Also, he is suggesting this change to experienced people who did not do it that way. So experience is not the whole story.
Is there any way to have them displayed in a friendlier format?