As someone who has had to deal with legacy C++ compilers from multiple platforms, I didn't have the luxury of dropping compilers I didn't like but instead had to program to the lowest common (working) denominator of those frontends.
For a frontend to be taken seriously, I see the following questions needing to be addressed:
- How long until the frontend is viable?
- What will it take to keep the frontend viable?
- How far will each frontend lag in shipping lang/lib features?
- What is the relative user upgrade cadence of each frontend?
- How do developers test multiple frontend versions?
- How do developers work around frontend bugs?
- How do we communicate which frontends a crate supports, especially if crate authors take this author's approach and drop support for a frontend once it becomes a problem for them?
A GCC backend for rustc is a smaller, well-scoped target that can more easily be kept up to date (upstreamed in Rust, I would hope) and that resolves a lot of the pressing needs (platform support).
For any finite resource allocation, I see getting a GCC backend as the priority for quick wins, and then focusing on gcc-rs as an experimental hedge against future problems.
> For any finite resource allocation, I see getting a GCC backend as the priority for quick wins, and then focusing on gcc-rs as an experimental hedge against future problems.
Doesn't that only work if it's the same people working on all those things? AFAIK, it's different people doing the different projects. You can't just re-assign open-source developers however you want.
Yes, people can work on what they want, and good for them; it helps build a healthy community.
My point is more about the conversation the larger community needs to have about what we want to prioritize and put effort into supporting. If we have the opportunity to influence where people gravitate, which should it be? If companies want to come in and sponsor work, where would we, as a community, want that to go?
Also, if gcc-rs "makes it", the cost will shift from those one or two developers to the whole community. We will need to test our projects against multiple versions of each frontend, our projects will need to advertise what they support, Rust will need compiler vendor and version flags so we can conditionalize compilation around discrepancies and bugs, and crate authors will need to carry those compiler conditionals around.
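As a rough illustration of that last point, here is a minimal sketch of what such conditionals might look like. The `compiler_vendor` cfg key is hypothetical; nothing like it exists in Rust today, and it only stands in for whatever vendor/version mechanism would have to be standardized:

    // Hypothetical sketch only: "compiler_vendor" is not a real cfg key.
    // A crate working around a frontend-specific codegen bug might have to
    // carry two implementations and select one at compile time.
    #[cfg(compiler_vendor = "gccrs")]
    fn shift_right(x: u64, n: u32) -> u64 {
        // Imagined workaround path for a bug in one frontend.
        if n >= 64 { 0 } else { x >> n }
    }

    #[cfg(not(compiler_vendor = "gccrs"))]
    fn shift_right(x: u64, n: u32) -> u64 {
        x.checked_shr(n).unwrap_or(0)
    }

    fn main() {
        assert_eq!(shift_right(u64::MAX, 70), 0);
    }

Every crate that carries a workaround like this would also have to test both branches against both frontends, which is exactly the ongoing cost being described.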
You can't directly reassign those developers, but you can try to influence their decisions. It's not only about the current developers either, but also about potential new contributors who may be considering either project.
That it was perceived as necessary to write this indicates something very wrong.
A healthy response to news of another compiler for your new language is, "Wonderful! People are demonstrating their belief in the future of our language with a big time investment! Another compiler will help us validate our language specification, which will be badly needed when we go to ISO to ask for a Rust standard Working Group."
It is just possible that a second compiler could be twice or (as happened with Pascal and C++!) 100 times faster. Faster builds would mean thousands of hours saved for users of the language—even, potentially, for people working on the old compiler!
A second implementation of the standard library could offer similar benefits.
More implementations are an absolute prerequisite to language maturity. All languages not doomed get them. It will be a net good for Rust not to be doomed.
The problem is that there are fewer qualified contributors than project authors would like. Every piece of code that is written and becomes used needs people to contribute continued effort. Otherwise the project dies, and the effort put into it, and into depending on it, is wasted.
That said, I think that C++98 is a terrible language. I would understand targeting C++11. I would understand targeting the last version of OCaml which GCC 4 can build. But the number of footguns C++98 introduces would, for me, make it a worse choice than plain C.
C++98 is less fun to code in than C++11, which is less fun than C++20.
But with modern-style library support, C++98 can be quite safe and usable, at least single-threaded, though some things are unavoidably verbose. The only serious functional lack is moves, but that is a performance optimization. C++98 code can be upgraded to the current standard whenever the perceived need to hang back dissipates.
That said, implementing another compiler just for the sake of bootstrapping convenience would be powerfully silly: if you code a new Rust compiler, it should be the very best Rust compiler you can write, and better on at least some axes than the flagship one.
Coding it in C++17 or 20 would be just good sense: Religion notwithstanding, C++ is a mature and powerful language well suited to the task.
One real peculiarity of churn around bootstrapping is that, given good support for cross-compilation, you hardly ever need to bootstrap at all. So if that is a thing that worries you, work supporting cross-compiling is your most productive course.
Just because it is C++98 doesn't mean it has to be a recipe for writing C with a C++ compiler.
gcc-rs will never have the same level of investment as rustc, which means that either it'll rot or the entire ecosystem will rot due to being tied down waiting for it to catch up.
Also, the Rust community at large is not at all enthusiastic about handing the language over to ISO. The design-by-committee-based, incrementalist, waterfall, throw-it-over-the-wall-every-3-years approach is pretty much the exact opposite of the continuous-delivery-based way Rust is developed, and arguably one of the main reasons why C++ sucks.
There is not the slightest hint of "throw it over the wall" in ISO C++ development. The compiler implementers and heaviest users work deeply and collaboratively together to bring each Standard to readiness. Each new C++ Standard is a markedly better language than any before. A mature language, C++ necessarily evolves more slowly than Rust does, but Rust will soon become more stable, too, ISO or no.
What does it mean when you feel a need to lie about your perceived competition?
In fact, there is no such competition: literally hundreds pick up coding C++ professionally for each individual who so much as tries out Rust. The fast rise of Rust has no detectable effect on the growth of C++ usage, even 40 years on. Rust's and C++'s coexistence will last for as long as Rust is used at all.
You are welcome not to like C++, but lying about it says more about you than about it.
The fate of gcc-rs will be determined by future events unknown to you or to anyone else. It could become critically important to Rust's own future, someday.
We need stable and free infrastructure. It is unfortunate how few people realize what GCC does for everyone. It certainly ain't perfect, but imagine if we only had clang/LLVM. LLVM regularly breaks its API and cannot be bootstrapped, AFAIK. Without GCC, I doubt we would have Android or capable cheap routers, the whole Python and Node.js ecosystems would be in a completely different situation, and we would be permanently facing the threat of proprietary languages. I support Rust in GCC, too (and I am glad that they also have Go support).
As opposed to the stable GCC API?
> cannot be bootstrapped
Say what now?
In GCC, the compiler frontends are generally developed and distributed in parallel with the backend. There are some frontends which are distributed separately, but I expect that gcc-rs will be integrated into the mainline GCC source tree once complete. So it's not really an issue in practice. To put it another way: LLVM API breakage doesn't affect Clang, but it does affect other compilers using LLVM.
GCC is like a BSD, while LLVM is like Linux (except that Linux makes a commitment to compatibility which LLVM does not).
How? Can't you compile it with GCC?
> mrustc is not the solution [...] as it’s only capable of building rustc 1.39
This seems like an odd point. Instead of updating mrustc to support a couple more features every once in a while, we should write a whole new frontend? It seems like they are basically solving the same problem, except mrustc has chosen a subset of it (it assumes correct code). Of course gcc-rs also covers a subset in a way, as most of the codegen is handled by the GCC core.
To be clear, I'm not saying that mrustc is a better approach to bootstrapping, but on this specific point it seems that the two approaches are basically equivalent.
Thinking about the dangers of subtle incompatibilities, I think the existence of crater[1] could make a huge difference, once alternative implementations are far enough along.
Arguably the nearest thing to a specification for the Rust language at present is "what crater would be happy with".
That seems more useful than thinking of rustc as a reference implementation, because when I read the release notes every now and again they say things to the effect of "this change is technically backwards-incompatible but crater reassures us that it won't cause anyone any trouble".
[1] https://github.com/rust-lang/crater
Crater costs a non-negligible amount of compute time, though, enough that the maintainers only do crater runs occasionally. Adding an entire new compiler toolchain to cover each run isn't ideal.
https://www.reddit.com/r/rust/comments/njckp1/rustc_codegen_...
That comment does have many valid points. Note that the issue at hand isn't whether an alternative implementation of the compiler is viable at all, but rather whether an alternative frontend (i.e. parsing, type checking, etc.) is desirable. I have to admit that I don't really see any advantages to having multiple frontends when the one that's already there is open source under a permissive license.
The conversation over gcc-rs online has always seemed weird to me. These are open source projects that aren't really competing for the same resources; people are free to work on what they find interesting, and everyone should support their freedom to do so.
That said, I think that if money were involved, it would be better spent creating LLVM backends than porting yet another language to GCC. I'm not sure "this is time-consuming to bootstrap from GCC 4" is a compelling enough argument, at least nowhere near as compelling as "this programming language cannot target these processor families because the backend doesn't exist and no one has the time to write one."
> Further, mrustc is only capable of targeting x86. Performing this reduction is thus impossible on riscv or aarch64, and for bootstrapping on these platforms, one would have to build every single version of rustc.
That reminds me.
Couldn't the bootstrapping problem be solved by making an official, certified wasm build of the rust compiler, and distributing a signed binary of that build?
That way your bootstrap chain only needs two steps (wasm interpreter hand-coded in local assembly, running the compiler binary -> actual compiler). It's safe, assuming you trust the source producing the wasm binary, or run a bootstrap chain to produce the binary yourself. It can target any architecture, provided you can get/write a wasm interpreter for it (not too hard).
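For illustration, a minimal sketch of that two-step chain, assuming a hypothetical hand-auditable interpreter binary (called `miniwasm` here) and a signed `rustc.wasm` image published upstream; both names are made up, and the argument pass-through is guesswork:

    use std::process::Command;

    fn main() -> std::io::Result<()> {
        // Step 1: the tiny, auditable interpreter runs the signed wasm image
        // of the compiler and asks it to emit native code for this machine.
        // "miniwasm" and "rustc.wasm" are hypothetical stand-ins.
        let status = Command::new("./miniwasm")
            .arg("rustc.wasm")
            .args(["--target", "aarch64-unknown-linux-gnu"])
            .args(["-O", "hello.rs", "-o", "stage1-hello"])
            .status()?;
        assert!(status.success());
        // Step 2: use the same trick to build a native rustc, after which the
        // ordinary self-hosted build takes over.
        Ok(())
    }

The only trusted pieces are the interpreter you wrote or audited yourself and the signature on the wasm image.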
It certainly doesn't seem like a problem where the most cost-efficient solution is to rewrite an entire compiler, frontend and backend, in a different, less convenient language.
The goal of Bootstrappable Builds is to not rely on compiled binaries at all, except for a small, easily auditable 512-byte binary seed, and from there, using only source code without any generated files, to build an entire Linux distro:
https://bootstrappable.org/
https://bootstrapping.miraheze.org/
Personally I feel like we're missing something (and I did read the article, for the record). I suspect the problem is that they either cannot produce a Rust cross-compiler (but I think that exists?), or such a compiler cannot compile Rust itself correctly for some reason. So in your case, I guess you must not be able to create a `wasm` version of Rust that is capable of compiling the Rust compiler to native code on whatever platform you picked.
Otherwise your logic is basically how other compilers get bootstrapped, just with `wasm` replaced by some other already-supported platform like x86: produce an x86-to-ARCH cross-compiler and compile the compiler with that cross-compiler. The result is a compiler that runs on ARCH.
What you're missing is, mainly, the so-called "Trusting Trust" attack, where a compiler is backdoored such that whenever it's used to compile another compiler, the same backdoor is inserted into that compiler. Thus even assuming good intentions on the part of the current Rust maintainers, a backdoor could hypothetically have been inserted into version X's official compiler binaries at some point, which then infected version X+1, which infected version X+2, and so on. I say hypothetically, because IIRC Rust's bootstrap chain has been replicated, including with mrustc, resulting in identical binaries. There is absolutely no need to re-replicate it on every random architecture. (And even if you did want to do so, qemu already exists and can emulate x86, though wasm might be slightly faster.) But some people have strange ideas about how bootstrapping ought to work...