I have a personal principle of not using LLVM-based languages (equal parts because I don't like how LLVM people behave toward GCC and because I support free software first and foremost), so I watch gccrs closely, and my personal ban on Rust will be lifted the day gccrs becomes a viable alternative compiler.
This brings me to my second-biggest reservation about Rust. The language is moving too fast, without any apparent plan or maturing process. Things are labeled unstable, there's no spec, and apparently nobody is working on these problems very seriously, which you also noted.
I don't understand why people are hostile toward gccrs. Can you point me to some discussions?
> It feels somewhat like a corporate takeover, not something where good and benign technology is most important.
As I noted above, the whole Rust ecosystem feels like it's on a crusade, especially against C++. I write C++, I play with pointers a lot, and I understand the gotchas; I also understand team dynamics and how it's becoming harder to write good software with larger teams regardless of programming language. But the way Rust propels itself forward leaves a very bad taste in my mouth, and I don't like being forced into something. So, while gccrs will lift my personal ban, I'm not sure I'll take to the language enthusiastically. On the other hand, another language Rust people apparently hate, Go, ticks all the right boxes for me. Yes, its designers made some mistakes and walked some of them back at the last moment, but the whole process looks tidier and better handled than Rust's.
In short, being able to borrow-check things is not a license to push people around like this, and with all the enthusiasm they're pumping out, they're building themselves a solid countering force.
Oh, I'll only thank them for making other programming languages improve much faster, the same way LLVM stirred the GCC devs into action and made GCC a much better compiler in no time.
Sure. And in C and Zig it's "trivial" to make a global mutable variable; it "just requires" you to flawlessly uphold memory-access invariants, manually, across all possible concurrent states of your program.

Stop beating around the bush: Rust is easier than nearly any other language for writing concurrent programs, and it's not even close (though obligatory shout-out to Erlang).
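A minimal Zig sketch of what that "trivial" global actually demands (the declaration is one line; the discipline around it is forever):

    const std = @import("std");

    // Trivial to declare -- and the compiler will happily accept a bare
    // `counter += 1` from any thread. Nothing ties the lock to the data;
    // taking it first is pure convention you must uphold everywhere.
    var counter: u64 = 0;
    var counter_lock: std.Thread.Mutex = .{};

    fn increment() void {
        counter_lock.lock();
        defer counter_lock.unlock();
        counter += 1;
    }

In Rust, the equivalent global either forces a Mutex (or an atomic) at the type level or forces you to write `unsafe`; the compiler won't let you quietly forget.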
As background, you might ask why you'd ever need different runtimes at all. Why not just make everything async and be done with it, especially if the language is able to hide that complexity?
1. In the context of a systems language, that's not an option. You might be writing an OS, embedded code, a game with atypical performance demands requiring more care with IO, some kernel-bypass shenanigans, etc. Even just selecting between a few built-in choices (like single-threaded async vs. multi-threaded async vs. single-threaded sync) doesn't provide enough flexibility for the range of programs you're trying to let a user write.
2. Similarly, even initializing a truly arbitrary IO effect once at compile time doesn't always suffice. Maybe you normally want a multi-threaded solution but need more care with concurrency in some critical section and want to swap in a different IO there. Maybe you normally talk to the ordinary internet but have a mode/section/interface/etc. where you need to send messages through stranger networking conditions (20s ping, 99% packet loss, 0.1 kbps upload on the far side, custom hardware, etc.). Maybe one part of your application needs bounded latency and is fine dropping packets, while another part needs high throughput and no dropped packets at any latency cost. Maybe your disk hardware is such that it makes sense for networking to be async and disk access to be sync. And so on. You can potentially work around all of that in a world with a single IO implementation if you can hack around it with different compilation units or something, but it gets complicated. (A sketch of the alternative follows below.)
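To make point 2 concrete, here's the rough shape of a per-call-swappable IO parameter. Zig's actual Io design is still in flux, so this is NOT the real std API; it's a hypothetical interface modeled on the std.mem.Allocator vtable pattern:

    const Io = struct {
        ptr: *anyopaque,
        vtable: *const VTable,

        pub const VTable = struct {
            read: *const fn (ptr: *anyopaque, buf: []u8) anyerror!usize,
            write: *const fn (ptr: *anyopaque, bytes: []const u8) anyerror!usize,
        };

        pub fn read(self: Io, buf: []u8) anyerror!usize {
            return self.vtable.read(self.ptr, buf);
        }

        pub fn write(self: Io, bytes: []const u8) anyerror!usize {
            return self.vtable.write(self.ptr, bytes);
        }
    };

    // Library code takes Io as a parameter; the caller decides what backs it:
    // kernel sockets, io_uring, a 20s-ping radio link, a test double, etc.
    fn sendGreeting(io: Io) !void {
        _ = try io.write("hello");
    }

Every call site can hand in a different implementation, which gets you what the "different compilation units" hack was reaching for, minus the hack.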
Part of the answer, then, is that you need (or really want) something equivalent to different IO runtimes, hot-swappable for each function call. I gave some high-level ideas as to why that might be the case, but high-level observations often don't resonate, so let's look at a concrete case where `await` is less ergonomic:
1. Take something like TLS as an example (stdlib or third-party, it doesn't really matter). The handshake code is complicated, so a normal implementation calls into an IO abstraction layer and physically does reads and writes, as opposed to, e.g., a pure state-machine implementation that returns metadata about which action to perform next (I hacked together a terrible version of that at one point [0] if you want to see what I mean; a simpler sketch follows below). What if you want to run it on an embedded device? If it were written with async, it would likely carry enough other baggage that it wouldn't fit or otherwise wouldn't work. What if you want to hide your transmission inside other data to sneak it past prying eyes? (Steganography; nowadays that's relatively easy to do via LLMs, interestingly enough: you can embed arbitrary data in messages which are human-readable and purport to discuss completely different things, without exposing hi/lo-bit patterns or other artifacts that normally break steganography.) Then the kernel socket abstraction doesn't work at all, and "just using await" doesn't fix the problem. Basically, anywhere you want to use that library (and, arguably, this is exactly the sort of code where you should use a library rather than rolling it yourself), if the implementer had a "just use await" mentality, then you're SOL the moment you need it in any other context.
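For contrast, here's the skeleton of the state-machine style mentioned above (all names invented; the real attempt in [0] is much messier). The handshake never does IO itself; it just tells the caller what should happen next, so the same code runs over kernel sockets, embedded transports, or a steganographic channel:

    const Action = union(enum) {
        send: []const u8, // caller transmits these bytes however it likes
        want_bytes: usize, // caller feeds at least this many bytes back in
        done,
    };

    const Handshake = struct {
        state: enum { hello, wait_reply, finished } = .hello,

        fn next(self: *Handshake, received: []const u8) Action {
            switch (self.state) {
                .hello => {
                    self.state = .wait_reply;
                    return .{ .send = "client hello" };
                },
                .wait_reply => {
                    if (received.len < 4) return .{ .want_bytes = 4 };
                    self.state = .finished;
                    return .done;
                },
                .finished => return .done,
            }
        }
    };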
I was going to write more concrete cases, but this comment is getting too long. The general observation is that "just use await" hinders code re-use. If you're writing code for your own consumption and never need those other uses, it's a non-issue. But with a clever choice of abstraction it _might_ be possible to make the IO code people naturally write appropriately generic by default, and thus empower future developers with a more composable set of primitives (old Zig had a solution that didn't quite hit the mark IMO, and time will tell if this one is good enough, but I'm optimistic).
They really nailed that with the allocator interface, and if this works then my only real concern is a generic "what next": it's pushing toward an effect system, integrating those with a systems language is mostly an unsolved problem, and adding a 3rd, 4th, etc. explicit parameter to nearly every function is going to get unwieldy in a hurry. A back-of-the-envelope idea I've had stewing, if I ever write a whole "major" language, is to do basically what Zig currently does but pack all those "effects" into a single parameter that you pass into each function: still letting you customize each function call, still letting you inspect which functions require allocators or whatever, but making the experience more pleasant with a little syntactic sugar around sub-effects, at least when the parent type class is comptime-known.
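Something like this, sketched under heavy assumptions (the names and the duck-typed comptime `Io` are all invented for illustration):

    const std = @import("std");

    // One "effects" parameter instead of N: bundle the allocator, the IO,
    // and whatever comes 3rd and 4th into a single struct that gets
    // threaded through every function.
    fn Effects(comptime Io: type) type {
        return struct {
            allocator: std.mem.Allocator,
            io: Io,
        };
    }

    fn readAll(comptime Io: type, fx: Effects(Io), n: usize) ![]u8 {
        const buf = try fx.allocator.alloc(u8, n);
        errdefer fx.allocator.free(buf);
        var off: usize = 0;
        while (off < n) {
            const got = try fx.io.read(buf[off..]);
            if (got == 0) return error.EndOfStream;
            off += got;
        }
        return buf;
    }

You can still swap the IO (or allocator) per call, and tooling can still see which functions demand which effects; the sugar for sub-effects is the part that would need real language support.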
[0] https://github.com/hmusgrave/rayloop/blob/d5e797967c42b9c891...
Desalination isn't just expensive; it's existentially costly in terms of energy consumption, and I don't see any Dyson spheres in production.
It costs roughly 3 kWh of energy to desalinate one cubic meter of water.
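To put that in scale, assuming household use of roughly 250 L/person/day (a generous figure):

    3 kWh/m^3 x 0.25 m^3/person/day ≈ 0.75 kWh/person/day
    x 1,000,000 people ≈ 750 MWh/day ≈ 31 MW continuous

So a city of a million people running entirely on desalinated water needs about 31 MW of round-the-clock generation for the desalination step alone, before moving the water anywhere.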