gary17the · a year ago
> The rust RFC process is a graveyard of good ideas.

I actually have quite an opposite view: I think the Rust core team is 100% correct to make it very hard to add new "features" to the PL, in order to prevent the "language surface" from being bloated, inconsistent and unpredictable.

I've seen this happen before: I started out as a Swift fan, even though I had been working with Objective-C++ for years, considered it an awesome powerhouse, and did not really need a new PL for anything in particular in the world of iOS development. Over time, Swift's insistence on introducing tons of new language "features" - multiple, redundant function names, e.g., "isMultiple(of:)"; multiple rules for parsing curly braces, et al., to make the SwiftUI declarative paradigm possible; multiple rules for reference and value types and the mutability thereof; multiple shorthand notations such as argument names inside closures; etc. - all that made me just dump Swift altogether. I would have had to focus on Swift development exclusively just to keep up, which I was not willing to do.

Good ideas are "dime a dozen". Please keep Rust as lean as possible.

josephg · a year ago
Author here. I hear what you're saying. But there's lots of times while using rust where the language supports feature X and feature Y, but the features can't be used together.

For example, you can write functions which return an impl Trait. And structs can contain arbitrary fields. But you can't write a struct which contains a value returned via impl Trait - because you can't name the type.
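A minimal sketch of that hole (my example, not the author's code): the opaque return type of an `impl Trait` function has no name you can write down, so it can't appear directly as a struct field type.

```rust
// The closure type returned here is anonymous: callers only know it as
// "some type implementing Fn(i32) -> i32".
fn make_adder(n: i32) -> impl Fn(i32) -> i32 {
    move |x| x + n
}

// struct Holder { f: /* no way to spell the closure's type here */ }

fn main() {
    let add2 = make_adder(2);
    println!("{}", add2(40)); // 42
}
```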

Or, I can write `if a && b`. And I can write `if let Some(x) = x`. But I can't combine those features together to write `if let Some(x) = x && b`.
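A sketch of the gap on stable Rust at the time (my example): the let-chain form is rejected, so you end up nesting, or filtering the Option by the boolean first.

```rust
// Desired, but not accepted then: if let Some(x) = x && b { ... }
// Stable workaround: fold the boolean into the Option with filter.
fn combine(x: Option<i32>, b: bool) -> Option<i32> {
    if let Some(x) = x.filter(|_| b) {
        Some(x)
    } else {
        None
    }
}

fn main() {
    println!("{:?}", combine(Some(5), true));  // Some(5)
    println!("{:?}", combine(Some(5), false)); // None
}
```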

I want things like this to be fixed. Do I want rust to be "bigger"? I mean, measured by the number of lines in the compiler, probably yeah? But measured from the point of view of "how complex is rust to learn and use", feature holes make the language more complex. Fixing these problems would make the language simpler to learn and simpler to use, because developers don't have to remember as much stuff. You can just program the obvious way.

Pin didn't take much work to implement in the standard library. But it's not a "lean" feature. It takes a massive cognitive burden to use - to say nothing of how complex code that uses it becomes. I'd rather have clean, simple, easy-to-read Rust code and a complex borrow checker than a simple compiler and a hard-to-use language.

withoutboats3 · a year ago
These features are slow to be accepted for good reasons, not just out of some sort of pique. For example, the design space around combining `if let` pattern matching with boolean expressions has a lot of fraught issues around the scoping of the bindings declared in the pattern. This becomes especially complex when you consider the `||` operator. The obvious examples you want to use work fine, but the feature needs to be designed in such a way that the language remains internally consistent and works in all edge cases.

> Pin didn't take much work to implement in the standard library. But it's not a "lean" feature. It takes a massive cognitive burden to use - to say nothing of how complex code that uses it becomes. I'd rather clean, simple, easy to read rust code and a complex borrow checker than a simple compiler and a horrible language.

Your commentary on Pin in this post is even more sophomoric than the rest of it and mostly either wrong or off the point. I find this quite frustrating, especially since I wrote detailed posts explaining Pin and its development just a few months ago.

https://without.boats/blog/pin/

https://without.boats/blog/pinned-places/

gary17the · a year ago
I think it would be helpful to clearly distinguish between PL simplification (e.g., "if let Some(x) = x, x == 42 {}") and convenience-driven PL expansion (e.g., "let @discardable lhs = rhs ?? 0;"). In the case of the former, I'm with you. In the case of the latter, I'm not. Rust likely isn't meant to be a tool that is easy to learn at all costs (since the borrow checker does exist, after all). Rust is, IMvHO, supposed to be like vi: hard to learn and easy to use :).
valenterry · a year ago
Yeah, I agree wholeheartedly with that. This is also what I really dislike in many languages.

You should have a look at Scala 3. Not saying that I'm perfectly happy with the direction of the language - but Scala really got those foundations right: it has few features, but they are very powerful and can be combined very well.

Rust took a lot of inspiration from Scala for a reason - but then Rust wants to achieve zero-cost abstractions and high performance, so it has to make compromises accordingly, for good reasons. Some of those compromises unfortunately affect the ergonomics of the language.

nindalf · a year ago
It is easier to make big breaking changes when there are fewer users, for sure. I think what you ignore is both the progress that is being made and how difficult it is to make that progress while maintaining a stable language that works for all the existing users.

I'll give an example - async traits. On the surface it seems fairly simple to add: I could write `async fn`, so why couldn't I write `async fn` inside a trait for the longest time? It took years of work to solve all the thorny issues blocking this in a stable, backwards-compatible way and finally ship it [1]. There is still more work to be done, but the good news is that they're making good progress here!
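A hedged sketch of what shipped (my example, not from the post): since Rust 1.75 an `async fn` can sit directly in a trait. The hand-rolled `block_on` below (no-op waker, busy poll) is only scaffolding so the example runs without pulling in an async runtime; it is not how real executors work.

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

trait Fetch {
    async fn fetch(&self) -> u32; // stable since 1.75
}

struct Constant(u32);

impl Fetch for Constant {
    async fn fetch(&self) -> u32 {
        self.0
    }
}

// Minimal executor: poll the future with a waker that does nothing.
fn block_on<F: Future>(fut: F) -> F::Output {
    fn raw() -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    unsafe fn clone(_: *const ()) -> RawWaker { raw() }
    unsafe fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    let waker = unsafe { Waker::from_raw(raw()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    println!("{}", block_on(Constant(7).fetch()));
}
```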

You pointed out one feature that Rust in Linux needs (no panics), but there are several more [2]. That list looks vast, because it is. It represents years of work completed and several more years of work in the Rust and Rust for Linux projects. It might seem reasonable to ask why we can't have it right now, but like Linus said recently "getting kernel Rust up to production levels will happen, but it will take years". [3] He also pointed out that the project to build Linux with clang took 10 years, so slow progress shouldn't discourage folks. The important thing is that the Rust project maintainers have publicly committed to working on it right now - "For 2024H2 we will work to close the largest gaps that block support (for adopting Rust in the kernel)". [4]

You dream of a language that could make bold breaking changes and mention Python 2.7 in passing. The Python 2/3 split was immensely painful and widely considered to be a mistake, even among the people who had advocated for it. The Rust project has a better mechanism for small, opt-in, breaking changes - the Edition system. That has worked well for the last 9 years and has led to tremendous adoption - more than doubling every year [5]. IMO there's no reason to fix what isn't broken.

I guess what I'm saying is, patience is the key here. Each release might not bring much because it only represents 6 weeks of work, but the cumulative effect of a year's worth of changes is pretty fantastic. Keep the faith.

[1] - https://blog.rust-lang.org/2023/12/21/async-fn-rpit-in-trait...

[2] - https://github.com/Rust-for-Linux/linux/issues/2

[3] - https://lwn.net/SubscriberLink/991062/b0df468b40b21f5d/

[4] - https://blog.rust-lang.org/2024/08/12/Project-goals.html

[5] - https://lib.rs/stats

devit · a year ago
You can make the struct generic on the type to have a field being an impl Trait type.
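A quick sketch of that workaround (my example): make the struct generic over the field's type and let inference fill in the unnameable `impl Trait` type.

```rust
fn make_counter() -> impl FnMut() -> u32 {
    let mut n = 0;
    move || {
        n += 1;
        n
    }
}

// `F` stands in for the closure type we can't name.
struct Holder<F: FnMut() -> u32> {
    counter: F,
}

fn main() {
    // F is inferred as the anonymous closure type from make_counter.
    let mut h = Holder { counter: make_counter() };
    println!("{}", (h.counter)());
    println!("{}", (h.counter)());
}
```

The cost of this workaround is that the generic parameter now leaks into every signature that touches `Holder`, which is part of why people still want nameable `impl Trait` types.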
cdaringe · a year ago
Dude YES. This stuff was maddening in rust. “How many links to GitHub issues will the compiler fire at me today” was a sincere feeling. I think it’s much better these days, but you still get em doing async work
kazinator · a year ago
It's easier to write RFCs than to implement them, and there are more people who can write RFCs. At any popularity level, you have more of the former. Therefore, an RFC queue will always look like a graveyard of good ideas, even if 100% of the queued ideas are being accepted and eventually worked on, simply due to the in/out differential rate.

If you want an edge over the people who are writing RFCs, don't write an RFC. Write a complete, production-ready implementation of your idea, with documentation and test cases, which can be cleanly merged into the tree.

JoshTriplett · a year ago
> If you want an edge over the people who are writing RFCs, don't write an RFC. Write a complete, production-ready implementation of your idea, with documentation and test cases, which can be cleanly merged into the tree.

Please by all means provide an implementation, but do write the RFC first. (Or in some cases smaller processes, such as the ACP process for a small standard-library addition.) Otherwise you may end up wasting a lot of effort, or having to rewrite the implementation. We are unlikely to accept a large feature, or even a medium feature, directly from a PR without an RFC.

hiimkeks · a year ago
In general I agree, but we are also in the paradoxical situation that generic associated constants in traits are stable, but you can't actually use them as constants. You can't use them as const generics for other types, and you can't use them for array lengths.

I'd argue that this makes them pretty useless: if you just want a value that you can use like any other, then you can define a function that returns it and be done with it. Now we have another way to do it, and in theory it could do more, but that RFC has been stale for several years, nobody seems to be working on it, and I believe it's not even in nightly.
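A hedged sketch of the limitation (my example): an associated const works as a plain value, and even as an array length for a concrete type, but not through a generic parameter on stable Rust.

```rust
trait Size {
    const N: usize;
}

struct Foo;
impl Size for Foo {
    const N: usize = 4;
}

// Concrete type: fine on stable.
fn concrete() -> [u8; Foo::N] {
    [0; Foo::N]
}

// The generic version does not compile on stable without the unstable
// `generic_const_exprs` feature:
// fn generic<T: Size>() -> [u8; T::N] { [0; T::N] }

fn main() {
    println!("{}", concrete().len()); // 4
}
```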

If the support were actually good, we could just get rid of all the support crates we have in cryptography libraries (like the generic_array and typenum crates).

That said, I agree that the Rust team should be careful about adding features.

nicce · a year ago
> Now we have another way to do it, and in theory it could do more, but that RFC has been stale for several years, nobody seems to be working on it, and I believe it's not even in nightly.

What is this way? I have been fighting with this problem for quite some time recently.

formerly_proven · a year ago
> Please keep Rust as lean as possible.

Alternatively: Rust is already the Wagyu of somewhat-mainstream PLs, don't keep adding fat until it's inedible.

Deleted Comment

goodpoint · a year ago
> Good ideas are "dime a dozen". Please keep Rust as lean as possible.

Good ideas are rare and precious by definition.

gary17the · a year ago
I think it would be helpful to clearly distinguish between "good ideas" and "excellent ideas". It's relatively easy in the complex art of programming to come up with a dozen good ideas. It seems very hard in the complex art of programming to come up with even one truly excellent idea.
dist1ll · a year ago
I think the dependency situation is pretty rough, and very few folks want to admit it. An example I recently stumbled upon: the cargo-watch[0] crate.

At its core it's a pretty simple app: it watches for file changes and re-runs the compiler. The implementation is less than 1000 lines of code. But what happens if I vendor the dependencies? It turns out the deps add up to almost 4 million lines of Rust code, spread across 8000+ files. For a simple file-watcher.

[0] https://crates.io/crates/cargo-watch

alexvitkov · a year ago
That's what inevitably happens when you make transitive dependencies easy and you have a culture of "if there's a library for it you must use it!"

C/C++ are the only widely used languages without a popular npm-style package manager, and as a result most libraries are self-contained or have minimal, often optional, dependencies. efsw [1] is a 7000-line (wc -l on the src directory) C++ FS watcher without dependencies.

The single-header libraries that are popular in the game programming space (stb_* [2], cgltf [3], etc) as well as of course Dear ImGui [4] have been some of the most pleasant ones I've ever worked with.

At this point I'm convinced that new package managers forbidding transitive dependencies would be an overall net gain. The biggest issue is large libraries that other ones justifiably depend on - OpenSSL, zlib, HTTP servers/clients, maybe even async runtimes. It's by no means an unsolvable problem; e.g., instead of having zlib as a transitive dependency:

1. a library can still hard-depend on zlib, and just force the user to install it manually.

2. a library can provide generic compress/decompress callbacks, that the user can implement with whatever.

3. the compress/decompress functionality can be made standard

[1] https://github.com/SpartanJ/efsw

[2] https://github.com/nothings/stb

[3] https://github.com/jkuhlmann/cgltf

[4] https://github.com/ocornut/imgui

lifthrasiir · a year ago
> The single-header libraries that are popular in the game programming space (stb_* [2], cgltf [3], etc) as well as of course Dear ImGui have been some of the most pleasant ones I've ever worked with.

Mainstream game programming doesn't use C at all. (Source: I was a gamedev for almost a decade, and I mostly dealt with C# and sometimes C++ for low-level stuff.) Even C++ has been out of fashion for at least a decade; anyone claiming that C++ is necessary for game programming is likely either an engine developer---a required, but very small, portion of all gamedevs---or someone who hasn't done significant game programming recently.

Also, the reason that single-header libraries are rather popular in C is that otherwise they would be so, SO painful to use by modern standards. As a result, those libraries have to be much more carefully designed than normal libraries in either C or other languages, which contributes to their seemingly higher quality. (Source: Again, I have written sizable single-header libraries in C and am aware of many issues from doing so.) I don't think this approach is scalable in general.

mike_hearn · a year ago
> as a result most libraries are self-contained or have minimal, and often optional dependencies

If you ignore the OS, then sure. Most C/C++ codebases aren't really portable however. They're tied to UNIX, Windows or macOS, and often some specific version range of those, because they use so many APIs from the base OS. Include those and you're up to millions of lines too.

xpe · a year ago
> That's what inevitably happens when you make transitive dependencies easy and you have a culture of "if there's a library for it you must use it!"

1. This doesn't mean that C++'s fragmented hellscape of package management is a good thing.

2. "inevitably"? No. This confuses the causation.

3. This comment conflates culture with tooling. Sure, they are related, but not perfectly so.

the_gipsy · a year ago
> a library can provide generic compress/decompress callbacks, that the user can implement with whatever.

This only works for extremely simple cases. Beyond toy examples, you have to glue together two full-blown APIs with a bunch of things not aligning at all.

Measter · a year ago
Having a quick look at efsw, it depends on both libc and the windows API, both are huge dependencies. The Rust bindings for libc come to about 122 thousand lines, while the winapi crate is about 180 thousand lines.

[Edit] And for completeness, Microsoft's Windows crate is 630 thousand lines, though that goes way beyond simple bindings, and actually provides wrappers to make its use more idiomatic.

xpe · a year ago
> At this point I'm convinced that new package managers forbidding transitive dependencies would be an overall net gain.

Composition is an essential part of software development, and it crosses package boundaries.

How would banishing inter-package composition be a net gain?

DanielHB · a year ago
The fact is that the dependency jungle is the prevalent way to get shit done these days. The best the runtime can do is embrace it, make it as performant and safe as possible, and try to support minimum-dependency projects by having a broad std library.

Also I am no expert, but I think file-watchers are definitely not simple at all, especially if they are multi-platform.

kreyenborgi · a year ago
https://github.com/eradman/entr is

    Language                     files          blank        comment           code
    -------------------------------------------------------------------------------
    C                                4            154            163            880
    Bourne Shell                     2             74             28            536
    C/C++ Header                     4             21             66             70
    Markdown                         1             21              0             37
    YAML                             1              0              0             14
    -------------------------------------------------------------------------------
    SUM:                            12            270            257           1537
    -------------------------------------------------------------------------------
including a well-designed CLI.

entr supports BSD, Mac OS, and Linux (even WSL). So that's several platforms in <2k lines of code. By using MATHEMATICS and EXTRAPOLATION we find that non-WSL Windows file-watching must take four million minus two thousand equals calculate calculate 3998000 lines of code. Ahem.

Though to be fair, cargo watch probably does more than just file-watching. (Should it? Is it worth the complexity? I guess that depends on where you land on the worse-is-better discussion.)

dist1ll · a year ago
That's the usual response I get when I bring this issue up: "file watching is actually very complicated" or "if you avoided deps, you'd just reimplement millions of loc yourself."

Forgive me if I'm making a very bold claim, but I think cross-platform file watching should not require this much code. It's 32x larger than the Linux memory management subsystem.

SkiFire13 · a year ago
> try to support minimum-dependency projects by having a broad std library.

Since everyone depends on the standard library, this will just mean everyone will depend on even more lines of code. You are decreasing the number of nominal dependencies but increasing how much code those amount to.

Moreover the moment the stdlib's bundled dependency is not enough there are two problems:

- it can't be changed because that would be a breaking change, so you're stuck with the old bad implementation;

- you will have to use an alternative implementation in another crate, so now you're back at the starting situation except with another dependency bundled in the stdlib.

Just look at the dependency situation with the python stdlib, e.g. how many versions of urllib there are.

chillfox · a year ago
> I think file-watchers are definitely not simple at all

I don't really know much about Rust, but I got curious and had a look at the file watching apis for windows/linux/macos and it really didn't seem that complicated. Maybe a bit fiddly, but I have a hard time imagining how it could take more than 500 lines of code.

I would love to know where the hard part is if anyone knows of a good blog post or video about it.
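For a sense of the baseline (my sketch, not the OS event APIs mentioned above): the naive pure-std approach is to record mtimes, sleep, and compare. Real watchers replace the polling loop with inotify / FSEvents / ReadDirectoryChangesW and add recursion, debouncing, and rename tracking, which is where the extra lines go.

```rust
use std::fs;
use std::thread::sleep;
use std::time::{Duration, SystemTime};

fn mtime(path: &str) -> Option<SystemTime> {
    fs::metadata(path).ok()?.modified().ok()
}

// Write, wait (so coarse filesystem timestamps can tick), write again,
// and report whether the mtime moved - the core check a poller loops on.
fn changed_after_write(path: &str) -> bool {
    fs::write(path, "v1").unwrap();
    let before = mtime(path);
    sleep(Duration::from_millis(50));
    fs::write(path, "v2").unwrap();
    mtime(path) != before
}

fn main() {
    println!("{}", changed_after_write("watched.txt"));
}
```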

0x000xca0xfe · a year ago
If you lose track of your dependencies you are just asking for supply chain attacks.

And since xz we know resourceful and patient attackers are reality and not just "it might happen".

Sorry but sprawling transitive micro-dependencies are not sustainable. It's convenient and many modern projects right now utilize it but they require a high-trust environment and we don't have that anymore, unfortunately.

cogman10 · a year ago
This is a natural and not really scary thing.

All code is built on mountains of dependencies that by their nature will do more than what you are using them for. For example, cargo watch brings in a win32 API wrapper library (which is just autogenerated bindings for win32 calls). Of course that thing is going to be massive, while watch uses only a sliver of it, and only when built for Windows.

The standard library for pretty much any language will have millions of lines of code, that's not scary even though your apps likely only use a fraction of what's offered.

And have you ever glanced at C++'s boost library? That thing is monstrously big yet most devs using it are going to really only grab a few of the extensions.

The alternative is the npm hellscape where you have a package for "isOdd" and a package for "is even" that can break the entire ecosystem if the owner is disgruntled because everything depends on them.

Having fewer larger dependencies maintained and relied on by multiple people is much more ideal and where rust mostly finds itself.

throwitaway1123 · a year ago
> The alternative is the npm hellscape where you have a package for "isOdd" and a package for "is even" that can break the entire ecosystem if the owner is disgruntled because everything depends on them.

The is-odd and is-even packages are in no way situated to break the ecosystem. They're helper functions that their author (Jon Schlinkert) used as dependencies in one of his other packages (micromatch) 10 years ago, and consequently show up as transitive dependencies in antiquated versions of micromatch. No one actually depends on this package indirectly in 2024 (not even the author himself), and very few packages ever depended on it directly. Micromatch is largely obsolete given the fact that Node has built in globbing support now [1][2]. We have to let some of these NPM memes go.

[1] https://nodejs.org/docs/latest-v22.x/api/path.html#pathmatch...

[2] https://nodejs.org/docs/latest-v22.x/api/fs.html#fspromisesg...

preommr · a year ago
> The alternative is the npm hellscape where you have a package for "isOdd" and a package for "is even" that can break the entire ecosystem if the owner is disgruntled because everything depends on them.

This used to be true 5-10 years ago. The js ecosystem moves fast and much has been done to fix the dependency sprawl.

lifthrasiir · a year ago
I consciously remove and rewrite various dependencies at work, but I feel that's only half of the whole story, because both 1K and 4M lines of code seem like equally inaccurate estimates of the appropriate LoC count for this project.

It seems that most dependencies of cargo-watch are pulled in by three direct requirements: clap, cargo_metadata and watchexec. Clap would pull in lots of CLI things that are naturally platform-dependent, while cargo_metadata will surely pull in most of the serde stack. Watchexec does have room for improvement though, because it depends on command-group (maintained in the same org) which unconditionally requires Tokio! Who would have expected that? Once watchexec is improved on that front, however, I think these requirements are indeed necessary for the project's goal, and any further dependency removal will probably come with some downsides.

A bigger problem here is that you can't easily fix other crates' excessive dependencies. Watchexec can surely be improved, but what if other crates are stuck on an older version of watchexec? There are some cases where you can just tweak Cargo.lock to get things aligned, but generally you can't do that. You have to live with excessive and/or duplicate dependencies (not a huge problem by itself, so that's the default for most people) or work around it with `[patch]` sections. (Cargo is actually in better shape given that the second option is even possible at all!) In my opinion there should be some easy way to define a "stand-in" for a given version of a crate, so that such dependency issues can be worked around more systematically. But any such solution would be a huge research problem for any existing package manager.
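For readers unfamiliar with the `[patch]` mechanism, a minimal Cargo.toml fragment (the local path and crate choice here are purely illustrative):

```toml
# In the top-level project's Cargo.toml. This redirects every use of
# watchexec in the dependency graph to a local checkout; the local
# version must still satisfy the dependents' version requirements.
[patch.crates-io]
watchexec = { path = "../watchexec-slim" }
```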

cmrdporcupine · a year ago
It's frustrating because the grand-daddy of build systems with automatic transitive dependency management -- Maven -- already had tools from day one to handle this kind of thing through excluded dependencies (a blunt instrument, but sometimes necessary). In my experience, [patch] doesn't cut it or compare.

That, and the maven repository is moderated. Unlike crates.io.

Crates.io is a real problem. No namespaces, basically unmoderated, tons of abandoned stuff. Version hell like you're talking about.

I have a hard time taking it at all seriously as a professional tool. And it's only going to get worse.

If I were starting a Rust project from scratch inside a commercial company at this point, I'd use Bazel or Buck or GN/Ninja and vendored dependencies. No Cargo, no crates.io.

conradludgate · a year ago
I bet most of those lines are from the generated windows api crates. They are notoriously monstrous
dist1ll · a year ago
You're right, the windows crate alone contributes 2.2M. I wonder if there's a way to deal with this issue.
jeroenhd · a year ago
I love and hate the Windows API crates. They're amazing in that they bring pretty much the entire modern Windows API into the language without needing to touch FFI generators yourself, but the Windows API is about as large as almost every package that comes with a desktop Linux install.

I wish crates that used Windows stuff wouldn't enable it by default.

nullifidian · a year ago
Some of the risk from the "dependency jungle" situation could be alleviated by instituting a "trusted" set of crates, selected by some popularity threshold, with a rolling-release, Linux-distro-like stabilization chain graduating from "testing" to "stable". If the Rust Foundation raised more money from the large companies and hired devs to work as additional maintainers for these key crates, adding their sign-offs, it would be highly beneficial. That would be a naturally evolving and changing equivalent of an extensive standard library. Mandating at least two maintainer sign-offs for such a critical set of crates would be a good policy. Instead, the large companies that use Rust prefer to vet the crates on their own individually, duplicating the work the other companies do.

The fact that nothing has changed in the NPM and Python worlds indicates that market forces pressure the decision makers to prefer the more risky approach, which prioritizes growth and fast iteration.

tbillington · a year ago
Vendoring + line counts unfortunately doesn't accurately represent what cargo-watch would actually use. It includes all platform-specific code behind compile-time toggles, even though only one platform's code is used at any particular time, and it doesn't account for code excluded because a feature wasn't enabled. https://doc.rust-lang.org/cargo/reference/features.html

Whether those factors affect how you view the line count is subjective.

also as one of the other commenters mentioned, cargo watch does more than just file watching

the_clarence · a year ago
Agree. That was always my major gripe with Rust: it's not batteries-included. The big selling point of Go was the batteries-included part, and I think that's really what is missing in Rust. I hope that with time more stuff can get into the Rust stdlib.
umanwizard · a year ago
Why, concretely, does this matter?

Other than people who care about relatively obscure concerns like distro packaging, nobody is impeded in their work in any practical way by crates having a lot of transitive dependencies.

josephg · a year ago
Author here. If I compile a package which has 1000 transitive dependencies written by different authors, there are ~1000 people who can execute arbitrary code on my computer, with my full user permissions. I wouldn't even know if they did.

That sounds like a massive security problem to me. All it would take is one popular crate to get hacked / bribed / taken over and we're all done for. Giving thousands of strangers the ability to run arbitrary code on my computer is a profoundly stupid risk.

Especially given it's unnecessary. 99% of crates don't need the ability to execute arbitrary syscalls. Why allow that by default?

zifpanachr23 · a year ago
Because for a lot of companies, especially ones in industries that Rust is supposedly hoping to displace C and C++ in, dependencies are a much larger concern than memory safety. They slow down velocity way more than running massive amounts of static and dynamic analysis tools to detect memory issues does in C. Every dependency is going to need explicit approval. And frankly, most crates would never receive that approval given the typical quality of a lot of the small utility crates and other transitive dependencies. Not to mention, the amount of transitive dependencies and their size in a lot of popular crates makes them functionally unauditable.

This, more than any other issue, is I think what prevents Rust adoption outside of the more dependency-liberal companies in big tech and the web parts of the economy.

This is actually, in my view, one positive of the rather unwieldy process of using dependencies and building C/C++ projects: there's a much bigger culture of care and minimalism w.r.t. choosing to take on a dependency in open source projects.

Fwiw, the capabilities feature described in the post would go a very long way towards alleviating this issue.

goodpoint · a year ago
There's been many massive supply chain attacks happening.

And people are still calling it "obscure concerns"...

hawski · a year ago
The friction in the C and C++ library ecosystem is sometimes a feature for this sole reason. Many libraries try to pull in as little as possible and make other things optional.
moss2 · a year ago
Same problem with JavaScript's NPM. And Python's PIP.
jwr · a year ago
This isn't necessarily a language problem, though, more of a "culture" problem, I think.

I write in Clojure and I take great pains to avoid introducing dependencies. Contrary to the popular mantra, I will sometimes implement functionality instead of using a library, when the functionality is simple, or when the intersection area with the application is large (e.g. the library doesn't bring as many benefits as just using a "black box"). I will work to reduce my dependencies, and I will also carefully check if a library isn't just simple "glue code" (for example, for underlying Java functionality).

This approach can be used with any language, it just needs to be pervasive in the culture.

iforgotpassword · a year ago
Maybe they can learn from the Javascript folks, I heard they're very good at this.
pjmlp · a year ago
I think the interaction between both communities is exactly the reason of the current state.
teaearlgraycold · a year ago
Not sure if you're serious and talking about tree-shaking - or joking and talking about left-pad.
mseepgood · a year ago
No, they are the worst perpetrators re dependency hell.
M4Luc · a year ago
The Javascript folks are at least aware and self critical of this. In the Rust community it's sold as a great idea.
wiseowise · a year ago
Yes, unironically, they are now.

Node has improved greatly in the last two years. It always had native JSON support. Now it has a native test runner, watch, and fetch; it's working on a permission system à la Deno; it has added WebSockets and is working on a native SQLite driver. All of this makes it a really attractive platform for prototyping, one which scales from a hello world without any dependencies to production.

Good luck experimenting with Rust without pulling half the internet with it.

E: and they’re working on native TS support

M4Luc · a year ago
Another example is Axum. Using Go, C#, Deno or Node you don't even need any third-party lib that's only more or less secure and maintained. It all comes from the core teams.
olalonde · a year ago
Why do you care how many lines of code the dependencies are? Compile time? Lack of disk space?
ptsneves · a year ago
Think of the problem as a bill of materials. Knowing the origin and that all the components of a part are fit for purpose is important for some applications.

If I am making a small greenhouse I can buy steel profiles and not care what steel they are made from. If I am building a house I actually want a specific standardized profile, because my structure's calculations rely on it; my house will collapse if they don't hold. If I am building a jet engine part I want a specific alloy, with all the component metals and foundry details, and will reject the part if the provenance is not known or suitable[1].

If I am doing my own small script for personal purposes I don't care much about packaging and libraries, just that it accomplishes my immediate task in my environment. If I have a small tetris application I also don't care much about libraries, or their reliability. If I have a business selling my application and I am liable for its performance and security I damn sure want to know all about my potential liabilities and mitigate them.

[1] https://www.usatoday.com/story/travel/airline-news/2024/06/1...

M4Luc · a year ago
Security and maintenance. That's what's so compelling about Go. The std lib is not a pleasure to use. Or esp. fast and featureful. But you can rely on it. You don't depend on 1000 strangers on the internet that might have abandoned their Rust crate for 3 years and nobody noticed.
cmrdporcupine · a year ago
Some of us like to understand what's happening in the software we work on, and don't appreciate unnecessary complexity or unknown paths in the codebase that come through third party transitive dependencies.

Some of us have licensing restrictions we have to adhere to.

Some of us are very concerned about security and the potential problems of unaudited or unmoderated code that comes in through a long dependency chain.

Hard learned lessons through years of dealing with this kind of thing: good software projects try to minimize the size of their impact crater.

armitron · a year ago
This is the main reason we have banned Rust across my Org. Every third party library needs to be audited before being introduced as a vendored dependency which is not easy to do with the bloated dependency chains that Cargo promotes.
skywal_l · a year ago
The dependency hell issue is not directly related to Rust. The Rust language can be used without using any dependency. Have you banned javascript and python too?
OtomotO · a year ago
Good on you, this approach will keep you employed for a looooooooong time, because someone has to write all that code then, right? ;)
cmrdporcupine · a year ago
Why ban Rust instead of just banning Cargo?

It's entirely possible to use Rust with other build systems, with vendored dependencies.

Crates.io is a blight. But the language is fine.

j-krieger · a year ago
How do you solve this for other languages you use?
simonask · a year ago
I'm sorry, but that feels like an incredibly poorly informed decision.

One thing is to decide to vendor everything - that's your prerogative - but it's very likely that pulling everything in also pulls in tons of stuff that you aren't using, because recursively vendoring dependencies means you are also pulling in dev-dependencies, optional dependencies (including default-off features), and so on.

For the things you do use, is it the number of crates that is the problem, or the amount of code? Because if the alternative is to develop it in-house, then...

The alternative here is to include a lot of things in the standard library that don't belong there, because people seem to exclude standard libraries from their auditing, which is reasonable. Why is it not just as reasonable to exclude certain widespread ecosystem crates from auditing?

joatmon-snoo · a year ago
This is what lockfiles are for.
dathinab · a year ago
> It turns out, the deps add up to almost 4 million lines of Rust code, spread across 8000+ files

(Putting aside the question of whether or not that count pulls in dev dependencies; that watching files can easily have OS-specific aspects, so you might have different dependencies on different OSes; that neither lines nor, even less, file counts are a good measurement of complexity; and that these dependencies involve a lot of code from unused features, which, due to the way Rust is compiled, is reliably excluded from the final binary in most cases. Also ignoring that cargo-watch doesn't implement file watching itself; it's in many respects a wrapper around watchexec, which makes it much "thinner" than it would be otherwise.)

What if that is needed for a reliable, robust ecosystem?

I mean, I know, it sounds absurd, but give it some thought.

I wouldn't want every library to reinvent the wheel again and again for all kinds of things, so I would want them to use dependencies, and I would want those dependencies to be robust, tested, mature and maintained. Naturally this applies transitively. But which libraries become "robust, tested, mature and maintained": the ones that provide a small, good-enough-for-you subset of some functionality, or the ones that support the full functionality, making them usable for a wider range of use cases?

And with that in mind let's look at cargo-watch.

First, it's a CLI tool, so with the points above in mind you need a good CLI parser, so you use e.g. clap. But at this point you are already pulling in a _huge_ number of lines of code, the majority of which will be eliminated as dead code. Though you don't have much choice: you don't want to reinvent the wheel, and for a CLI library to be widely successful (which is often needed for it to be long-term tested, maintained, and e.g. forked if the maintainers disappear) it needs to cover all widely needed CLI features, not just the subset you use.

Then you need to handle configs, so you include dotenvy. You have a desktop-notification feature; again, no reason to reinvent that, so you pull in rust-notify. Handling paths in a cross-platform manner has tricky edge cases, so camino and shell-escape get pulled in. You log warnings, so log + stderrlog get pulled in, which for message coloring and similar pull in atty and termcolor, even though they probably need only a small subset of atty. But again, no reason to reinvent the wheel, especially for things as iffy and bug-prone as reliable tty handling across many different ttys. Lastly, watching files is harder than it seems and the notify library already implements it, so we use that; wait, it's quite low-level, and watchexec provides exactly the interface we need, so we use that instead (and if we didn't, we would still use most or all of watchexec's dependencies).

And ignoring watchexec (around which the discussion would become more complex), with the standards above you wouldn't want to reimplement the functionality of any of these libraries yourself. It's not even about implementation effort, but about things like overlooked edge cases, maintainability, etc.

And while you can definitely make the point that in some places dependencies can and maybe should be reduced, IMHO this doesn't change the general conclusion: you need most of these dependencies if you want to conform to the standards pointed out above.

And tbh, I have seen way too many cases of projects shaving off dependencies, adding "more compact wheel reinventions" for their subset, and then running into all kinds of bugs half a year later. Sometimes the partial reimplementations grow bigger and bigger until they aren't much smaller than the original project.

Don't get me wrong, there definitely are cases of (the things you use from) dependencies being too small to be worth it (e.g. left-pad), or, more commonly, of it taking more time (short term) to find a good library and review it than to reimplement the functionality yourself (though long term that's quite often a bad idea).

So idk, I don't think the issue is transitive dependencies, or too many dependencies, at all.

BUT I think there are issues wrt. handling software supply-chain aspects. That is a different kind of problem, though, with different solutions. And sure, not having dependencies avoids that problem, somewhat, but IMHO it just replaces it with a different, equally bad problem.

wiseowise · a year ago
What do you propose? To include it as part of std? Are you insane? That would bloat your binaries! (Still don’t understand how the smart compiler isn’t smart enough to remove dead code) And imagine if there’s an update that makes cargo-watch not BlAzInGlY fAsT™ but uLtRa BlAzInGlY fAsT™? /s
willvarfar · a year ago
How does Go compare?

I'm curious as I don't know Go but it often gets mentioned here on HN as very lightweight.

(A quick googling finds https://pkg.go.dev/search?q=watch which makes me think that it's not any different?)

TechDebtDevin · a year ago
You fuck around...
bjackman · a year ago
Rust isn't an Exciting New Language any more. It's in the "work towards widespread adoption" phase. Slower feature development is natural and healthy, the stakes are high, mistaken design choices are much more harmful than low velocity at this point.

I'm not excited about Rust because of cool features, I'm excited because it's a whole new CLASS of language (memory safe, no GC, production ready). Actually getting it into the places that matter is way more interesting to me than making it a better language. That's easier to achieve if people are comfortable that the project is being steered with a degree of caution.

josephg · a year ago
Maybe. But javascript is arguably in that phase of its life as well, and JS has had oodles of wonderful new features added in the last decade. Features like the spread operator, generator functions, async, arrow functions, leftpad, a new Date, and so on. The list of significant new features is endless.

All that, despite JS being much older than rust, and much more widely used. Javascript also has several production implementations - which presumably all need to agree to implement any new features.

Javascript had a period of stagnation around ES5. The difference seems to be that the ecmascript standards committee got their act together.

tinco · a year ago
They got their act together because there was a language built on top of Javascript that fixed all its problems, and it was quickly gaining wide adoption. If they hadn't done anything, we'd probably still be transpiling CoffeeScript.

History repeated itself, and now Typescript has even more popularity than CoffeeScript ever did, so if the ecma committee is still on their act, they're probably working on figuring out how to adopt types into Javascript as well.

More relevant to this argument is the question of whether a similar endeavor would work for Rust. Are the features you're describing so life-changing that people would work in a transpiled language that had them? For CoffeeScript, from my perspective at least, it was just the arrow functions. All the sugar on top just sealed the deal.

gary17the · a year ago
Javascript has a quite different use-case audience than Rust. As an example, try to convince a guy like Linus Torvalds to officially support a particular PL for Linux kernel development, when his absolute priority (quite rightly so) is predictable, performant and portable code generation on the same level as raw C, with ease-of-use of a PL not being even a distant second, if considered at all. JavaScript does not really have to live up to those kinds of challenges.

The assumption that "[Rust] stagnation" is due to some kind of "Rust committee inefficiencies" might be incorrect.

johnisgood · a year ago
Is it really a new class of language considering we had Ada / SPARK for ages? It takes safety further, too, with formal verification.
thesuperbigfrog · a year ago
>> Is it really a new class of language considering we had Ada / SPARK for ages? It takes safety further, too, with formal verification.

Rust and Ada have similar goals and target use cases, but different advantages and strengths.

In my opinion, Rust's biggest innovations are 1) borrow checking and "mutation XOR sharing" built into the language, effectively removing the need for manual memory management or garbage collection, 2) Async/Await in a low-level systems language, and 3) Superb tooling via cargo, clippy, built-in unit tests, and the crates ecosystem (in a systems programming language!) Rust may not have been the first with these features, but it did make them popular together in a way that works amazingly well. It is a new class of language due to the use of the borrow checker to avoid memory safety problems.
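To make point 1 concrete, here is a minimal sketch of "mutation XOR sharing"; the function names are just for illustration:

```rust
// "Mutation XOR sharing": a value may have many shared (&) borrows
// or exactly one mutable (&mut) borrow, never both at once.
fn sum_shared(v: &[i32]) -> i32 {
    // Two shared borrows of the same data coexist happily.
    let a = &v;
    let b = &v;
    a.iter().sum::<i32>() + b.len() as i32
}

fn push_exclusive(v: &mut Vec<i32>) {
    // While this &mut borrow lives, no other borrow of the vector may
    // exist; the compiler rejects any overlap at compile time.
    v.push(4);
}

fn main() {
    let mut v = vec![1, 2, 3];
    assert_eq!(sum_shared(&v), 9); // sum 6 + len 3
    push_exclusive(&mut v);
    assert_eq!(v, vec![1, 2, 3, 4]);
}
```

This compile-time rule is what lets Rust skip both manual frees and a garbage collector.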

Ada's strengths are its 1) powerful type system (custom integer types, use of any enumerated type as an index, etc.), 2) perfect fit for embedded programming with representation clauses, the real-time systems annex, and the high integrity systems annex, 3) built-in Design-by-Contract preconditions, postconditions, and invariants, and 4) Tasking built into the language / run-time. Compared to Rust, Ada feels a bit clunky and the tooling varies greatly from one Ada implementation to another. However, for some work, Ada is the only choice because Rust does not have sufficiently qualified toolchains yet. (Hopefully soon . . .)

Both languages have great foreign function interfaces and are relatively easy to use with C compared to some other programming languages. Having done a fair bit of C programming in the past, today I would always choose Rust over C or C++ when given the choice.

sacado2 · a year ago
It also has range types, avoiding a whole class of bugs.

Dead Comment

knighthack · a year ago
Since Rustaceans are so neurotic about rewriting everything in Rust, I genuinely thought that an article about rewriting Rust (in Rust) had to be a meta-satirical joke.
zokier · a year ago
That happened already way back in the prehistory :)

Originally Rust was written in OCaml, but eventually it got rewritten in Rust

josephg · a year ago
Author here. That was the reference I was going for with the title :D
nineteen999 · a year ago
They want you (us?) to rewrite everything in Rust. Not them.
simonask · a year ago
Who is "they"? Seriously, who?
lopatin · a year ago
PL people also like bootstrapping languages. Writing Rust in Rust might not be that far fetched?
robinsonrc · a year ago
It’s long since happened
dathinab · a year ago
It's kinda strange how he complains first about a slow decision making process and then lists features which are not stabilized for reasons fully unrelated to the decision making.

E.g. coroutines are stuck because they have some quite hard-to-resolve corner cases; the compiler doesn't contain a full implementation you could "just turn on", but an incomplete one which works okay for many cases but really can't be enabled on stable. (At least this was the case last time I checked.) Similarly, function traits were explicitly decided not to be stabilized like that, for various technical reasons but also because they would change if future features (like async coroutines) land. Sure, the part about return values not being associated types is mostly for backward compatibility, but in nearly all situations it's just a small ergonomics drawback.

And sure, there are some backward-compatibility-related designs which people would have loved to do differently if they'd had more time and resources when the decision was made. But most of those date back to the very early Rust days, when the team was much smaller and there were fewer resources for evaluating important decisions.

And sure, a break which changes a bunch of older decisions, now that different choices can be made and people are more experienced, would be nice. BUT after how catastrophically bad python2->python3 went, and similar experiences in other languages, many people agree that keeping some rough corners is probably better than making a Rust 2.0. (And many of these things can't be done through Rust editions!)

In general, if you follow the Rust weekly newsletter you can see that decisions on RFC acceptance, including stabilization, are handled every week.

And sure, sometimes (quite too often) things take too long, but people/coordination/limited-time problems are often harder to solve than technical problems.

And sure, some old features are stuck (coroutines), but many "feature gates" aren't "implemented but stuck" features at all (they are e.g. things which were never meant to be stabilized, abandoned features, or cases where one feature has multiple different feature gates, etc.).

mplanchard · a year ago
Shouldn’t read this without also reading Josh Triplett’s comment in response on reddit. One of the core examples in this post is just plain wrong (mutexes), for example: https://old.reddit.com/r/rust/comments/1fpomvp/rewriting_rus...

Edit: nevermind, comment is here too: https://news.ycombinator.com/item?id=41655268

gyre007 · a year ago
One of the things that hit me when I was picking up Rust was that I felt like it had every imaginable feature one could think of - I don't know if the Rust team said no to anything (yes, I know they obviously must have done) - and yet people wanted more and more (some justifiably, others less so), as the language "felt" incomplete, or as if features that'd be used by 2% of devs were totally necessary in a language that is "understood" by 1% of the developer populace. I'm not saying the author is wrong here, just pointing out how a complex language somehow needs to be even more complicated. Spoiler: it doesn't. Zig is simpler, arguably faster, with much less drama in the community. I wish more funding went to Zig.
SkiFire13 · a year ago
You'll be surprised by the amount of features that are often proposed by random people and are then rejected by the Rust community. Rust is definitely not trying to add all possible features, though you might get that feeling when you look at some feature like GATs and TAITs without having a clear idea of what problems they solve.

Also, Zig might be a nice modern language, but it is not an option if you're aiming for memory safety.

pa7ch · a year ago
I think any replacement for c/c++ will not be strictly safe from memory safety vulnerabilities, but I think both Rust and Zig go far enough to effectively nearly eliminate that entire class of vulns in production software. Rust achieves further memory safety than most with its borrow checker but in many cases that seems to be more about safety from crashing than vulns. For example, Go is not memory safe under concurrency, but there have been no memory safety vulns related to its concurrency ever.

One could also argue Rust's unsafe blocks will be harder to reason about bugs in than Zig code. And if you don't need any unsafe blocks it might not be an application best suited to Zig or Rust.

pas · a year ago
GAT solves typing problems (by making a subset of HKT possible)
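For the curious, the classic typing problem GATs solve is the "lending iterator", whose items borrow from the iterator itself; this sketch uses made-up names (`LendingIterator`, `Windows`), not any real crate:

```rust
// A generic associated type (stable since Rust 1.65) lets an
// associated type carry its own lifetime parameter.
trait LendingIterator {
    type Item<'a> where Self: 'a; // the GAT: Item may borrow from self
    fn next(&mut self) -> Option<Self::Item<'_>>;
}

// Yields overlapping 2-element windows. Plain `Iterator` can't type
// this pattern, because its `Item` can't name the `&mut self` lifetime.
struct Windows<'s> {
    slice: &'s [i32],
    pos: usize,
}

impl<'s> LendingIterator for Windows<'s> {
    type Item<'a> = &'a [i32] where Self: 'a;
    fn next(&mut self) -> Option<&[i32]> {
        if self.pos + 2 > self.slice.len() {
            return None;
        }
        let w = &self.slice[self.pos..self.pos + 2];
        self.pos += 1;
        Some(w)
    }
}

fn main() {
    let data = [1, 2, 3];
    let mut it = Windows { slice: &data, pos: 0 };
    assert_eq!(it.next(), Some(&[1, 2][..]));
    assert_eq!(it.next(), Some(&[2, 3][..]));
    assert_eq!(it.next(), None);
}
```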
josephg · a year ago
Author here.

> I’m not saying the author is wrong here, just pointing out how a complex language somehow needs to be even more complicated. Spoiler: it doesn’t.

True. But I think a lot of rust's complexity budget is spent in the wrong places. For example, the way Pin & futures interact adds a crazy amount of complexity to the language. And I think at least some of that complexity is unnecessary. As an example, I'd like a rust-like language which doesn't have Pin at all.

I suspect there's also ways the borrow checker could be simplified, in both syntax and implementation. But I haven't thought enough about it to have anything concrete.

I don't think there's much we can do about any of that now short of forking the language. But I can certainly dream.

Rust won't be the last language invented which uses a borrow checker. I look forward to the next generation of these ideas. I think there's probably a lot of ways to improve things without making a bigger language.

arresin · a year ago
> I wish more funding went to Zig.

Unfortunately that attracts the worst types. And their crapness and damage potential is sometimes not realised until it’s way too late.

simonask · a year ago
I'm curious, what drama in the Rust community are you referring to?

I see some drama associated with Rust, but it's usually around people resisting its usage or adoption (the recent kerfuffle about Rust for Linux, for example), and not really that common within the community. But I could be missing something?

Zig is great, but it just isn't production ready.

anonfordays · a year ago
>I'm curious, what drama in the Rust community are you referring to?

https://news.ycombinator.com/item?id=36122270

https://news.ycombinator.com/item?id=29343573

https://news.ycombinator.com/item?id=29351837

The Ashley "Kill All Men" Williams drama was pretty bad. She had a relationship with a core Rust board member at the time so they added her on just because. Any discussion about her addition to the board was censored immediately, reddit mods removed and banned any topics and users mentioning her, etc.

chrisco255 · a year ago
On drama: https://users.rust-lang.org/t/why-is-there-so-much-mismanage...

Also, Zig is set to release 1.0 beta in November.

throwup238 · a year ago
The graveyard of features in nightly is actually pretty big. Important stuff like specialization is forever stuck there.
SkiFire13 · a year ago
AFAIK many of those language features (specialization included) are blocked by the rewrite of the trait solver.
Ygg2 · a year ago
While there are truckloads of nightly-only features, some are stuck there for a good reason.

Specialization allows unsound behavior in safe Rust, which is exactly the kind of thing nightly is supposed to catch.

nemothekid · a year ago
>with much less drama in the community

There are only two kinds of languages: the ones people complain about and the ones nobody uses.

Much of Rust's (and almost every other large programming language) drama are problems of scale, not implementation. The more funding you wish for will indubitably create more drama.

lifthrasiir · a year ago
Zig is already far more complex than what was originally presented anyway, while Rust 1.0 and the current Rust are mostly identical. (Pre-1.0 versions of Rust were heavily changing and underwent at least two or three extreme changes that make them essentially different languages with the same name.) Zig should be funded more for other reasons, but I don't think Zig would be safe from this eventual complexity problem.
OtomotO · a year ago
I wish Zig had a borrow checker... then we could see how much better it would fare.

(This is not a diss on Zig at all, I love its approach!)

rapnie · a year ago
> I felt like it had every imaginable feature one could think of - I dont know if Rust team said no to anything

Ah, like Scala you mean?

JoshTriplett · a year ago
> Now, there are issue threads like this, in which 25 smart, well meaning people spent 2 years and over 200 comments trying to figure out how to improve Mutex. And as far as I can tell, in the end they more or less gave up.

The author of the linked comment did extensive analysis on the synchronization primitives in various languages, then rewrote Rust's synchronization primitives like Mutex and RwLock on every major OS to use the underlying operating system primitives directly (like futex on Linux), making them faster and smaller and all-around better, and in the process, literally wrote a book on parallel programming in Rust (which is useful for non-Rust parallel programming as well): https://www.oreilly.com/library/view/rust-atomics-and/978109...

> Features like Coroutines. This RFC is 7 years old now.

We haven't been idling around for 7 years (either on that feature or in general). We've added asynchronous functions (which whole ecosystems and frameworks have arisen around), traits that can include asynchronous functions (which required extensive work), and many other features that are both useful in their own right and needed to get to more complex things like generators. Some of these features are also critical for being able to standardize things like `AsyncWrite` and `AsyncRead`. And we now have an implementation of generators available in nightly.

(There's some debate about whether we want the complexity of fully general coroutines, or if we want to stop at generators.)
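Of the pieces mentioned above, async functions in traits are already usable on stable Rust (1.75+). A minimal sketch; the trait, `Stub`, and the toy `block_on` executor are all made up for illustration (real code would use tokio, embassy, or similar):

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// `async fn` directly in a trait, stable since Rust 1.75.
trait Fetcher {
    async fn fetch(&self) -> u32;
}

struct Stub;
impl Fetcher for Stub {
    async fn fetch(&self) -> u32 {
        42 // a real implementation would await I/O here
    }
}

// A toy executor, sufficient only for futures that never suspend.
fn block_on<F: Future>(fut: F) -> F::Output {
    unsafe fn clone(_: *const ()) -> RawWaker { raw() }
    unsafe fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    fn raw() -> RawWaker { RawWaker::new(std::ptr::null(), &VTABLE) }

    let waker = unsafe { Waker::from_raw(raw()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    assert_eq!(block_on(Stub.fetch()), 42);
}
```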

Some features have progressed slower than others; for instance, we still have a lot of discussion ongoing for how to design the AsyncIterator trait (sometimes also referred to as Stream). There have absolutely been features that stalled out. But there's a lot of active work going on.

I always find it amusing to see, simultaneously, people complaining that the language isn't moving fast enough and other people complaining that the language is moving too fast.

> Function traits (effects)

We had a huge design exploration of these quite recently, right before RustConf this year. There's a challenging balance here between usability (fully general effect systems are complicated) and power (not having to write multiple different versions of functions for combinations of async/try/etc). We're enthusiastic about shipping a solution in this area, though. I don't know if we'll end up shipping an extensible effect system, but I think we're very likely to ship a system that allows you to write e.g. one function accepting a closure that works for every combination of async, try, and possibly const.

> Compile-time Capabilities

Sandboxing against malicious crates is an out-of-scope problem. You can't do this at the language level; you need some combination of a verifier and runtime sandbox. WebAssembly components are a much more likely solution here. But there's lots of interest in having capabilities for other reasons, for things like "what allocator should I use" or "what async runtime should I use" or "can I assume the platform is 64-bit" or similar. And we do want sandboxing of things like proc macros, not because of malice but to allow accurate caching that knows everything the proc macro depends on - with a sandbox, you know (for instance) exactly what files the proc macro read, so you can avoid re-running it if those files haven't changed.

> Rust doesn't have syntax to mark a struct field as being in a borrowed state. And we can't express the lifetime of y.

> Lets just extend the borrow checker and fix that!

> I don't know what the ideal syntax would be, but I'm sure we can come up with something.

This has never been a problem of syntax. It's a remarkably hard problem to make the borrow checker able to handle self-referential structures. We've had a couple of iterations of the borrow checker, each of which made it capable of understanding more and more things. At this point, I think the experts in this area have ideas of how to make the borrow checker understand self-referential structures, but it's still going to take a substantial amount of effort.
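For readers unfamiliar with the problem: a struct holding a borrow into one of its own fields can't be expressed today, so code typically falls back to storing indices. A rough sketch with illustrative names:

```rust
// What people want to write, but can't:
//
//     struct Parsed {
//         source: String,
//         first_word: &'??? str, // no lifetime can refer to `source`
//     }
//
// The usual workaround stores positions into the owned data instead:
struct Parsed {
    source: String,
    first_word: std::ops::Range<usize>, // indices into `source`
}

impl Parsed {
    fn new(source: String) -> Self {
        let end = source.find(' ').unwrap_or(source.len());
        Parsed { source, first_word: 0..end }
    }

    // Reconstruct the borrow on demand, tied to &self.
    fn first_word(&self) -> &str {
        &self.source[self.first_word.clone()]
    }
}

fn main() {
    let p = Parsed::new("hello world".to_string());
    assert_eq!(p.first_word(), "hello");
}
```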

> This syntax could also be adapted to support partial borrows

We've known how to do partial borrows for quite a while, and we already support partial borrows in closure captures. The main blocker for supporting partial borrows in public APIs has been how to expose that to the type system in a forwards-compatible way that supports maintaining stable semantic versioning:

If you have a struct with private fields, how can you say "this method and that method can borrow from the struct at the same time" without exposing details that might break if you add a new private field?

Right now, leading candidates include some idea of named "borrow groups", so that you can define your own subsets of your struct without exposing what private fields those correspond to, and so that you can change the fields as long as you don't change which combinations of methods can hold borrows at the same time.
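A small sketch of the limitation under discussion, with hypothetical names; today the workaround is to borrow fields directly rather than through methods:

```rust
struct Player {
    health: u32,
    inventory: Vec<String>,
}

impl Player {
    // Today a &mut self method borrows *all* of self, so two such
    // methods can't hold borrows at the same time, even though they
    // touch disjoint fields.
    fn health_mut(&mut self) -> &mut u32 { &mut self.health }
    fn inventory_mut(&mut self) -> &mut Vec<String> { &mut self.inventory }
}

fn main() {
    let mut p = Player { health: 100, inventory: vec![] };

    // Rejected by the borrow checker:
    // let h = p.health_mut();
    // p.inventory_mut().push("sword".into()); // ERROR: second &mut borrow
    // *h -= 10;

    // Workaround: borrow the fields directly, which the checker *can*
    // see are disjoint - but this only works with visible fields,
    // which is exactly the semver problem described above.
    let h = &mut p.health;
    p.inventory.push("sword".into());
    *h -= 10;

    assert_eq!(p.health, 90);
    assert_eq!(p.inventory.len(), 1);
}
```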

> Comptime

We're actively working on this in many different ways. It's not trivial, but there are many things we can and will do better here.

I recently wrote two RFCs in this area, to make macro_rules more powerful so you don't need proc macros as often.

And we're already talking about how to go even further and do more programmatic parsing using something closer to Rust constant evaluation. That's a very hard problem, though, particularly if you want the same flexibility of macro_rules that lets you write a macro and use it in the same crate. (Proc macros, by contrast, require you to write a separate crate, for a variety of reasons.)

> impl<T: Copy> for Range<T>.

This is already in progress. This is tied to a backwards-incompatible change to the range types, so it can only occur over an edition. (It would be possible to do it without that, but having Range implement both Iterator and Copy leads to some easy programming mistakes.)
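To see why `Iterator + Copy` is a footgun, here's a sketch with a made-up `Counter` type standing in for `Range`:

```rust
// An iterator that is also Copy: each implicit copy restarts
// iteration, silently discarding progress.
#[derive(Clone, Copy)]
struct Counter(u32);

impl Iterator for Counter {
    type Item = u32;
    fn next(&mut self) -> Option<u32> {
        self.0 += 1;
        if self.0 <= 3 { Some(self.0) } else { None }
    }
}

fn take_first(mut it: impl Iterator<Item = u32>) -> Option<u32> {
    it.next()
}

fn main() {
    let c = Counter(0);
    // `c` is Copy, so each call gets a *copy*; the original never advances.
    assert_eq!(take_first(c), Some(1));
    assert_eq!(take_first(c), Some(1)); // surprise: not Some(2)
    // This is why Range deliberately isn't Copy today, and why
    // changing that is tied to an edition.
}
```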

> Make if-let expressions support logical AND

We have an unstable feature for this already, and we're close to stabilizing it. We need to settle which one or both of two related features we want to ship, but otherwise, this is ready to go.
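On stable Rust today, the usual workarounds look something like this (`describe` is a made-up example):

```rust
// Workarounds for the desired `if let Some(x) = opt && cond` syntax,
// pending stabilization of let-chains.
fn describe(opt: Option<i32>, flag: bool) -> &'static str {
    // Nesting works, at the cost of rightward drift:
    if let Some(x) = opt {
        if flag && x > 0 {
            return "positive and flagged";
        }
    }
    // `matches!` with a guard covers the boolean-only case:
    if matches!(opt, Some(x) if x > 0) {
        return "positive";
    }
    "other"
}

fn main() {
    assert_eq!(describe(Some(1), true), "positive and flagged");
    assert_eq!(describe(Some(1), false), "positive");
    assert_eq!(describe(None, true), "other");
}
```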

> But if I have a pointer, rust insists that I write (*myptr).x or, worse: (*(*myptr).p).y.

We've had multiple syntax proposals to improve this, including a postfix dereference operator and an operator to navigate from "pointer to struct" to "pointer to field of that struct". We don't currently have someone championing one of those proposals, but many of us are fairly enthusiastic about seeing one of them happen.

That said, there's also a danger of spending too much language weirdness budget here to buy more ergonomics, versus having people continue using the less ergonomic but more straightforward raw-pointer syntaxes we currently have. It's an open question whether adding more language surface area here would on balance be a win or a loss.
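For context, here's the syntax being discussed in runnable form (the types are illustrative):

```rust
struct Inner { y: i32 }
struct Outer { x: i32, p: *const Inner }

fn main() {
    let inner = Inner { y: 7 };
    let outer = Outer { x: 1, p: &inner };
    let myptr: *const Outer = &outer;

    // Safety: both raw pointers point at live locals created above.
    // These are the nested-parentheses forms the thread complains about.
    let (x, y) = unsafe { ((*myptr).x, (*(*myptr).p).y) };
    assert_eq!((x, y), (1, 7));

    // A postfix dereference operator (no such syntax is stable today)
    // would remove the nesting, e.g. something like `myptr.deref.x`.
}
```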

> Unfortunately, most of these changes would be incompatible with existing rust.

One of the wonderful things about Rust editions is that there's very little we can't change, if we have a sufficiently compelling design that people will want to adopt over an edition.

> The rust "unstable book" lists 700 different unstable features - which presumably are all implemented, but which have yet to be enabled in stable rust.

This is absolutely an issue; one of the big open projects we need to work on is going through all the existing unstable features and removing many that aren't likely to ever reach stabilization (typically either because nobody is working on them anymore or because they've been superseded).

xgb84j · a year ago
What you describe is how development of basic packages that are part of, or on the level of, the standard library should be done. The languages we are currently using will still be used decades from now. Slow, good decisions now save much more time later on.
agersant · a year ago
Thanks for taking the time to write this reply. Happy to hear a lot of this is in motion!
epage · a year ago
> Sandboxing against malicious crates is an out-of-scope problem. You can't do this at the language level; you need some combination of a verifier and runtime sandbox. WebAssembly components are a much more likely solution here. But there's lots of interest in having capabilities for other reasons, for things like "what allocator should I use" or "what async runtime should I use" or "can I assume the platform is 64-bit" or similar. And we do want sandboxing of things like proc macros, not because of malice but to allow accurate caching that knows everything the proc macro depends on - with a sandbox, you know (for instance) exactly what files the proc macro read, so you can avoid re-running it if those files haven't changed.

We've had a lot of talk about sandboxing of proc-macros and build scripts. Of course, more declarative macros, delegating `-sys` crate logic to a shared library, and `cfg(version)` / `cfg(accessible)` will remove a lot of the need for user versions of these. However, that all ignores runtime. The more I think about it, the more cackle's "ACLs" [0] seem like the way to go as a way for extensible tracking of operations and auditing their use in your dependency tree, whether through a proc-macro, a build script, or runtime code.

I heard that `cargo-redpen` is developing into a tool to audit calls, though I'm imagining something higher level, like cackle.

[0]: https://github.com/cackle-rs/cackle

josephg · a year ago
Author here. Thanks for the in depth response. I appreciate hearing an insider's perspective.

> I always find it amusing to see, simultaneously, people complaining that the language isn't moving fast enough and other people complaining that the language is moving too fast.

I think people complain that Rust is a big language, and they don't want it to be bigger. But keeping the current half-baked async implementation doesn't make the language smaller or simpler. It just makes the language worse.

> The main blocker for supporting partial borrows in public APIs has been how to expose that to the type system in a forwards-compatible way that supports maintaining stable semantic versioning

I'd love it if this feature shipped, even if it only worked (for now) within a single crate. I've never needed it in my crate's public API, but it comes up constantly while programming.
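For readers who haven't hit this: a minimal sketch of where the lack of partial borrows bites (types and names are made up for illustration). A method that only touches one field still borrows all of `self`, so the caller can't use any other field while the result is alive:

```rust
struct Editor {
    buffer: Vec<String>,
    cursor: usize,
}

impl Editor {
    // Borrows *all* of `self` mutably, even though it only uses `buffer`.
    fn current_line_mut(&mut self) -> &mut String {
        &mut self.buffer[self.cursor]
    }
}

fn main() {
    let mut ed = Editor { buffer: vec![String::from("hi")], cursor: 0 };

    // Rejected today: `line` keeps all of `ed` mutably borrowed, so even
    // reading the unrelated `cursor` field is an error while it lives.
    // let line = ed.current_line_mut();
    // let c = ed.cursor; // error: cannot use `ed.cursor`, `ed` is borrowed
    // line.push('!');

    // The manual workaround: borrow the fields directly, so the compiler
    // can see the borrows are disjoint.
    let cursor = ed.cursor;
    let line = &mut ed.buffer[cursor];
    line.push('!');
    assert_eq!(ed.buffer[0], "hi!");
}
```

Partial borrows would let the method version compile without the caller having to inline the field accesses by hand.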

> Sandboxing against malicious crates is an out-of-scope problem. You can't do this at the language level; you need some combination of a verifier and runtime sandbox.

Why not?

If I call a function that contains no unsafe 3rd-party code in its call tree, and which doesn't issue any syscalls, that function can only access and interact with its parameters, local variables, and in-scope globals. Am I missing something? Because that already looks like a sandbox, of sorts, to me.

Is there any reason we couldn't harden the walls of that sandbox and make it usable as a security boundary? Most crates in my dependency tree are small, and made entirely of safe code. And the functions in those libraries I call don't issue any syscalls already anyway. Seems to me like adding some compile-time checks to enforce that going forward would be easy. And it would dramatically reduce the supply chain security risk.
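A toy sketch of the intuition (the crate and function are invented for illustration): a dependency written entirely in safe Rust, with no `unsafe` and no I/O calls, can only compute over what you pass it.

```rust
// Stand-in for a small third-party crate: safe code, no syscalls.
// Its entire observable behavior is a function of its inputs.
mod tiny_dep {
    pub fn checksum(data: &[u8]) -> u32 {
        data.iter()
            .fold(0u32, |acc, &b| acc.wrapping_mul(31).wrapping_add(b as u32))
    }
}

fn main() {
    // Nothing in `checksum` can open files, talk to the network, or
    // reach ambient state we didn't hand it. The open question in the
    // thread is whether the compiler could *enforce* this property
    // per-crate as a hard security boundary, not just an observation.
    assert_eq!(tiny_dep::checksum(b""), 0);
    let a = tiny_dep::checksum(b"hello");
    let b = tiny_dep::checksum(b"hello");
    assert_eq!(a, b); // deterministic: depends only on its argument
}
```

The counterargument (from lifthrasiir below) is that "can't trap, can't escape" is harder to prove than it looks once implementation details like the stack enter the picture.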

Mind explaining your disagreement a little more? It seems like a clear win to me.

dgroshev · a year ago
> But keeping the current half-baked async implementation doesn't make the language smaller or simpler. It just makes the language worse.

I can't disagree more.

In fact, I think that the current state of async Rust is the best implementation of async in any language.

To get Pin stuff out of the way: it is indeed more complicated than it could be (because of backward compatibility, etc.), but when was the last time you needed to write a poll implementation manually? Between runtimes (tokio/embassy) and utility crates, there is very little need to write raw futures. Combinators, tasks, and channels are more than enough for the overwhelming majority of problems, and even in their current state they give us more power than the Python or JS ecosystems.

But then there's everything else.

Async Rust is correct and well-defined. The way cancellation, concurrent awaiting, and exceptions work in languages like JS and Python is incredibly messy (e.g. [1]), and very few people even think about that. Rust, in its typical fashion, frontloads this complexity, which leads to more people thinking and talking about it. That's a good thing.

Async Rust is clearly separated from sync Rust (probably an extension of the previous point). This is good because it lets us reason about IO and write code that won't be preempted in an observable way, unlike in Go or Erlang. For example, in a sync function we can stuff things into thread-locals and be sure they won't leak into another future.
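To illustrate the point (names invented for the example): a sync function has no `.await` points, so no other future can run on the thread mid-function and observe or clobber a thread-local it is using.

```rust
use std::cell::Cell;

thread_local! {
    // Per-thread scratch state, e.g. a request ID for logging.
    static REQUEST_ID: Cell<u64> = Cell::new(0);
}

// Sync function: no `.await`, so nothing else can be interleaved on
// this thread between the `set` and the `get`.
fn with_request_id(id: u64) -> u64 {
    REQUEST_ID.with(|r| r.set(id));
    // ... do work that reads the thread-local ...
    REQUEST_ID.with(|r| r.get())
}

fn main() {
    assert_eq!(with_request_id(42), 42);
}
```

In an async function the same pattern would be unsound in spirit: after an `.await`, the task may resume on a different thread (under a work-stealing runtime), or another task may have overwritten the thread-local in the meantime.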

Async Rust has already enabled incredibly performant systems. Cloudflare's Pingora runs on Tokio, processing a large fraction of internet traffic while being much safer and better defined than nginx-style async. The same abstractions work in Datadog's glommio, a completely different runtime architecture.

Async Rust made Embassy possible, a genuine breakthrough in embedded programming. Zero-overhead, safe, predictable async on microcontrollers is something that was almost impossible before, and was previously addressed only with much heavier and more complex RTOSes.

"Async Rust bad" feels like a meme at this point, a meme with not much behind it. Async Rust is already incredibly powerful and well-designed.

[1]: https://neopythonic.blogspot.com/2022/10/reasoning-about-asy...

lifthrasiir · a year ago
> Why not?

I believe you are proposing language-based security (langsec), which seemed very promising at first, but the current consensus is that it still has to be accompanied by other measures. One big reason is that virtually no practical language implementation is fully specified.

As an example, let's say that we only have fixed-size integer variables and simple functions with no other control constructs. Integers wrap around and division by zero yields zero, so no integer operation can trap. So it should be easy to check for infinite recursion and declare that the program can never trap otherwise, right? No! A large enough number of nested but otherwise distinct function calls would eventually overflow the stack and trap (or worse). But this notion of a "stack" is highly implementation-specific, so provable safety essentially requires that you have formalized all such implementation-specific notions in advance. Possible, but extremely difficult in practice.
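The same gap exists in safe Rust. A sketch (depths chosen arbitrarily for illustration): every call below is well-defined at the language level, containing no `unsafe` and no trapping integer operation, yet a large enough depth exhausts the implementation-defined stack and aborts the process.

```rust
// Purely safe, syscall-free recursion.
fn depth(n: u64) -> u64 {
    if n == 0 { 0 } else { 1 + depth(n - 1) }
}

fn main() {
    // Fine at modest depths...
    assert_eq!(depth(10_000), 10_000);

    // ...but something like depth(100_000_000) would overflow the
    // default stack and crash the process, even though no individual
    // operation in the language can trap. (Left commented out for
    // exactly that reason.)
    // let _ = depth(100_000_000);
}
```

A "safe code as sandbox" verifier would have to account for resource limits like this, which live in the implementation rather than the language.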

The "verifier and runtime sandbox" mentioned here is one solution to get around this difficulty. Instead of being able to understand the full language, the verifier is only able to understand a very reduced subset and the compiler is expected (but not guaranteed) to return something that would pass the verifier. A complex enough verifier would be able to guarantee that it is safe to execute even without a sandbox, but a verifier combined with a runtime sandbox is much simpler and more practical.

skavi · a year ago
You really should update your post wrt the Mutex changes.