newpavlov · 5 years ago
A bigger problem in my opinion is that Rust has chosen to follow the poll-based model (you can say that it was effectively designed around epoll), while the completion-based one (e.g. io-uring and IOCP) will with high probability be the way of doing async in the future (especially in light of Spectre and Meltdown).

Instead of carefully weighing the advantages and disadvantages of both models, the decision was effectively made on the ground of "we want to ship async support as soon as possible" [1]. Unfortunately, because of this rush, Rust got stuck with a poll-based model with a whole bunch of problems without a clear solution in sight (async drop, anyone?). And instead of a proper solution for self-referential structs (yes, a really hard problem), we ended up with the hack-ish Pin solution, which has already caused a number of problems since stabilization and now may block enabling of noalias by default [2].

Many believe that the Rust async story was unnecessarily rushed. While it may have helped to increase Rust adoption in the mid term, I believe it will cause serious issues in the longer term.

[1]: https://github.com/rust-lang/rust/issues/62149#issuecomment-... [2]: https://github.com/rust-lang/rust/issues/63818

zamalek · 5 years ago
> A bigger problem in my opinion is that Rust has chosen to follow the poll-based model

This is an inaccurate simplification that, admittedly, their own literature has perpetuated. Rust uses informed polling: the resource can wake the scheduler at any time and tell it to poll. When this occurs it is virtually identical to completion-based async (sans some small implementation details).

What informed polling brings to the picture is opportunistic sync: a scheduler may choose to poll before suspending a task. This helps when e.g. there is data already in IO buffers (there often is).
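The "informed polling" shape described above can be sketched with a hand-written leaf future and the standard Waker machinery. This is a toy illustration (the `Ready` future and `NoopWake` executor are invented for the example, not any real runtime's code): the future registers a waker instead of being busy-polled, and the "resource" later uses that waker to tell the scheduler to poll again.

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Wake, Waker};

// Shared state: (resource ready?, registered waker).
struct Ready {
    state: Arc<Mutex<(bool, Option<Waker>)>>,
}

impl Future for Ready {
    type Output = u32;
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<u32> {
        let mut st = self.state.lock().unwrap();
        if st.0 {
            Poll::Ready(42)
        } else {
            // Not ready: register the waker so the resource can
            // inform the scheduler instead of the scheduler spinning.
            st.1 = Some(cx.waker().clone());
            Poll::Pending
        }
    }
}

// Trivial waker: a real scheduler would re-enqueue the task here.
struct NoopWake;
impl Wake for NoopWake {
    fn wake(self: Arc<Self>) {}
}

fn main() {
    let state = Arc::new(Mutex::new((false, None)));
    let mut fut = Box::pin(Ready { state: state.clone() });
    let waker = Waker::from(Arc::new(NoopWake));
    let mut cx = Context::from_waker(&waker);

    // First poll: the future registers its waker and suspends.
    assert!(matches!(fut.as_mut().poll(&mut cx), Poll::Pending));

    // Simulate the resource completing and waking the task.
    {
        let mut st = state.lock().unwrap();
        st.0 = true;
        if let Some(w) = st.1.take() {
            w.wake();
        }
    }

    // The scheduler re-polls only because it was informed.
    assert_eq!(fut.as_mut().poll(&mut cx), Poll::Ready(42));
    println!("done");
}
```

The "opportunistic sync" point corresponds to the scheduler calling `poll` once eagerly before parking the task: if the data is already buffered, the first poll returns `Ready` and no wakeup round-trip is needed.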

There's also some fancy stuff you can do with informed polling, that you can't with completion (such as stateless informed polling).

Everything else I agree with, especially Pin, but informed polling is really elegant.

pas · 5 years ago
Could you explain what is informed and stateless informed polling? I haven't really found anything on the web. Thanks!
amelius · 5 years ago
> the resource can wake the scheduler at any time and tell it to poll

Isn't that called interrupting?

The terminology seems a little off here, but perhaps that is only my perception.

comex · 5 years ago
> Instead of carefully weighing advantages and disadvantages of both models, the decision was effectively made on the ground of "we want to ship async support as soon as possible" [1].

That is not an accurate summary of that comment. withoutboats may have been complaining about someone trying to revisit the decision made in 2015-2016, but as the comment itself points out, there were good reasons for that decision.

Mainly two reasons, as far as I know.

First, Rust prefers unique ownership and acyclic data structures. You can make cyclic structures work if you use RefCell and Rc and Weak, but you're giving up the static guarantees that the borrow checker gives you in favor of a bunch of dynamic checks for 'is this in use' and 'is this still alive', which are easy to get wrong. But a completion-based model essentially requires a cyclic data structure: a parent future creates a child future (and can then cancel it), which then calls back to the parent future when it's complete. You might be able to minimize cyclicity by having the child own the parent and treating cancellation as a special case, but then you lose uniqueness if one parent has multiple children.

(Actually, even the polling model has a bit of cyclicity with Wakers, but it's kept to an absolute minimum.)
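The cyclic parent/child shape described above can be made concrete with a toy `Rc`/`RefCell`/`Weak` sketch (the `Parent`/`Child` types are invented for illustration, not any real futures API): the child holds a `Weak` back-edge to its parent and the compile-time lifetime guarantee is traded for runtime `upgrade()` checks.

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Parent owns its children; children call back into the parent on
// completion via a Weak back-edge (to avoid a reference-count leak).
struct Parent {
    completed: RefCell<u32>,
    children: RefCell<Vec<Rc<Child>>>,
}

struct Child {
    parent: Weak<Parent>, // dynamic "is this still alive" check
}

impl Child {
    fn complete(&self) {
        // upgrade() can fail at runtime; the borrow checker no longer
        // guarantees that the parent outlives the callback.
        if let Some(p) = self.parent.upgrade() {
            *p.completed.borrow_mut() += 1;
        }
    }
}

fn main() {
    let parent = Rc::new(Parent {
        completed: RefCell::new(0),
        children: RefCell::new(Vec::new()),
    });
    let child = Rc::new(Child {
        parent: Rc::downgrade(&parent),
    });
    parent.children.borrow_mut().push(child.clone());

    child.complete();
    assert_eq!(*parent.completed.borrow(), 1);

    // "Cancellation" becomes dropping the parent and hoping nothing
    // races: the child's upgrade() now fails silently.
    parent.children.borrow_mut().clear();
    drop(parent);
    child.complete();
}
```

Every static guarantee here has become a dynamic one (`borrow_mut` can panic, `upgrade` can return `None`), which is the trade-off the comment is pointing at.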

Second, a completion-based model makes it hard to avoid giving each future its own dynamic allocation, whereas Rust likes to minimize dynamic allocations. (It also requires indirect calls, which is a micro-inefficiency, although I'm not convinced that matters very much; current Rust futures have some significant micro-inefficiencies of their own.) The 2016 blog post linked in the comment goes into more detail about this.

As you might guess, I find those reasons compelling, and I think a polling-based model would still be the right choice even if Rust's async model was being redesigned from scratch today. Edit: Though to be fair, the YouTube video linked from withoutboats' comment does mention that mio decided on polling simply because that's what worked best on Linux at the time (pre-io_uring), and that had some influence on how Futures ended up. But only some.

…That said, I do agree Pin was rushed and has serious problems.

newpavlov · 5 years ago
>That is not an accurate summary of that comment.

How is it so, when he explicitly writes:

> Suggestions that we should revisit our underlying futures model are suggestions that we should revert back to the state we were in 3 or 4 years ago, and start over from that point. <..> Trying to provide answers to these questions would be off-topic for this thread; the point is that answering them, and proving the answers correct, is work. What amounts to a solid decade of labor-years between the different contributors so far would have to be redone again.

How should I read that other than as "we did the work on the poll-based model, so we don't want the results to go down the drain if the completion-based model turns out to be superior"?

I don't agree with your assertion regarding cyclic structures and the need for dynamic allocations in the completion-based model. Both models result in approximately the same cyclicity of task states, which is no wonder, since task states are effectively size-bounded stacks. In both models you have more or less the same finite state machines. The only difference is in how those FSMs interact with the runtime, and in the fact that in the completion-based model you usually pass ownership of part of the task state to the runtime during task suspension. So you cannot simply drop a task if you no longer need its results; you have to explicitly request its cancellation from the runtime.

yxhuvud · 5 years ago
> First, Rust prefers unique ownership and acyclic data structures. You can make cyclic structures work if you use RefCell and Rc and Weak, but you're giving up the static guarantees that the borrow checker gives you in favor of a bunch of dynamic checks for 'is this in use' and 'is this still alive',

One way to get around that is, instead of the very structureless free-form async approach, to impose restrictions on the lifetimes of async entities. For example, if you use the ideas of the structured concurrency movement ([x], for example, which has since been picked up by Kotlin, Swift, and other projects), then the parent is guaranteed to live longer than any child, solving most of the problem that way.

[x] https://vorpus.org/blog/notes-on-structured-concurrency-or-g...

the_duke · 5 years ago
I agree that Rust async is currently in a somewhat awkward state.

Don't get me wrong, it's usable and many projects use it to great effect.

But there are a few important features like async trait methods (blocked by HKT), async closures, async drop, and (potentially) existential types, that seem to linger. The unresolved problems around Pin are the most worrying aspect.

The ecosystem is somewhat fractured, partially due to a lack of commonly agreed abstractions, partially due to language limitations.

There also sadly seems to be a lack of leadership and drive to push things forward.

I'm ambivalent about the rushing aspect. Yes, async was pushed out the door. Partially due to heavy pressure from Google/Fuchsia and a large part of the userbase eagerly .awaiting stabilization.

Without stabilizing when they did, we very well might still not have async on stable for years to come. At some point you have to ship, and the benefits for the ecosystem can not be denied. It remains to be seen if the design is boxed into a suboptimal corner; I'm cautiously optimistic.

But what I disagree with is that polling was a mistake. It is what distinguishes Rusts implementation, and provides significant benefits. A completion model would require a heavier, standardized runtime and associated inefficiencies like extra allocations and indirection, and prevent efficiencies that emerge with polling. Being able to just locally poll futures without handing them off to a runtime, or cheaply dropping them, are big benefits.

Completion is the right choice for languages with a heavy runtime. But I don't see how having Rust dictate completion would make io_uring wrapping more efficient than implementing the same patterns in libraries.

UX and convenience are a different topic. Rust async will never be as easy to use as Go, or async in languages like Javascript/C#. To me the whole point of Rust is providing abstractions that are as high-level and safe as possible, without constraining the ability to achieve maximum efficiency. (How well that goal is achieved, or hindered by certain design patterns that are more or less dictated by the language design, is debatable, though.)

newpavlov · 5 years ago
>A completion model would require a heavier, standardized runtime and associated inefficiencies like extra allocations and indirection, and prevent efficiencies that emerge with polling.

You are not the first person who uses such arguments, but I don't see why they would be true. In my understanding both models would use approximately the same FSMs, which would simply interact differently with the runtime (i.e. instead of registering a waker, you would register an operation on a buffer which is part of the task state). Maybe I am missing something, so please correct me if I am wrong in a reply to this comment: https://news.ycombinator.com/item?id=26407824

ash · 5 years ago
Is there a good explanation on the difference between polling model and completion model? (not Rust-specific)
devwastaken · 5 years ago
Correct me if I'm wrong, but isn't any sort of async support non-integral to Rust? For example, in something like Javascript you can't implement your own async. But in C, C++, or Rust you can do pretty much anything you want.

So if in the future io-uring and friends become the standard can't that just be a library you could then use?

Similar to how in C you don't need the standard library to do threads or async.

wahern · 5 years ago
I agree. Completion-based APIs are more high level, and not a good abstraction at the systems language level. IOCP and io_uring use poll-based interfaces internally. In io_uring's case, the interfaces are basically the same ones available in user space. In Windows case IOCP uses interfaces that are private, but some projects have figured out the details well enough to implement decent epoll and kqueue compatibility libraries.

Application developers of course want much higher level interfaces. They don't want to do a series of reads; they want "fetch_url". But if "fetch_url" is the lowest-level API available, good luck implementing an efficient streaming media server. (Sometimes we end up with things like HTTP Live Streaming, a horrendously inefficient protocol designed for ease of use in programming environments, client- and server-side, that effectively only offer the equivalent of "fetch_url".)

Plus, IOCP models tend to heavily rely on callbacks and closures. And as demonstrated in the article, low-level languages suck at providing ergonomic first-class functions, especially if they lack GC. (It's a stretch to say that Rust even supports first-class functions.) If I were writing an asynchronous library in Rust, I'd do it the same way I'd do it in C--a low-level core that is non-blocking and stateful. For example, you repeatedly invoke something like "url_fetch_event", which returns a series of events (method, header, body chunk) or EAGAIN/EWOULDBLOCK. (It may not even pull from a socket directly, but rely on application to write source data into an internal buffer.) Then you can wrap that low-level core in progressively higher-level APIs, including alternative APIs suited to different async event models, as well as fully blocking interfaces. And if a high-level API isn't to some application developer's liking, they can create their own API around the low-level core API. This also permits easier cross-language integration. You can easily use such a low-level core API for bindings to Python, Lua, or even Go, including plugging into whatever event systems they offer, without losing functional utility.

It's the same principle with OS and systems language interfaces--you provide mechanisms that can be built upon. But so many Rust developers come from high-level application environments, including scripting language environments, where this composition discipline is less common and less relevant.

newpavlov · 5 years ago
Yes, you can encode state machines manually, but it will be FAR less ergonomic than the async syntax. Rust started with a library-based approach, but it was... not great. Async code was littered with and_then methods and it came really close to the infamous JS callback hell. The ergonomic improvement which async/await brings is essentially the raison d'être for incorporating this functionality into the language.
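The nesting problem with combinator chains can be shown by analogy with `Result` (this is an illustration, not futures 0.1 itself): `and_then` threads state through closures, while `?` plays the role `.await` plays for futures, letting the same logic read top to bottom with intermediate state in plain locals.

```rust
// Hypothetical helper for the example: parse a string into an i32.
fn parse(s: &str) -> Result<i32, String> {
    s.trim().parse().map_err(|e| format!("{}", e))
}

// Combinator style: each step is a closure, and any state needed by
// a later step must be captured and passed along by hand.
fn combinator_style(input: &str) -> Result<i32, String> {
    parse(input)
        .and_then(|n| if n >= 0 { Ok(n) } else { Err("negative".into()) })
        .and_then(|n| Ok(n * 2))
}

// Straight-line style: `?` early-returns on error, the way `.await`
// suspends on pending, so control flow reads top to bottom.
fn straight_line_style(input: &str) -> Result<i32, String> {
    let n = parse(input)?;
    if n < 0 {
        return Err("negative".into());
    }
    Ok(n * 2)
}

fn main() {
    assert_eq!(combinator_style("21"), Ok(42));
    assert_eq!(straight_line_style("21"), Ok(42));
    assert!(combinator_style("-1").is_err());
}
```

With futures 0.1 the closures additionally had to satisfy `'static` and ownership requirements, which is what made real-world `and_then` chains so much worse than this toy version.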
junon · 5 years ago
FWIW I've done a fair bit of research with io_uring. For file operations it's fast, but over epoll the speedups are negligible. The creator is a good guy, but they've had issues with the performance numbers being skewed by various deficiencies in the benchmark code, such as skipping error checks in the past.

Also, io_uring can certainly be used via polling. Once the shared rings are set up, no syscalls are necessary afterward.

ben0x539 · 5 years ago
We've briefly been playing with io_uring (in async Rust) for a network service that is CPU-bound and seems to be bottlenecked on context switches. In a very synthetic comparison, the io_uring version seemed very promising (as in "it may be worth rewriting a production service targeting an experimental IO setup"). We ran out of the allocated time before we got to something closer to a real-world benchmark, but I'm fairly optimistic that even for non-file operations there are real performance gains in io_uring for us.

I'm not sure io_uring polling counts as polling since you're really just polling for completions, you still have all the completion-based-IO things like the in-flight operations essentially owning their buffers.

withoutboats2 · 5 years ago
This post is completely and totally wrong. At least you got to ruin my day, I hope that's a consolation prize for you.

There is NO meaningful connection between the completion vs polling futures model and the epoll vs io-uring IO models. comex's comments regarding this fact are mostly accurate. The polling model that Rust chose is the only approach that has been able to achieve single allocation state machines in Rust. It was 100% the right choice.

After designing async/await, I went on to investigate io-uring and how it would be integrated into Rust's system. I have a whole blog series about it on my website: https://without.boats/tags/io-uring/. I assure you, the problems it presents are not related to Rust's polling model AT ALL. They arise from the limits of Rust's borrow system to describe dynamic loans across the syscall boundary (i.e. that it cannot describe this). A completion model would not have made it possible to pass a lifetime-bound reference into the kernel and guarantee no aliasing. But all of them have fine solutions building on work that already exists.

Pin is not a hack any more than Box is. It is the only way to fit the desired ownership expression into the language that already exists, squaring these requirements with other desirable primitives we had already committed to: shared-ownership pointers, mem::swap, etc. It is simply FUD - frankly, a lie - to say that it will block "noalias"; following that link shows Niko and Ralf having a fruitful discussion about how to incorporate self-referential types into our aliasing model. We were aware of this wrinkle before we stabilized Pin, I had conversations with Ralf about it; it's just that now that we want to support self-referential types in some cases, we need to do more work to incorporate it into our memory model. None of this is unusual.

And none of this was rushed. Ignoring the long prehistory, a period of 3 and a half years stands between the development of futures 0.1 and the async/await release. The feature went through a grueling public design process that burned out everyone involved, including me. It's not finished yet, but we have an MVP that, contrary to this blog post, does work just fine, in production, at a great many companies you care about. Moreover, getting a usable async/await MVP was absolutely essential to getting Rust the escape velocity to survive the ejection from Mozilla - every other funder of the Rust Foundation finds async/await core to their adoption of Rust, as does every company that is now employing teams to work on Rust.

Async/await was, both technically and strategically, as well executed as possible under the circumstances of Rust when I took on the project in December 2017. I have no regrets about how it turned out.

Everyone who reads Hacker News should understand that the content you're consuming is usually from one of these kinds of people: a) dilettantes, who don't have a deep understanding of the technology; b) cranks, who have some axe to grind regarding the technology; c) evangelists, who are here to promote some other technology. The people who actually drive the technologies that shape our industry don't usually have the time and energy to post on these kinds of things, unless they get so angry about how their work is being discussed, as I am here.

francoisp · 5 years ago
Thank you for this post. I have been interested in rust because of matrix, and although I found it a bit more intimidating than go to toy with, I was inclined to try it on a real project over go because it felt like the closest to the hardware while not having the memory risks of C. The co-routines/async was/is the most daunting aspect of Rust, and a post with a sensational title like the grand-parent could have swayed me the other way.

As an aside, it would be great to have some sort of federated cred (meritocratic in some way) on Hacker News, instead of a flat democratic populist point system; it would lower the potential eternal September effect.

I would love to see a personal meta-pointing system; it could be on a wrapping site: if I downvote a "waste of hackers' daytime" article (say, a long-form article about what is life) in my "daytime" profile, I'd get a feed weighted by other users who also downvoted that item, basically using peers that vote like you as a pre-filter. I could have multiple filters, one for a quick daytime hacker scan and one for leisure factoids. One could even meta-meta-vote and give some other hackers' handles a heavier weight...

matdehaast · 5 years ago
To hopefully make your day better.

I for one, amongst many people I am sure, am deeply grateful for the work of you and your peers in getting this out!

kristoff_it · 5 years ago
For what it's worth, I agree 100% with the premise of withoutboats' post, based on the experience of having worked a little on Zig's event loop.

My recommendation to people that don't see how ridiculous the original post was, is to write more code and look more into things.

newpavlov · 5 years ago
Please, calm down. I do appreciate your work on Rust, but people do make mistakes, and I strongly believe that in the long term the async stabilization was one of them. It's debatable whether async was essential or not for Rust; I agree it gave Rust a noticeable boost in popularity, but personally I don't think it was worth the long-term cost. I do not intend to change your opinion, but I will keep mine and reserve the right to speak about this opinion publicly.

In this thread [1] we have a more technical discussion about those models; I suggest continuing in that thread.

>I assure you, the problems it present are not related to Rust's polling model AT ALL.

I do not agree about all problems, but my OP was indeed worded somewhat poorly, as I've admitted here [2].

>Pin is not a hack any more than Box is. It is the only way to fit the desired ownership expression into the language that already exists

I can agree that it was the easiest solution, but I strongly disagree that it was the only one. And frankly, it's quite disheartening to hear such absolutist statements from a tech leader.

>It is simply FUD - frankly, a lie - to say that it will block "noalias,

Where did I say "will"? I think you will agree that it will at the very least cause a delay. Also, the issue shows that Pin was not properly thought out, especially in the light of the other safety issues it has caused. And as you can see from other comments, I am not the only one who thinks so.

>the content your consuming is usually from one of these kinds of people:

Please, satisfy my curiosity. To which category do I belong in your opinion?

[1]: https://news.ycombinator.com/item?id=26410359

[2]: https://news.ycombinator.com/item?id=26407565

hu3 · 5 years ago
Is it just me, or are you supporting your parent's point of:

> ...the decision was effectively made on the ground of "we want to ship async support as soon as possible" [1].

When you write:

> Moreover, getting a usable async/await MVP was absolutely essential to getting Rust the escape velocity to survive the ejection from Mozilla...

This whole situation saddens me. I wish Mozilla could have given you guys more breathing room to work on such critical parts. Regardless, thank you for your dedication.

dataangel · 5 years ago
Please keep going, Rust is awesome and one of the few language projects trying to push the efficient frontier and not just rolling a new permutation of the trade-off dice.
api · 5 years ago
I've jumped on the Rust bandwagon as part of ZeroTier 2.0 (not rewriting its core, but rewriting some service stuff in Rust and considering the core eventually). I've used a bit of async and while it's not as easy as Go (nothing is!) it's pretty damn ingenious for language-native async in a systems programming language.

I personally would have just chickened out on language native async in Rust and told people to roll their own async with promise patterns or something.

Ownership semantics are hairy in Rust and require some forethought, but that's also true in C and C++ and in those languages if you get it wrong there you just blow your foot off. Rust instead tells you that the footgun is dangerously close to going off and more or less prohibits you from doing really dangerous things.

My opinion on Rust async is that it its warts are as much the fault of libraries as they are of the language itself. Async libraries are overly clever, falling into the trap of favoring code brevity over code clarity. I would rather have them force me to write just a little more boilerplate but have a clearer idea of what's going on than to rely on magic voodoo closure tricks like:

https://github.com/hyperium/hyper/issues/2446

Compare that (which was the result of hours of hacking) to their example:

https://hyper.rs/guides/server/hello-world/

WUT? I'm still not totally 100% sure why mine works and theirs works, and I don't blame Rust. I'd rather have seen this interface (in hyper) implemented with traits and interfaces. Yes it would force me to write something like a "factory," but I would have spent 30 minutes doing that instead of three hours figuring out how the fuck make_service_fn() and service_fn() are supposed to be used and how to get a f'ing Arc<> in there. It would also result in code that someone else could load up and easily understand what the hell it was doing without a half page of comments.

The rest of the Rust code in ZT 2.0 is much clearer than this. It only gets ugly when I have to interface with hyper. Tokio itself is even a lot better.

Oh, and Arc<> gets around a lot of issues in Rust. It's not as zero-cost as Rc<> and Box<> and friends but the cost is really low. While async workers are not threads, it can make things easier to treat them that way and use Arc<> with them (as long as you avoid cyclic structures). So if async ownership is really giving you headaches try chickening out and using Arc<>. It costs very very little CPU/RAM and if it saves you hours of coding it's worth it.
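The "chicken out and use Arc<>" pattern described above can be sketched with plain threads standing in for async workers (an illustration of the comment's advice, not any particular library's API): shared, acyclic state is cloned into each worker instead of fighting the borrow checker over lifetimes.

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

// Shared state for all workers; atomics keep it Sync without locks.
struct Shared {
    hits: AtomicU64,
}

fn main() {
    let shared = Arc::new(Shared { hits: AtomicU64::new(0) });

    let handles: Vec<_> = (0..4)
        .map(|_| {
            // Each worker gets its own handle: one refcount bump,
            // no lifetime negotiation with the borrow checker.
            let shared = Arc::clone(&shared);
            thread::spawn(move || {
                for _ in 0..1000 {
                    shared.hits.fetch_add(1, Ordering::Relaxed);
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(shared.hits.load(Ordering::Relaxed), 4000);
}
```

The same shape works with `tokio::spawn` or any other `'static`-task spawner, which is why `Arc` sidesteps so many async ownership headaches: the task owns a clone, so nothing has to be borrowed across an `.await`.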

Oh, and to remind people: this is a systems language designed to replace C/C++, not a higher level language, and I don't expect it to ever be as simple and productive as Go or as YOLO as JavaScript. I love Go too but it's not a systems language and it imposes costs and constraints that are really problematic when trying to write (in my case) a network virtualization service that's shooting (in v2.0) for tens of gigabits performance on big machines.

serverholic · 5 years ago
You are awesome. Thank you for clarifying these things.
folex · 5 years ago
Thank you for your tremendous work!
johnnycerberus · 5 years ago
In all this time, maestro Andrei Alexandrescu was right when he said Rust feels like it "skipped leg day" when it comes to concurrency and metaprogramming capabilities. Tim Sweeney was complaining about similar things, saying about Rust that it is one step forward, one step backward. These problems will be evident at a later time, when it will already be too late. I will continue experimenting with Rust, but Zig seems to have some great things going on, especially the colourless functions and the comptime thingy. Its safety story does not disappoint either, even if it is not at Rust's level of guarantees.
aw1621107 · 5 years ago
In case anyone else was interested in the original sources for the quotes:

> Andrei Alexandrescu was right when he said Rust feels like it "skipped leg day" when it comes to concurrency and metaprogramming capabilities.

https://archive.is/hbBte (the original answer appears to have been deleted [0])

> Tim Sweeney was complaining about similar things, saying about Rust that is one step forward, two steps backward.

https://twitter.com/timsweeneyepic/status/121381423618448588...

(He said "Kind of one step backward and one step forward", but close enough)

[0]: https://www.quora.com/Which-language-has-the-brightest-futur...

jedisct1 · 5 years ago
And Zap (scheduler for Zig) is already faster than Tokio.

Zig and other recent languages have been invented after Rust and Go, so they could learn from them, while Rust had to experiment a lot in order to combine async with borrow checking.

So, yes, the async situation in Rust is very awkward, and doing something beyond a Ping server is more complicated than it could be. But that’s what it takes to be a pioneer.

pcwalton · 5 years ago
D and Zig have dynamically typed generics (templates/"comptime thingy"), while Rust has statically typed generics. A lot of people confuse this for Rust having less powerful generics. It's simply a different approach: the dynamic vs. static types distinction, at the type level instead of the value level.
vlovich123 · 5 years ago
Since you clearly have expertise, I'm curious if you might provide some insight into what would roughly be different in an async completion-based model, and why that might be at fundamental odds with the event-based one? Like, is it an incompatibility with the runtime, or does it change the actual semantics of async/await in a fundamental way, to the point where you can't just swap out the runtime and reuse existing async code?
newpavlov · 5 years ago
It's certainly possible to paper over the difference between the models to a certain extent, but the resulting solution will not be zero-cost.

Yes, there is a fundamental difference between those models (otherwise we would not have two separate models).

In a poll-based model interactions between task and runtime look roughly like this:

- task to runtime: I want to read data on this file descriptor.

- runtime: FD is ready, I'll wake-up the task.

- task: great, FD is ready! I will read data from FD and then will process it.

While in a completion based model it looks roughly like this:

- task to runtime: I want data to be read into this buffer which is part of my state.

- runtime: the requested buffer is filled, I'll wake-up the task.

- task: great, the requested data is in the buffer! I can process it.

As you can see, the primary difference is that in the latter model the buffer becomes "owned" by the runtime/OS while the task is suspended. It means that you cannot simply drop a task if you no longer need its results, as Rust currently assumes you can. You have to either wait for the read request to complete or (possibly asynchronously) request its cancellation. With the current Rust async, if you want to integrate with io-uring, you have to use awkward runtime-managed buffers instead of simple buffers which are part of the task state.

Even outside of integration with io-uring/IOCP we have use-cases which require async Drop and we currently don't have a good solution for it. So I don't think that the decision to allow dropping tasks without an explicit cancellation was a good one, even despite the convenience which it brings.
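The ownership difference in the dialogue above can be sketched as two toy trait shapes plus a trivial in-memory "ring" (all names here are invented for illustration, not io-uring's or any runtime's actual API): the poll side only borrows the buffer, while the completion side takes it by value and returns it on completion.

```rust
// Poll model: the task keeps owning its buffer; the runtime borrows
// it only for the duration of a synchronous, non-blocking read.
trait PollRead {
    fn poll_read(&mut self, buf: &mut [u8]) -> Option<usize>; // None = pending
}

// Completion model: the task hands the buffer to the runtime by value
// and only gets it back on completion, so the task cannot simply be
// dropped while the operation is in flight.
trait CompletionRead {
    fn submit_read(&mut self, buf: Vec<u8>) -> u64; // returns an op id
    fn try_complete(&mut self, id: u64) -> Option<(Vec<u8>, usize)>;
}

// Trivial mock of the completion side, to make the hand-off concrete.
struct Mock {
    pending: Option<(u64, Vec<u8>)>,
}

impl CompletionRead for Mock {
    fn submit_read(&mut self, mut buf: Vec<u8>) -> u64 {
        // The "kernel" fills the buffer it now owns.
        buf[..5].copy_from_slice(b"hello");
        self.pending = Some((1, buf));
        1
    }
    fn try_complete(&mut self, id: u64) -> Option<(Vec<u8>, usize)> {
        match self.pending.take() {
            Some((pid, buf)) if pid == id => Some((buf, 5)),
            other => {
                self.pending = other;
                None
            }
        }
    }
}

fn main() {
    let mut ring = Mock { pending: None };
    let id = ring.submit_read(vec![0u8; 8]);
    // The buffer is now owned by the "runtime"; the task must wait for
    // completion (or request cancellation) before touching it again.
    let (buf, n) = ring.try_complete(id).unwrap();
    assert_eq!(&buf[..n], b"hello");
}
```

Note that `submit_read` consumes `Vec<u8>`: once submitted, there is no safe way for the caller to reclaim the buffer except through `try_complete`, which is exactly the property that makes "just drop the task" problematic in a completion model.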

stormbrew · 5 years ago
I'm also curious about this. Boats wrote some about rust async and io-uring a while ago that's interesting[1], but also points out a very clear path forward that's not actually outside the framework of rust's Future or async implementation: using interfaces that treat the kernel as the owner of the buffers being read into/out of, and that seems in line with my expectations of what should work for this.

But I haven't touched IOCP in nearly 20 years and haven't gotten into io-uring yet, so maybe I'm missing something.

Really the biggest problem might be that switching out backends is currently very difficult in rust, even the 0.x to 1.x jump of tokio is painful. Switching from Async[Reader|Writer] to AsyncBuf[Reader|Writer] might be even harder.

[1] https://boats.gitlab.io/blog/post/io-uring/

KingOfCoders · 5 years ago
The poll model has the advantage that you have control over when async work starts, and therefore is the more predictable model.

I guess that way it fits more the Rust philosophy.

eximius · 5 years ago
Could Rust switch? More importantly, would a completion based model alleviate the problems mentioned?
newpavlov · 5 years ago
Without introducing Rust 2? Highly unlikely.

I should have worded my message more carefully. The completion-based model is not a silver bullet which would magically solve all problems (though I think it would help a bit with the async Drop problem). The problem is that Rust async was rushed without careful deliberation, which caused a number of problems without a clear solution in sight.

Matthias247 · 5 years ago
I think there are 2 separate findings in it:

First of all, yes, Rust futures use a poll model, where state changes from different tasks don't directly call completions, but instead just schedule the original task to wake up again. I still think this is a good fit, and makes a lot of sense. It avoids a lot of errors from having a variety of state on the call stack before calling the continuation, which then gets invalidated. The model by itself also doesn't automatically make using completion-based IO impossible.

However, the polling model in Rust is combined with the model of always being able to drop a Future in order to cancel a task. This doesn't allow using lower-level libraries which can't support that, without applying additional workarounds.

However, that part of Rust's model could be enhanced if there is enough interest in it, e.g. [1] discusses a proposal for it.

[1] https://rust-lang.zulipchat.com/#narrow/stream/187312-wg-asy...

Deleted Comment

Deleted Comment

zelly · 5 years ago
Why did polling have to be baked into the language? Seems bizarre for a supposedly portable language to assume the functionality of an OS feature which could change in the future.

Meanwhile C and C++ can easily adopt any async system call style because it made no assumptions in the standards about how that would be done.

Rust also didn't solve the colored functions problem. Most people think that's an impossible problem to solve without a VM/runtime (like Java Loom), but people also thought garbage collection was impossible in a systems language until Rust solved it. It could have been a great opportunity for them.

moonchild · 5 years ago
> people also thought garbage collection was impossible in a systems language until Rust solved it

No, they didn't. Linear typing for systems languages had already been done in ATS, Cyclone, and Clean, the latter two of which were a major inspiration for Rust.

Venturing further into GC territory: long before Rust was even a twinkle in Graydon Hoare's eye, smart pointers were happening in C++, and Apple was experimenting with Objective-C for drivers.

kibwen · 5 years ago
> Meanwhile C and C++ can easily adopt any async system call style because it made no assumptions in the standards about how that would be done.

This is comparing apples to oranges; Rust's general, no-assumptions-baked-in coroutines feature is called "generators", and are not yet stable. It is this feature that is internally used to implement async/await. https://github.com/rust-lang/rust/issues/43122

pjmlp · 5 years ago
> but people also thought garbage collection was impossible in a systems language until Rust solved it?

What?

Several OSes have proven their value written in GC enabled systems programming languages.

They aren't as mainstream as they should due to UNIX cargo cult and anti-GC Luddites.

Rust only proved that affine types can be easier to use than in Cyclone and ATS.

newpavlov · 5 years ago
>Why did polling have to be baked into the language?

See this comment: https://news.ycombinator.com/item?id=26407440

>Meanwhile C and C++ can easily adopt any async system call style because it made no assumptions in the standards about how that would be done.

Do you know about co_await in C++20? AFAIK (I only have a very cursory knowledge of it, so I may be wrong) it also makes some trade-offs, e.g. it may require heap allocations for coroutine frames, while in Rust async tasks can live on the stack or in statically allocated regions of memory.

Also do not forget that Rust has to ensure memory safety at compile time, while C++ can be much more relaxed about it.

ilammy · 5 years ago
> people also thought garbage collection was impossible in a systems language until Rust solved it

Only if you understand "garbage collection" in the narrow sense of memory safety, no explicit free() calls, a relatively readable syntax for passing objects around, and an acceptable amount of unused memory. This comes with a non-negligible amount of fine print for Rust when compared to garbage-collected languages.

thombles · 5 years ago
I'm not totally sure what the author is asking for, apart from refcounting and heap allocations that happen behind your back. In my experience async Rust is heavily characterised by tasks (Futures) which own their data. They have to - when you spawn it, you're offloading ownership of its state to an executor that will keep it alive for some period of time that is out of the spawning code's control. That means all data/state brought in at spawn time must be enclosed by move closures and/or shared references backed by an owned type (Arc) rather than & or &mut.

If you want to, nothing is stopping you emulating a higher-level language - wrap all your data in Arc<_> or Arc<Mutex<_>> and store all your functions as trait objects like Box<dyn Fn(...)>. You pay for extra heap allocations and indirection, but avoid specifying generics that spread up your type hierarchy, and no longer need to play by the borrow checker's rules.

What Rust gives us is the option to _not_ pay all the costs I mentioned in the last paragraph, which is pretty cool if you're prepared to code for it.

ragnese · 5 years ago
I'm a giant Rust fanboy and have been since about 2016. So, for context, this was literally before Futures existed in Rust.

But, I only work on Rust code sporadically, so I definitely feel the pros and cons of it when I switch back and forth to/from Rust and other languages.

The problem, IMO, isn't about allocations or ownership.

In fact, I think that a lot of the complaints about async Rust aren't even about async Rust or Futures.

The article brings up the legitimate awkwardness of passing functions/closures around in Rust. But it's perfectly fair to say that idiomatic Rust is not a functional language, and passing functions around is just not the first tool to grab from your toolbelt.

I think the actual complaint is not about "async", but actually about traits. Traits are paradoxically one of Rust's best features and also a super leaky and incomplete abstraction.

Let's say you know a bit of Rust and you're kind of working through some problem. You write a couple of async functions with the fancy `async fn foo(x: &Bar) -> Foo` syntax. Now you want to abstract the implementation by wrapping those functions in a trait. So you try just copy+pasting the signature into the trait. The compiler complains that async trait methods aren't allowed.

So now you try to desugar the signature into `fn foo(x: &Bar) -> impl Future<Output = Foo>` (did you forget Send or Unpin? How do you know if you need or want those bounds?). That doesn't work either, because now you find out that `impl Trait` syntax isn't supported in traits.

So now you might try an associated type, which is what you usually do for a trait with "generic" return values. That works okay, except that now your implementation has to wrap its return value in Box::pin, which is extra overhead that wasn't there when you just had the standalone functions with no abstraction. You could theoretically let the compiler bitch at you until it prints the true return value and copy+paste that into the trait implementation's associated type, but realistically, that's probably a mistake because you'd have to redo that every time you tweak the function for any reason.

IMO, most of the pain isn't really caused by async/await. It's actually caused by traits.

jstrong · 5 years ago
also a long-time rust user, and I buy this. one of the things it took me longest to realize when writing rust is to reach for traits carefully/reluctantly. they can be amazing (e.g. serde), but I've wasted tons of time trying to make some elegant trait system work when I could have solved the problem much more quickly otherwise.
vmchale · 5 years ago
> The article brings up the legitimate awkwardness of passing functions/closures around in Rust.

That's a hard problem when you have linear/affine types! Closures don't work so neatly as in Haskell; currying has to be different.

mathw · 5 years ago
I wonder if most of the pain is actually caused by Rust async being an MVP, so things like async trait functions (which would be very nice) don't exist... yet.

I don't know if anybody has shown that they can't ever exist, it's just that they weren't considered necessary to get the initial async features out of the door. Rather like how you can't use impl Trait in trait method signatures either (there's definitely some generics-implications-complexity going on with that one).

valand · 5 years ago
^this

IMO, Rust already provides a decent amount of way to simplify and skip things. Reference counting, async, proc_macro, etc.

In my experience, programming in a higher-level language where things are heavily abstracted, like (cough) NodeJS, is easy and simple up to a certain point, where I have to do certain low-level things fast (e.g. file/byte patching) or make a system call which is not provided by the runtime API.

Oftentimes I have to resort to making a native module or helper exe just to mitigate this. That feels like reinventing the wheel, because the actual wheel I need is deep under an impenetrable layer of abstraction.

pas · 5 years ago
This is where Scala (and JVM based languages) would shine in theory, the JVM has a well defined memory model, provides great low-level tools, etc. (But JVM-based software is always very bulky to deploy, both in terms of memory and size, so this shining is rarely seen in practice.)
gsserge · 5 years ago
I like to joke that the best way to encounter the ugliest parts of Rust is to implement an HTTP router. Hours and days of boxing and pinning, Futures transformations, no async fn in traits, closures not being real first class citizens, T: Send + Sync + 'static, etc.

I call this The Dispatch Tax. Because any time you want more flexibility than the preferred static dispatch via generics can give you - oh, so you just want to store all these async fn(HttpRequest) -> Result<HttpResponse>, right? - you immediately start feeling the inconvenience and bulkiness of dynamic dispatch. It's like Rust is punishing you for using it. And async/await takes this to a new level altogether, because you are immediately forced to understand how async fns are transformed into ones that return Futures, how Futures are transformed into anonymous state machine structs, and how closures are also transformed into anonymous structs. It's like there's no type system anymore, only structs.

That's one of the reasons, I think, why Go has won the Control Plane. Sure, projects like K8s, Docker, the whole HashiCorp suite are old news. But it's interesting and telling that even solid Rust shops like PingCAP are using Go for their control plane. It seems to me that there's some fundamental connection between flexibility of convenient dynamic dispatch and control plane tasks. And of course having the default runtime and building blocks like net and http in the standard library is a huge win.

That said, after almost three months of daily Rust it does get better. To the point when you can actually feel that some intuition and genuine understanding is there, and you can finally work on your problems instead of fighting with the language. I just wish that the initial learning curve wasn't so high.

ragnese · 5 years ago
Definitely dynamic dispatch + async brings out a lot of pain points.

But I only agree that the async part of that is unfortunate. Making dynamic dispatch have a little extra friction is a feature, not a bug, so to speak. Rust's raison d'être is "zero cost abstraction" and to be a systems language that should be viable in the same spaces as C++. Heap allocating needs to be explicit, just like in C and C++.

But, I agree that async is really unergonomic once you go beyond the most trivial examples (some of which the article doesn't even cover).

Some of it is the choices made around the async/await design (The Futures, themselves, and the "async model" is fine, IMO).

But the async syntax falls REALLY flat when you want an async trait method (because of a combination-and-overlap of no HKTs, no GATs, no `impl Trait` syntax for trait methods) or an async destructor (which isn't a huge deal - I think you can just use futures::executor::block_on() and/or something like the defer-drop crate for expensive drops).

Then it's compounded by the fact that Rust has these "implicit" traits that are usually implemented automatically, like Send, Sync, Unpin. It's great until you write a bunch of code that compiles just fine in the module, but you go to plug it in to some other code and realize that you actually needed it to be Send and it's not. Crap- gotta go back and massage it until it's Send or Unpin or whatever.

Some of these things will improve (GATs are coming), but I think that Rust kind of did itself a disservice with stabilizing the async/await stuff, because now they'll never be able to break it and the Pin/Unpin FUD makes me nervous. I also think that Rust should have embraced HKTs/Monads even though it's a big can of worms and invites Rust devs to turn into Scala/Haskell weenies (said with love because I'm one of them).

gsserge · 5 years ago
Oh yeah, I can totally relate to the Send-Sync-Unpin massaging, plus the 'static bound for me. It's so weird that individually each of them kinda makes sense, but often you need to combine them, and all of a sudden the understanding of the combinations just does not... combine. After a minute or two of trying to figure out what should actually go into that bound, I give up, remove all of them, and start adding them back one by one until the compiler is happy.
pas · 5 years ago
Is there any chance of fixing Pin/Unpin via a later Rust edition?
ironmagma · 5 years ago
In a systems context, where performance and memory ostensibly matter, why wouldn’t you want to be made aware of those inefficiencies?

Sure, Go hides all that, but as a result it’s also possible to have memory leaks and spend extra time/memory on dynamic dispatch without being (fully) aware of it.

gsserge · 5 years ago
I think Rust is also able to hide certain things. Without async things are fine:

    type Handler = fn(Request<Body>) -> Result<Response<Body>, Error>; 
    let mut map: HashMap<&str, Handler> = HashMap::new(); 
    map.insert("/", |req| { Ok(Response::new("hello".into())) }); 
    map.insert("/about", |req| { Ok(Response::new("about".into())) });
Sure, using a function pointer `fn` instead of one of the Fn traits is a bit of cheating, but realistically you wouldn't want a handler to be a capturing closure anyway.

But of course you want to use async and hyper and tokio and your favorite async db connection pool. And the moment you add `async` to the Handler type definition - well, welcome to what author was describing in the original blog post. You'll end up with something like this

    type Handler = Box<dyn Fn(Request) -> BoxFuture + Send + Sync>; 
    type BoxFuture = Pin<Box<dyn Future<Output = Result> + Send>>;
plus type params with trait bounds infecting every method you want to pass your handler to - think get, post, put, patch, etc.

    pub fn add<H, F>(&mut self, path: &str, handler: H)
    where
        H: Fn(Request) -> F + Send + Sync + 'static,
        F: Future<Output = Result> + Send + 'static,
And for what reason? I mean, look at the definitions

    fn(Request<Body>) -> Result<Response<Body>, Error>;
    async fn(Request<Body>) -> Result<Response<Body>, Error>;
It would be reasonable to suggest that if the first one is flexible enough to be stored in a container without any fuss, then the second one should be as well. As a user of the language, especially in the beginning, I do not want to know of, and be penalized by, all the crazy transformations that the compiler is doing behind the scenes.

And for the record, you can have memory leaks in Rust too. But that's beside the point.

blazzy · 5 years ago
In this example Rust doesn't just make me aware of the trade-offs. It almost feels like the language is actively standing in the way of making the trade-offs I want to make, at least as the language is today. I think a bunch of upcoming features, like unsized rvalues and async fns in traits, will help.
amelius · 5 years ago
> In a systems context, where performance and memory ostensibly matter, why wouldn’t you want to be made aware of those inefficiencies?

Perhaps, but a bigger problem is that lots of folks are using Rust in a non-systems context (see HN frontpage on any random day).

jlrubin · 5 years ago
I've been working on a reasonably complicated project that is using Rust, and I think about de-asyncing my code somewhat often, because there are some nasty side effects you can see when you try to connect async & non-async code. E.g., tokio makes it very difficult to correctly & safely (in a way the compiler, not the runtime, catches) launch an async task from a blocking function (with no runtime) that may itself be inside an async routine. It makes using libraries kind of tough, and I think you end up with a model where you have a thread-per-library so the library knows it has a valid runtime, which is totally weird.

All that said, the author's article reads as a bit daft. I think anyone who has tried building something complicated in C++ / Go will look at those examples and marvel at how awesome Rust's ability to understand lifetimes is (i.e. better than your own) and keep you from using resources in an unintended way. E.g., you want to keep some data alive for a closure and locally? Arc. Both need to be writable? Arc<Mutex>. You are a genius and can guarantee this Fn will never leak, and it's safe to have it not capture something by value that is used later in the program, and you really need the performance of not using Arc? Cast it to a ptr and read it in an unsafe block in the closure. Rust doesn't stop you from doing whatever you want in this regard, it just makes you explicitly ask for what you want, rather than doing something stupid automatically and leaving you with a hard-to-find bug later down the line.

ironmagma · 5 years ago
> The thing I really want to try and get across here is that *Rust is not a language where first-class functions are ergonomic.*

So... don’t use first-class functions so much? It’s a systems language, not a functional language for describing algorithms in CS whitepapers. Or use `move` (the article does mention this).

There are easy paths in most programming languages, and harder paths. Rust is no exception. Passing around async closures with captured variables, while verifying nothing gets dropped prematurely and without resorting to runtime GC, is bleeding-edge technology, so it should not be surprising that it has some caveats and isn't always easy to do. The same could be said of trying to do reflection in Go, or garbage collection in C. These aren't really the main use case for the tool.

lmm · 5 years ago
> So... don’t use first-class functions so much? It’s a systems language, not a functional language for describing algorithms in CS whitepapers.

Then maybe it was a mistake to adopt an async paradigm from functional languages that relies heavily on the idea that first-class functions are cheap and easy?

(FWIW I think Rust was right to pick Scala-style async; it's really the only nice way of working with async that I've seen, in any language. I think the mistake was not realising the importance of first-class functions and prioritising them higher)

stormbrew · 5 years ago
> maybe it was a mistake to adopt an async paradigm from functional languages

> I think Rust was right to pick Scala-style async

I'm confused by this assertion. I'm more aware of the procedural-language origins of syntactic async/await than functional ones. The Scala proposal in 2016 for async/await even cites C#'s design (which came in C# 5.0 in 2012) as an inspiration [1].

From there, it appears python and typescript added their equivalents in 2015 [2].

If anything, async-await feels like an extremely non-functional thing to begin with, in the sense that in a functional language it should generally be easier to treat the execution model itself as abstracted away.

[1] https://docs.scala-lang.org/sips/async.html

[2] https://en.wikipedia.org/wiki/Async/await

kelnos · 5 years ago
> Rust was right to pick Scala-style async

Huh, I actually find Rust and Scala to do async quite differently. The only thing in common to me is the monadic nature of Future itself. Otherwise I find there to be a big fundamental difference in how Scala's async is threadpool/execution-context based, while Rust's async is polling-based.

Then there are the syntactic differences around Rust having async/await and Scala... not.

ironmagma · 5 years ago
It's always possible there's a better way. If you know of one, maybe you can write a proposal? There is always Rust v2. Rust lacks a BDFL and so it's almost like the language grows itself. Chances are, the async model that was used was picked because it was arrived upon via consensus.
marshray · 5 years ago
> maybe it was a mistake to adopt an async paradigm from functional languages

I always thought of async as higher language sugar for coroutine patterns that I'd seen in use in assembler from the 1980's.

gkya · 5 years ago
> It’s a systems language, not a functional language for describing algorithms in CS whitepapers.

This kind of toxicity is why I left programming behind as a career.

ironmagma · 5 years ago
How is saying Rust is one thing but not another thing toxic? I never said it’s the author’s fault Rust is broken or anything like that. It just has some goals, and being a functional programming language isn’t one of them (as far as I know).
TheCoelacanth · 5 years ago
Yeah, a struct holding four different closures is not a pattern that makes much sense in Rust.

That would look a lot better as a trait with four methods.

viktorcode · 5 years ago
I think your implication that a systems programming language can't have nice stuff is based on the systems languages of old. Unless a language absolutely must have manual memory allocation, it can have first-class functions and closures.
ironmagma · 5 years ago
I never implied it can't be nice, just that it currently isn't, and that a solution to this blog post is: "don't do that, at least for now." I mean, the title says that async Rust doesn't work, yet I use several successful codebases that use it, so it actually does work, just not the way the author wants it to. Another solution could be to submit a proposal to improve async support in the desired way.
de6u99er · 5 years ago
Wanted to write something similar, just not as elaborate as you did (native German speaker here).

Another good example is Java, and languages running on top of the JVM.

xiphias2 · 5 years ago
As I read through the database example, I saw that the compiler just caught a multi-threading bug for the author, and instead of being thankful, he’s complaining that Rust is bad.

I think he should use a higher level framework, or wait a few years for them to mature, and use a garbage collected language until then.

Deukhoofd · 5 years ago
He's just pointing out that it isn't very ergonomic, not that Rust is bad (in fact he states the opposite multiple times).

Pointing out weaknesses and things that might be done better is what helps something mature, not just praising it.

filleduchaos · 5 years ago
"Ergonomic" is such a nebulous word as to be nearly useless honestly.

I don't see how it is [unduly] inefficient or uncomfortable for a language at Rust's level to ensure that the programmer actually thinks through the execution of the code they are writing. If one doesn't want to think about such things - and there's absolutely nothing wrong with that! - there are plenty of higher level languages that can do that legwork for them with more than good enough performance.

To me it feels a bit like pointing out that the process of baking a cake from scratch isn't very ergonomic, what with all the careful choosing and measuring of ingredients and combining them in the right order using different techniques (whisking, folding, creaming, etc). That is simply what goes into creating baked goods. If that process doesn't work for you, you can buy cake mix instead and save yourself quite a bit of time and energy - but that doesn't necessarily mean there's anything to be done better about the process of making a cake from scratch.

EugeneOZ · 5 years ago
You're bound to find fancy runtime-only errors in async functions. Their existence outweighs any of the compiler's other benefits.
joubert · 5 years ago
> caught a multi-threading bug

The compiler complained: “error[E0308]: mismatched types”

xiphias2 · 5 years ago
That’s why it’s amazing: type-safe, efficient multi-threading without dynamic memory allocation, with a nice syntax... it’s the holy grail of server programming.
agumonkey · 5 years ago
I believe haskellers would love (and maybe did) to encode commutativity and thread-safety in the type system :)
redis_mlc · 5 years ago
Everything is "mismatched types" in Rust, literally it doesn't do any automatic type conversion (casting), so it's not the right language for most people.
gohbgl · 5 years ago
What is the bug? I don't see it.
Cyph0n · 5 years ago
The bug is that the closure is mutating the same data structure (a Vec in this case) in a different thread - i.e., a data race.

In this specific case, only the spawned thread is mutating the Vec, but the Rust compiler is usually conservative, so it marked this as a bug.

The actual bug is one or both of the following:

1. One of the threads could cause the underlying Vec to reallocate/resize while other threads are accessing it.

2. One of the threads could drop (free) the Vec while other threads are using it.

In Rust, only one thread can “own” a data structure. This is enforced through the Send trait (edit: this is probably wrong, will defer to a Rust expert here).

In addition, you cannot share a mutable reference (pointer) to the same data across threads without synchronization. This is enforced through the Sync trait.

There are two common solutions here:

1. Clone the Vec and pass that to the thread. In other words, each thread gets its own copy of the data.

2. Wrap the Vec in a Mutex and an Arc - your type becomes an Arc<Mutex<Vec<String>>>. You can then clone() the Arc and pass it to the new thread. Under the hood, this maps to an atomic increment instead of a deep clone of the underlying data like in (1).

The Mutex implements the Sync trait, which allows multiple threads to mutate the data. The Arc (atomic ref count) ensures that the Vec is dropped (freed) exactly once.

fulafel · 5 years ago
There wasn't any multi-threading in that example I think.
Cyph0n · 5 years ago
Yes, there was: the function was calling the passed in closure from a different thread.

It is just one thread that’s actually appending to the Vec, but it’s still a multi-threaded example.

mav3rick · 5 years ago
You created a DB on one thread and accessed / modified it on another.
lmm · 5 years ago
Async isn't really the problem - the same issue pops up with error handling, with resource management, with anything where you want to pass functions around. The real problem is that Rust's ownership semantics and limited abstractions mean it doesn't really have first-class functions: there are three different function types and the language lacks the power to abstract over them, so you can't generally take an expression and turn it into a function. NLL was actually a major step backwards that has made this worse in the medium term: it papers over a bunch of ownership problems if the code is written inline, but as soon as you try to turn that inlined code into a closure all those ownership problems come back.

IMO the only viable way out is through: Rust needs to get the ability to properly abstract over lifetimes and functions that it currently lacks (HKT is a necessary part of doing this cleanly). A good litmus test would be reimplementing all the language control flow keywords as plain old functions; done right this would obviate NLL.

But yeah that's going to take a while, and in the meantime for 99% of things you should just write OCaml. Everyone thinks they need a non-GC language because "muh performance" but those justifications rarely if ever hold up.

kibwen · 5 years ago
> mean it doesn't really have first-class functions: there are three different function types

Rust absolutely does have first-class functions, though. Their type is `fn(T)->U`. The "three different function types" that you refer to are traits for closures. And note that closures with no state (lambdas) coerce into function types.

    fn main() {
        higher_order(is_zero);
        higher_order(|n| n == 0);
    }

    fn higher_order(f: fn(i32) -> bool) {
        f(42);
    }

    fn is_zero(n: i32) -> bool {
        n == 0
    }
It's true that when you have a closure with state then Rust forces you to reason about the ownership of that state, but that's par for the course in Rust.

lmm · 5 years ago
If function literals don't have access to the usual idioms of the language - which, in the case of Rust, means state - then functions are not first-class.

> It's true that when you have a closure with state then Rust forces you to reason about the ownership of that state, but that's par for the course in Rust.

The problem isn't that you have to reason about the state ownership, it's that you can't abstract over it properly.

hctaw · 5 years ago
Nit, you can already abstract over lifetimes. But I really agree with the HKT comment, in principle. In practice it's not so bad. Abstracting over async, mutability, and borrows would be more important to smooth over a few of these issues (there are ways around both, but it's not always obvious).

Most of these issues arise for library authors. I think a lot of folks in the ecosystem today are writing libraries instead of applications, which is why we get these blog posts lamenting the lack of HKTs or difficult syntax for doing difficult things that other languages hide from you. I don't think users have the same experience when building things with the libraries.

That said, there's a bit of leakage of complexity. Sometimes it's really hard to write a convenient API (like abstracting over closures) and that leads to gnarly errors for users of those APIs. Closures are particularly bad about this (which some, including myself, would say is good and also easy to work around), and there's always room for improvement. I don't think a lack of HKTs is really holding anything essential back right now.

lmm · 5 years ago
> In practice it's not so bad. Abstracting over async, mutability, and borrows would be more important to smooth over a few of these issues (there are ways around both, but it's not always obvious).

My point is that absent some horrible specific hacks, HKT is a necessary part of doing that. A type that abstracts over Fn, FnMut, or FnOnce is a higher-kinded type.

zelly · 5 years ago
Rust programs can theoretically be fast, but most of the ones I've used are slow. I tried two high profile implementations of the same type of software, one in Rust and one in Java. The Java one was faster and used less memory.

Rust programmers tend to do all kinds of little hacks here and there to make the borrow checker happy. It can add up. The borrow checker is perfectly happy when you copy everything.

Rust is becoming one giant antipattern.

(I'm sure highly experienced Rust programmers can get it to work, but there are probably less than 1000 people on this planet that can write good Rust that outperforms C++ so does it really count.)

doteka · 5 years ago
Interesting, could you elaborate on this software and what it does? At work, we also offer two backends for the same kind of functionality. One is the industry-standard Java implementation, one is a homegrown Rust implementation. We find that for this use case (essentially text processing, string manipulation and statistical algorithms), the Rust version is much faster while using less memory. However, it is not as feature-rich.
jokethrowaway · 5 years ago
> Rust programmers tend to do all kinds of little hacks here and there to make the borrow checker happy. It can add up. The borrow checker is perfectly happy when you copy everything.

Do you mind sharing an example? If you're talking about using to_owned or clone without reason, then it's fully on the developer. Some more pitfalls to avoid: https://llogiq.github.io/2017/06/01/perf-pitfalls.html

What you say definitely doesn't match my experience. I would say Java code and Rust code are roughly in the same order of processing speed but the JVM "wastes" some memory. You also have garbage collection complicating performance in some scenarios.

I'm pretty sure you can get in the same processing speed ballpark with careful programming in both languages.

Outperforming C++ is definitely harder but Java should be doable.

imtringued · 5 years ago
I can't reduce JVM memory usage below 100 MB. It simply is impossible. Meanwhile, with Rust I can easily write applications that are 20 times more memory-efficient.
the_mitsuhiko · 5 years ago
I will agree that Rust programs can be surprisingly slow. That said, in most cases where I have experienced this, it came down to easy-to-detect situations that are usually resolved by enclosing some large values in smart pointers.

I consider this a plus because that's a pretty simple change.

janjones · 5 years ago
> Async isn't really the problem - the same issue pops up with error handling, with resource management

There are ways to handle that, though. They're called algebraic effects, and they are nicely implemented in the Unison language[1] (although there they're called abilities[2]). It's a very interesting language; I recommend reading [3] for a good overview.

[1] https://www.unisonweb.org/

[2] https://www.unisonweb.org/docs/abilities

[3] https://jaredforsyth.com/posts/whats-cool-about-unison/

_zhqs · 5 years ago
> anything where you want to pass functions around.

You should read the article! The author goes into this in fascinating detail.

lmm · 5 years ago
I did read the article; it's overly focused on async (especially the headline). The body quite correctly analyses the problem with functions, but misses that this is much more general than async; just adopting a different async model isn't a solution.
jstrong · 5 years ago
I've been primarily coding in Rust since 2018. I never cared for async/await, and I've never used it. (at some point, coding event loops became very natural/comfortable for me, and I have no trouble writing "manual" epoll code with mio/mio_httpc).

one nice thing about rust's async/await is, you don't have to use it, and if you don't, you don't pay for it in any way. sure, I run into crates that expect me to bring in a giant runtime to make an HTTP request or two (e.g. rusoto), but for the most part I have not had any issues productively using sans-async Rust during the last few years.

so, if you don't like async/await, just don't use it! it doesn't change anything about the rest of rust.

boulos · 5 years ago
Maybe it’s just the hyper and related crates universe, but my perception (see my sibling comment) is that more and more of the ecosystem will be “async only”.

Like you, I think that’s sort of fine, because you can always roll your own. But it’s also kind of sad: one of the best things about the Rust ecosystem is that you can so effortlessly use a small crate in a way you really can’t in C++ (historically, maybe modules will finally improve this).

lalaithion · 5 years ago
I mean, you can always just take an async crate and call async_std::task::block_on.
ntr-- · 5 years ago
I don't know how anybody can say this with a straight face.

Even in a systems context I think it's pretty reasonable to want to either perform or receive an HTTP request; as soon as you do that in Rust you are funneled into Hyper or something built on top of it (like reqwest), and you instantly depend on tokio/mio.

The very first example in the reqwest readme^1 has tokio attributes, async functions AND trait objects. It's impossible that a beginner attempting to use the language to do anything related to networking won't be guided into the async nightmare before they have even come to grips with the borrow checker.

1. https://crates.io/crates/reqwest

ogoffart · 5 years ago
Reqwest lets you choose between async or not. It has a "blocking" module with a similar API, but no async functions.

https://docs.rs/reqwest/0.11.2/reqwest/blocking/index.html

(Maybe this uses async Rust under the hood, but you don't have to care about it)

iso8859-1 · 5 years ago
jstrong didn't mention anything about beginners or how a typical user is nudged. It's just a description of how they work. They even pointed out which HTTP library they use. There is nothing in their post that requires a curved face.
thijsc · 5 years ago
Try https://github.com/algesten/ureq, it’s very nice and not async.
rapsey · 5 years ago
You can avoid the async ecosystem with mio_httpc for an HTTP client and tiny_http for an HTTP server. Both work well.
_xrjp · 5 years ago
+2

Yes! Using `sans-async` must not diminish our productivity compared with the `async` counterpart.

> at some point, coding event loops became very natural/comfortable for me

In fact it is; I'm very happy with it too.

> one nice thing about rust's async/await is, you don't have to use it, and if you don't, you don't pay for it in any way.

You've said it!

theopsguy · 5 years ago
+1000 on this. You can just use Rust like C/C++ if you don't want to buy into async.
omginternets · 5 years ago
Mmmyeah, they said this about python too...
jstrong · 5 years ago
Rust has avoided many pitfalls that Python fell into. One example is the way that Rust 2015 edition code works seamlessly with Rust 2018 edition code, unlike the Python 2 -> Python 3 debacle. Another is Cargo vs pip/poetry/this month's Python package manager. So, using Python as an example isn't very convincing to me.