Posted by u/willothy a year ago
Show HN: Whirlwind – Async concurrent hashmap for Rust (github.com/fortress-build...)
Hey HN, this is Will and David from Fortress (https://news.ycombinator.com/item?id=41426998).

We use a lot of async Rust internally, and created this library out of a need for an async-aware concurrent hashmap since there weren’t many available in the Rust ecosystem.

Whirlwind is a sharded HashMap with a fully asynchronous API. Just as dashmap is a replacement for std::sync::RwLock<HashMap>, whirlwind aims to be a replacement for tokio::sync::RwLock<HashMap>. It has design and performance characteristics similar to dashmap's, but seems to perform better in read-heavy workloads under tokio's green threading.
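For readers unfamiliar with the sharding approach, here's a minimal std-only sketch of the core idea (a hypothetical toy type, not Whirlwind's actual code, which is async and far more involved): the key's hash picks one of N independently locked shards, so writers only contend with operations on the same shard rather than on the whole map.

```rust
use std::collections::hash_map::RandomState;
use std::collections::HashMap;
use std::hash::{BuildHasher, Hash};
use std::sync::RwLock;

// Hypothetical toy type illustrating the sharding idea.
struct ShardedMap<K, V> {
    shards: Vec<RwLock<HashMap<K, V>>>,
    hasher: RandomState,
}

impl<K: Hash + Eq, V> ShardedMap<K, V> {
    fn new(num_shards: usize) -> Self {
        Self {
            shards: (0..num_shards).map(|_| RwLock::new(HashMap::new())).collect(),
            hasher: RandomState::new(),
        }
    }

    // The key's hash selects a shard; only that one shard's lock is taken.
    fn shard_for(&self, key: &K) -> &RwLock<HashMap<K, V>> {
        let idx = (self.hasher.hash_one(key) as usize) % self.shards.len();
        &self.shards[idx]
    }

    fn insert(&self, key: K, value: V) {
        self.shard_for(&key).write().unwrap().insert(key, value);
    }

    fn get(&self, key: &K) -> Option<V>
    where
        V: Clone,
    {
        self.shard_for(key).read().unwrap().get(key).cloned()
    }
}
```

The async version replaces the blocking lock acquisition with a future, which is where the interesting design choices (discussed in the comments below) come in.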

Benchmarks are in the readme! We used an asyncified version of dashmap's benchmark suite. The project is in a pretty early stage and I'm sure there are flaws, but I'm pretty happy with the performance.

There is some unsafe involved, but we run Miri in CI to (hopefully) catch undefined behavior well before it's in an actual release.

We'd appreciate any feedback! Thanks in advance :)

judofyr · a year ago
One thing which wasn’t obvious to me from the benchmark: What’s the key distribution? A sharded map will probably have great performance on uniform keys, but in my experience it’s far more common to have power law distribution in real life scenarios.

It would be good to see a benchmark where it only touches a _single_ key. If Whirlwind is still faster than the others, I would be far more convinced to use it unconditionally.

EDIT: I also see that you're benchmarking on an M3 Max. Note that this has 6 performance cores and 6 efficiency cores. This means that if you're running at <6 cores, it will most likely start out running on the efficiency cores. From my experience it's quite important to do warmup phases in order to get stable results at low thread count. And even then I find it hard to reason about the results since you're running on a mixed set of cores…
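A back-of-envelope illustration of why the key distribution matters for a sharded map (my own arithmetic, not from the benchmarks): under a Zipf distribution with exponent 1 over n keys, the key of rank k receives a 1/(k·H_n) share of accesses, where H_n is the n-th harmonic number. So the single hottest key alone gets 1/H_n of all traffic.

```rust
// Share of traffic hitting the single hottest key under a Zipf(s = 1)
// distribution over `n` keys: p(rank 1) = 1 / H_n, where H_n is the
// n-th harmonic number.
fn hottest_key_share(n: u64) -> f64 {
    let harmonic: f64 = (1..=n).map(|k| 1.0 / k as f64).sum();
    1.0 / harmonic
}
```

With 10,000 keys the hottest key alone receives roughly 10% of accesses, and whichever shard it hashes to absorbs that contention no matter how many shards exist — which is why a uniform-key benchmark flatters any sharded design.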

willothy · a year ago
Definitely a good point. I used dashmap's benchmark suite because it was already set up to bench many popular libraries, but I definitely want to get this tested in more varied scenarios. I'll try to add a benchmark for a single key only this week.

Regarding your edit: damn I hadn't thought of that. I'll rerun the benchmarks on my Linux desktop with a Ryzen chip and update the readme.

judofyr · a year ago
Very fascinating results though! The sharding approach is quite a simple way of doing more fine-grained locking, making sure that writes don't block all the reads. Pretty cool to see that it actually pays off even with the overhead of Tokio scheduling everything!

There might be some fairness concerns here? Since we're polling the lock (instead of adding ourselves to a queue) it could be the case that some requests are constantly "too late" to acquire it? Could be interesting to see the min/max/median/P99 of the requests themselves. It seems that Bustle only reports the average latency[1] which honestly doesn't tell us much more than the throughputs.

[1]: https://docs.rs/bustle/latest/bustle/struct.Measurement.html
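Computing the min/max/median/P99 summary suggested here is straightforward once raw per-operation latencies are recorded; a generic nearest-rank sketch (not Bustle's API, which only reports averages):

```rust
use std::time::Duration;

// Nearest-rank percentile over a set of recorded latencies.
// `p` is in (0.0, 100.0]; returns None for an empty sample set.
// Sorts the slice in place.
fn percentile(samples: &mut [Duration], p: f64) -> Option<Duration> {
    if samples.is_empty() {
        return None;
    }
    samples.sort_unstable();
    let rank = ((p / 100.0) * samples.len() as f64).ceil() as usize;
    Some(samples[rank.saturating_sub(1).min(samples.len() - 1)])
}
```

A P99 that is far above the median would confirm the fairness concern: most acquisitions are cheap, but some tasks keep losing the race for the lock.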

olix0r · a year ago
> Just as dashmap is a replacement for std::sync::RwLock<HashMap>, whirlwind aims to be a replacement for tokio::sync::RwLock<HashMap>.

I'm curious about the practical benefits of handling a HashMap with an async interface. My long-standing understanding is that `tokio::sync::RwLock<HashMap>` is only useful when you'd want to hold the lock guard across another `await` operation; but when the lock guard is not held across an await, it is always preferable to use the synchronous version.

This would lead me to assume the same applies for dashmap: it should be sufficient for async use cases and doesn't need an async API unless we expect to be blocked. But the benchmarks indicate that whirlwind outperforms dashmap in various situations. Do you have a sense of where this blocking occurs in dashmap?

willothy · a year ago
The blocking mainly occurs due to contention - imo most of the performance gain comes from being able to poll the lock instead of blocking until it's available when acquiring locks on shards.

In all honesty I was quite surprised by the benchmarks as well, though; I wouldn't have expected that much performance gain, but in high-contention scenarios it definitely makes sense.

conradludgate · a year ago
Benchmarks will always look good when using a spin-lock like you seem to be using here https://github.com/fortress-build/whirlwind/blob/0e4ae5a2aba...
phlip9 · a year ago
Benches look promising! My main concern is validating correctness; implementing good concurrency primitives is always challenging. Have you looked into testing against a purpose-built concurrency model checker like tokio-rs/loom [1] or awslabs/shuttle [2]? IMO that would go a long way towards building trust in this impl.

[1] https://github.com/tokio-rs/loom [2] https://github.com/awslabs/shuttle

willothy · a year ago
Yep, someone suggested loom on our Reddit r/rust post as well - I'm actively working on that. Somehow I'd just never heard of loom before this.
conradludgate · a year ago
I don't think I'd recommend using this in production. The benchmarks look good, but by immediately waking the waker[0], you've effectively created a spin-lock. Spin-locks may work in some very specific circumstances, but in practice they will most likely be more costly to your scheduler (which likely uses locks, btw) than just using locks.

[0]: https://github.com/fortress-build/whirlwind/blob/0e4ae5a2aba...

sujayakar · a year ago
+1. I'd be curious how much of a pessimization to uncontended workloads it'd be to just use `tokio::sync::RwLock`.

and, if we want to keep it as a spinlock, I'm curious how much the immediate wakeup compares to using `tokio::task::yield_now`: https://docs.rs/tokio/latest/tokio/task/fn.yield_now.html

willothy · a year ago
This is an interesting idea. I am gonna try this out - especially with dashmap, I think that could perform very well.
willothy · a year ago
I don't believe that waking the waker in `poll` synchronously waits / runs poll again immediately. I think it is more likely just adding the future back to the global queue to be polled. I could be wrong though, I'll look into this more. Thanks for the info!
conradludgate · a year ago
It does immediately put itself into the queue to be polled again. But that's no different in effect to a spin-lock. If you have other tasks in your runtime, this will be putting excess pressure on your scheduler
gsliepen · a year ago
I'm sure this library does the thing it claims to do very nicely. However, as a programmer I am saddened that so many things that should have been in the standard library by now need to come from external repositories and have weird and unintuitive names.
lesuorac · a year ago
I kind of like the "thin" stdlib approach tbh.

Rather than having 3-5 ways to do, say, a GUI from the stdlib, you have 0, and instead you can pick a dependency that suits your way.

That then extends down even to data structures: there are a bunch of trade-offs to be made where one is better than the other, and if Rust had to maintain 30 different types of maps for each use case, that seems like a waste of their time. Push that work off to the people that need those maps.

seanw444 · a year ago
I'm torn. I don't write Rust much at all, but I have been playing around with it a little here and there for the last couple years. When I first tried it out, I was shocked that almost nothing I was looking for could be found in the stdlib. I thought I was looking in the wrong place. Coming mostly from the Go world, it was jarring to me when it turned out that no, it really does just barely have a stdlib.

Now, after some pondering though, there is a strength in that the Rust team doesn't have to focus on maintaining such a large stdlib, as you said. I find myself using external libraries for many things in Go already, so why not just extend that to slightly more fundamental things with lots of variance too.

I think the largest downside with this approach, however, is that it's hard for me to know what the de facto, collectively-agreed-upon standard is for a given task.

willothy · a year ago
I agree with this take a lot. I think having lots of custom implementations is inevitable for systems languages - the only reason why Rust is different is because Cargo/crates.io makes things available for everyone, where in C/C++ land you will often just DIY everything to avoid managing a bunch of dependencies by hand or with cmake/similar.
ghotli · a year ago
I try to tell all new programmers that ask me for advice that keeping abreast of the world of tools that are available for use is a big part of the work and shouldn't be left out. If I quit my daily / weekly trawl of what's out there, I'd surely start to atrophy.
bryanlarsen · a year ago
tokio itself is not mature/stable enough to be in the standard library, IMO, let alone anything based on tokio.
rsanders · a year ago
I'm pretty new to Rust so forgive me if I'm mistaken, but it seems to me that this crate doesn't require the use of tokio.
willothy · a year ago
Hey, I think the name is cool!

Fair point though, there would definitely be some benefit to having some of these things in the stdlib.

schainks · a year ago
The teams who compile the standard libraries are still made up of people, and sometimes better libraries get written outside of those teams. It's just a natural consequence of the intense collaboration we have in this industry. In fact, I am GLAD things happen this way, because it shows how flexible and open minded folks are when it comes to such libraries.

Naming things is... well, you know...

James_K · a year ago
> This crate is in development, and breaking changes may be made up until a 1.0 release.

When will this happen? I imagine a lot of people who want to use it might just forget about it if you say "do not use this until some unspecified point in the future".

willothy · a year ago
I think there's a pretty big difference between committing to semantic versioning and saying "do not use this until some unspecified point in the future." Maybe I'm just not clear enough in the note - I just mean that the API could change. But as long as a consumer doesn't use `version = "*"` in their Cargo.toml, breaking changes will always be opt-in and builds won't start failing if I release something with a big API change.
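Concretely, Cargo treats different 0.x minor versions as semver-incompatible, so the default caret requirement already makes breaking releases opt-in (crate name is real; the version number below is illustrative):

```toml
[dependencies]
# "0.1" means ">=0.1.0, <0.2.0": a hypothetical 0.2 release with
# API changes will never be picked up automatically.
whirlwind = "0.1"
```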
James_K · a year ago
Maybe I'm a bit weird, but I would never commit to using something if the person making it wasn't providing a consistent interface. It could well be different in your case, but as a general rule, a sub 1.0 version is software that isn't yet ready to be used. The vast vast majority of projects that say "you can use this, but we won't provide a consistent interface yet" end up either dying before they get to v1 or causing so much pain they weren't worth using. I can see this issue being especially bad in Rust, where small API changes can create big issues with lifetimes and such.
Sytten · a year ago
Looks interesting! We use quick-cache [1] for that purpose right now; it might be interesting to add a comparison with those types of key-value caching crates.

[1] https://github.com/arthurprs/quick-cache

willothy · a year ago
Good point, thanks! I'll look into adding that crate to the benchmarks.
shepardrtc · a year ago
How do you determine the number of shards to use?
willothy · a year ago
I use a multiple of `std::thread::available_parallelism()`. Tbh I borrowed the strategy from dashmap, but I tested others and this seemed to work quite well. Considering making that configurable in the future so that it can be specialized for different use-cases or used in single-threaded but cooperatively scheduled contexts.
cogman10 · a year ago
(std::thread::available_parallelism().map_or(1, usize::from) * 4).next_power_of_two()
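For reference, that expression rounds four times the reported parallelism up to a power of two (a power-of-two shard count lets the hash-to-shard mapping be a cheap bit mask instead of a modulo). Wrapped as a runnable function:

```rust
use std::thread;

// Default shard count: four times the available parallelism
// (falling back to 1 if it can't be queried), rounded up to the
// next power of two.
fn default_shard_count() -> usize {
    (thread::available_parallelism().map_or(1, usize::from) * 4).next_power_of_two()
}
```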