Holding a lock while waiting for IO can destroy a system's performance. With async Rust, we can prevent this by making the MutexGuard !Send, so it cannot be held across an await. Specifically, because the guard is !Send, any Future that holds it across an await point becomes !Send and cannot be spawned on a multi-threaded executor [2], so the guard must be dropped before the await, freeing the lock. This also prevents futurelock deadlock.
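A minimal sketch of the pattern this enforces, using only std (std's own blocking MutexGuard is also !Send) and a hand-rolled single-future block_on so it runs without any async runtime; this is illustrative, not Safina's implementation:

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

// Minimal single-future executor so the example runs with std only.
struct ThreadWaker(Thread);
impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(v) => return v,
            Poll::Pending => thread::park(),
        }
    }
}

async fn do_io() {} // stand-in for real IO

async fn increment(counter: &Mutex<u64>) -> u64 {
    // Confine the guard to a block so it is dropped before `.await`.
    // If the guard lived across the await, the whole future would be
    // !Send and a multi-threaded executor would reject it at compile time.
    let n = {
        let mut guard = counter.lock().unwrap();
        *guard += 1;
        *guard
    }; // guard dropped here; the lock is free during the IO below
    do_io().await;
    n
}

fn main() {
    let counter = Mutex::new(0);
    assert_eq!(block_on(increment(&counter)), 1);
}
```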
This is how I wrote safina::sync::Mutex [0]. I did try to make it Send, like Tokio's MutexGuard, but stopped when I realized that it would become very complicated or require unsafe.
> You could imagine an unfair Mutex that always woke up all waiters and let them race to grab the lock again. That would not suffer from risk of futurelock, but it would have the thundering herd problem plus all the liveness issues associated with unfair synchronization primitives.
The thundering herd problem is when all waiters wake at once and race for the lock, so every wakeup but one is wasted work. This simple Mutex has O(n^2) total runtime: each release wakes all remaining waiting tasks, re-queueing them on the scheduler. In practice, scheduling a task is very fast (~600ns). As long as polling the lock-mutex-future is fast and you have fewer than ~500 waiting tasks, the O(n^2) runtime is fine.
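To make that concrete, here is the back-of-envelope arithmetic, using the ~600ns scheduling cost and n=500 from above: each of the n releases wakes every remaining waiter, so total wakeups are n + (n-1) + ... + 1 = n(n+1)/2.

```rust
fn main() {
    let n: u64 = 500; // waiting tasks
    let ns_per_schedule: u64 = 600; // rough cost to schedule one task

    // Each release wakes all remaining waiters: n + (n-1) + ... + 1.
    let wakeups = n * (n + 1) / 2;
    let total_ms = wakeups * ns_per_schedule / 1_000_000;

    println!("{wakeups} wakeups, ~{total_ms} ms of scheduling overhead");
    assert_eq!(wakeups, 125_250);
    assert_eq!(total_ms, 75);
}
```

So even the worst case for 500 waiters costs roughly 75ms of scheduling overhead in total, spread across all of them.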
Performance is hard to predict. I wrote Safina using the simplest possible implementations and assumed they would be slow. Then I wrote some micro-benchmarks and found that some parts (like the async Mutex) actually outperform Tokio's complicated versions [1]. I spent days coding optimizations that did not improve performance (work stealing) or even reduced performance (thread affinity). Now I'm hesitant to believe assumptions and predictions about performance, even if they are based on profiling data.
[0] https://docs.rs/safina/latest/safina/sync/struct.MutexGuard....
[1] https://docs.rs/safina/latest/safina/index.html#benchmark
[2] Multi-threaded async executors require futures to be Send.
The second one is more insidious: if they solved the problem they address, they would no longer need to exist, so they have no incentive to succeed. So they go around addressing individual problems, taking sad pictures, and avoiding systemic problems.
And if the systemic problems are insoluble? Then there is again an argument that the NGO should not exist. If the problem is truly insoluble, then likely the money could be better spent elsewhere.
NGOs must constantly raise money to fund their operations. The money that an NGO spends on fund-raising & administration is called "overhead". The percentage of annual revenue spent on overhead is the overhead percentage. Most NGOs publish this metric.
When a big donor stops contributing, the NGO must cut pay or lay off people and cut projects. I've never heard of an NGO "succumbing to excessive staff costs" like a startup running out of money. Financial mismanagement does occasionally happen and boards do replace CEOs. Board members are mostly donors, so they tend to donate more to help the NGO recover from mismanagement, instead of walking away.
NGOs pay less than other organizations, so they mostly attract workers who care about the NGO's mission. These are people with intrinsic motivation to make the NGO succeed in its mission. Financial incentives are a small part of their motivations. For example, my supervisor at Opportunity International refused several raises.
> So they go around addressing individual problems, taking sad pictures, and avoid addressing systemic problems.
Work on individual problems is valuable. For example, the Carter Center has prevented many millions of people from going blind from onchocerciasis and trachoma [0].
The Carter Center is not directly addressing the systemic problems of poverty and ineffective government health programs. That would take different expertise and different kinds of donors.
The world is extremely complicated and interconnected. The Carter Center's work preventing blindness directly supports worker productivity in many poor countries. Productivity helps economic growth and reduces poverty. And with more resources, government health programs run better.
Being effective in charity work requires humility and diligence to understand what can be done now, with the available resources. And then it requires tenacity to work in dangerous and backward places. It's an extremely hard job. People burn out. And we are all better off because of the work they do.
When we dismiss work on individual problems because it doesn't address systemic problems, we practice binary thinking [1]. It's good to avoid binary thinking.
[0] https://en.wikipedia.org/wiki/Carter_Center#Implementing_dis...
If I could go back and do it again, I would rent a single machine, deploy with ssh (git pull & docker-compose up), and back up to my laptop.
There was only one threaded web server, https://lib.rs/crates/rouille . It has 1.1M lines of code (including deps). Its hello-world example reaches only 26Krps on my machine (Apple M4 Pro). It also has a bug that makes it problematic to use in production: https://github.com/tiny-http/tiny-http/issues/221 .
I wrote the https://lib.rs/crates/servlin threaded web server. It uses async internally. It has 221K lines of code. Its hello-world example reaches 102Krps on my machine.
https://lib.rs/crates/ehttpd is another one, but it has no tests and seems abandoned. It does an impressive 113Krps without async, using only 8K lines of code.
For comparison, the popular Axum async web server has 4.3M lines of code and its hello-world example reaches 190Krps on my machine.
The popular threaded Postgres client uses Tokio internally and has 1M lines of code: http://lib.rs/postgres .
Recently a threaded Postgres client was released. It has 500K lines of code: https://lib.rs/crates/postgres_sync .
There was no ergonomic way to signal cancellation to threads, so I wrote one: https://crates.io/crates/permit .
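The underlying idea can be sketched with only std: share an AtomicBool flag that the worker thread checks between units of work. This illustrates the concept, not the permit crate's actual API:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

// Run a worker thread until we signal cancellation, then collect how
// many iterations it completed.
fn run_worker(stop_after: Duration) -> u64 {
    let stop = Arc::new(AtomicBool::new(false));
    let stop_flag = Arc::clone(&stop);
    let worker = thread::spawn(move || {
        let mut iterations: u64 = 0;
        // The worker cooperates by checking the flag between work units.
        while !stop_flag.load(Ordering::Relaxed) {
            iterations += 1;
            thread::sleep(Duration::from_millis(1)); // stand-in for work
        }
        iterations
    });
    thread::sleep(stop_after);
    stop.store(true, Ordering::Relaxed); // signal cancellation
    worker.join().unwrap()
}

fn main() {
    let iterations = run_worker(Duration::from_millis(20));
    assert!(iterations >= 1);
    println!("worker ran {iterations} iterations before cancellation");
}
```

The ergonomics problem is everything around this: propagating the flag through call chains, nesting cancellation scopes, and waiting for workers to finish, which is what a dedicated crate handles.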
Rust's threaded libraries are starting to catch up to the async libraries!
---
I measured lines of code with `rm -rf deps.filtered && cargo vendor-filterer --platform=aarch64-apple-darwin --exclude-crate-path='*#tests' deps.filtered && tokei deps.filtered`.
I ran web servers with `cargo run --release --example hello-world` and measured throughput with `rewrk -c 1000 -d 10s -h http://127.0.0.1:3000/`.
Why have one layer of NAT when you can have four or five? Why not invent a bespoke addressing scheme? Why not cargo cult the documentation and/or scripts and config files from Stack Overflow or ChatGPT?
Under-engineering is an easier problem to solve than over-engineering.
To handle errors in HTMX, I like to use config from [0] to swap responses into error dialogs and `hx-on-htmx-send-error` [1] and `hx-on-htmx-response-error` [2] to show the dialogs. For some components, I also use an `on-htmx-error` attribute handler:
// https://htmx.org/events/
// Run the element's `on-htmx-error` attribute handler when any htmx
// error event bubbles up to the body.
document.body.addEventListener('htmx:error', function (event: any) {
  const elt = event.detail.elt as HTMLElement
  const handlerString = elt.getAttribute('on-htmx-error')
  console.log('htmx:error evt.detail.elt.id=' + elt.getAttribute('id') + ' handler=' + handlerString)
  if (handlerString) {
    eval(handlerString)
  }
});
This gives very good UX on network and server errors.
[0]: https://htmx.org/quirks/#by-default-4xx-5xx-responses-do-not...
I think you’ll find that most people who love HTMX don’t ever want something that feels like writing an SPA.
There's no integration with routers, state stores, or RPC handlers. There are no DTOs shared between the frontend and backend. It has low coupling.
High cohesion and low coupling bring benefits in engineering productivity.
That, and water. Electricity: Google posted yesterday about scaling Kubernetes to 135,000 nodes, mentioning that each node has multiple GPUs drawing up to 2,700 watts.
Water is a personal beef of mine: Microsoft built a datacenter that used potable drinking water for backup cooling, consuming millions of liters during a warm summer. They treat the water and dump it back into the river. This was in 2021; I can imagine it's only gotten worse since: https://www.aquatechtrade.com/news/industrial-water/microsof...
Why do you think this is a lot of water? What are the alternatives to pulling from the local water utility and are those alternatives preferable?