Readit News
mleonhard commented on PS5 now costs less than 64GB of DDR5 memory. RAM jumps to $600 due to shortage   tomshardware.com/pc-compo... · Posted by u/speckx
Cthulhu_ · 21 days ago
> What's next? Electricity?

That and water. Electricity: Google posted yesterday about scaling Kubernetes to 135,000 nodes, mentioning that each node has multiple GPUs drawing up to 2,700 watts.

Water, well, this is a personal beef, but Microsoft built a datacenter that used potable/drinking water for backup cooling, consuming millions of liters during a warm summer. They treat the water and dump it back into the river. This was in 2021; I can imagine it's only gotten worse since: https://www.aquatechtrade.com/news/industrial-water/microsof...

mleonhard · 21 days ago
Is any datacenter's water use significant compared to other industrial installations? According to that article, all datacenters in North Holland use 550 Ml/yr. North Holland has 2.95M residents [0], who use 129 l/person-day [1], or 47 kl/person-year, about 139,000 Ml/year for the whole region. So the data centers use an estimated 0.4% of the region's water. Data centers use about 3% of the Netherlands' electricity [2].

Why do you think this is a lot of water? What are the alternatives to pulling from the local water utility and are those alternatives preferable?

[0] https://en.wikipedia.org/wiki/North_Holland

[1] https://en.wikipedia.org/wiki/Water_supply_and_sanitation_in...

[2] https://www.dutchdatacenters.nl/en/statistics-2/

mleonhard commented on Futurelock: A subtle risk in async Rust   rfd.shared.oxide.computer... · Posted by u/bcantrill
mleonhard · 2 months ago
> // Start a background task that takes the lock and holds it for a few seconds.

Holding a lock while waiting for IO can destroy a system's performance. With async Rust, we can prevent this by making the MutexGuard !Send, so it cannot be held across an await. Specifically, because it is !Send, it cannot be stored in the Future [2], so it must be dropped immediately, freeing the lock. This also prevents Futurelock deadlock.

This is how I wrote safina::sync::Mutex [0]. I did try to make it Send, like Tokio's MutexGuard, but stopped when I realized that it would become very complicated or require unsafe.

> You could imagine an unfair Mutex that always woke up all waiters and let them race to grab the lock again. That would not suffer from risk of futurelock, but it would have the thundering herd problem plus all the liveness issues associated with unfair synchronization primitives.

The thundering-herd problem is when many waiters are woken at once even though only one of them can make progress. Here the cost is bounded: this simple Mutex has O(n^2) total scheduling cost, because every task must acquire and release the mutex, and each release re-adds all remaining waiting tasks to the scheduler queue. In practice, scheduling a task is very fast (~600ns). As long as polling the lock-mutex-future is fast and you have <500 waiting tasks, the O(n^2) cost is fine.

Performance is hard to predict. I wrote Safina using the simplest possible implementations and assumed they would be slow. Then I wrote some micro-benchmarks and found that some parts (like the async Mutex) actually outperform Tokio's complicated versions [1]. I spent days coding optimizations that did not improve performance (work stealing) or even reduced performance (thread affinity). Now I'm hesitant to believe assumptions and predictions about performance, even if they are based on profiling data.

[0] https://docs.rs/safina/latest/safina/sync/struct.MutexGuard....

[1] https://docs.rs/safina/latest/safina/index.html#benchmark

[2] Multi-threaded async executors require futures to be Send.

mleonhard commented on AI-generated 'poverty porn' fake images being used by aid agencies   theguardian.com/global-de... · Posted by u/KolmogorovComp
bradley13 · 2 months ago
I'm sure that some (few) of these NGOs do good work. However, sooner or later, they all seem to succumb to two problems: (1) excessive staff costs, and (2) a failure of incentives.

The second one is more insidious: If they solved the problem they address, they would no longer need to exist. They have no incentive to succeed. So they go around addressing individual problems, taking sad pictures, and avoid addressing systemic problems.

And if the systemic problems are insoluble? Then there is again an argument that the NGO should not exist. If the problem is truly insoluble, then likely the money could be better spent elsewhere.

mleonhard · 2 months ago
I spent some years working for a large NGO (Opportunity International) and living with people who work for NGOs.

NGOs must constantly raise money to fund their operations. The money that an NGO spends on fund-raising & administration is called "overhead". The percentage of annual revenue spent on overhead is the overhead percentage. Most NGOs publish this metric.

When a big donor stops contributing, the NGO must cut pay or lay off people and cut projects. I've never heard of an NGO "succumbing to excessive staff costs" like a startup running out of money. Financial mismanagement does occasionally happen and boards do replace CEOs. Board members are mostly donors, so they tend to donate more to help the NGO recover from mismanagement, instead of walking away.

NGOs pay less than other organizations, so they mostly attract workers who care about the NGO's mission. These are people with intrinsic motivation to make the NGO succeed in its mission. Financial incentives are a small part of their motivations. For example, my supervisor at Opportunity International refused several raises.

> So they go around addressing individual problems, taking sad pictures, and avoid addressing systemic problems.

Work on individual problems is valuable. For example, the Carter Center has prevented many millions of people from going blind from onchocerciasis and trachoma [0].

The Carter Center is not directly addressing the systemic problems of poverty and ineffective government health programs. That would take different expertise and different kinds of donors.

The world is extremely complicated and interconnected. The Carter Center's work preventing blindness directly supports worker productivity in many poor countries. Productivity helps economic growth and reduces poverty. And with more resources, government health programs run better.

Being effective in charity work requires humility and diligence to understand what can be done now, with the available resources. And then it requires tenacity to work in dangerous and backward places. It's an extremely hard job. People burn out. And we are all better off because of the work they do.

When we ignore the value of work on individual problems, because it doesn't address systemic problems, we practice binary thinking [1]. It's good to avoid binary thinking.

[0] https://en.wikipedia.org/wiki/Carter_Center#Implementing_dis...

[1] https://en.wikipedia.org/wiki/Splitting_(psychology)

mleonhard commented on Migrating from AWS to Hetzner   digitalsociety.coop/posts... · Posted by u/pingoo101010
mleonhard · 2 months ago
When I used AWS startup credits in 2019, the AWS console made it very difficult to estimate the bill after the credits ran out. I lost a lot of trust in AWS. Also, there were buried mines in the APIs, like the risk of bad logging running up a $70,000/day bill with CloudWatch Logs.

If I could go back and do it again, I would rent a single machine, deploy with ssh (git pull & docker-compose up), and back up to my laptop.

mleonhard commented on Cancellations in async Rust   sunshowers.io/posts/cance... · Posted by u/todsacerdoti
mleonhard · 2 months ago
I think that async in Rust has a significant devex/velocity cost. Unfortunately, nearly all of the effort in Rust libraries has gone into async code, so the async libraries have outpaced the threaded libraries.

There was only one threaded web server, https://lib.rs/crates/rouille . It has 1.1M lines of code (including deps). Its hello-world example reaches only 26Krps on my machine (Apple M4 Pro). It also has a bug that makes it problematic to use in production: https://github.com/tiny-http/tiny-http/issues/221 .

I wrote the https://lib.rs/crates/servlin threaded web server. It uses async internally. It has 221K lines of code. Its hello-world example reaches 102Krps on my machine.

https://lib.rs/crates/ehttpd is another one but it has no tests and it seems abandoned. It does an impressive 113Krps without async, using only 8K lines of code.

For comparison, the popular Axum async web server has 4.3M lines of code and its hello-world example reaches 190Krps on my machine.

The popular threaded Postgres client uses Tokio internally and has 1M lines of code: http://lib.rs/postgres .

Recently a threaded Postgres client was released. It has 500K lines of code: https://lib.rs/crates/postgres_sync .

There was no ergonomic way to signal cancellation to threads, so I wrote one: https://crates.io/crates/permit .

Rust's threaded libraries are starting to catch up to the async libraries!

---

I measured lines of code with `rm -rf deps.filtered && cargo vendor-filterer --platform=aarch64-apple-darwin --exclude-crate-path='*#tests' deps.filtered && tokei deps.filtered`.

I ran web servers with `cargo run --release --example hello-world` and measured throughput with `rewrk -c 1000 -d 10s -h http://127.0.0.1:3000/`.

mleonhard commented on As many as 2M Cisco devices affected by actively exploited 0-day   arstechnica.com/security/... · Posted by u/duxup
mleonhard · 3 months ago
I think Cisco SNMP vulnerabilities have been appearing for 20 years or more. I wish someone would add a fuzzer to their release testing script.
mleonhard commented on Some interesting stuff I found on IX LANs   blog.benjojo.co.uk/post/i... · Posted by u/todsacerdoti
api · 3 months ago
That's not great, but it's better than the opposite. I've been in networking for ages and have observed that most networking people will, as a rule, make networks as complicated as possible.

Why have one layer of NAT when you can have four or five? Why not invent a bespoke addressing scheme? Why not cargo cult the documentation and/or scripts and config files from Stack Overflow or ChatGPT?

Under-engineering is an easier problem to solve than over-engineering.

mleonhard · 3 months ago
I took an "Architecting on AWS" class and half of the content was how to replicate complicated physical networking architectures on AWS's software-defined network: layers of VPCs, VPC peering, gateways, NATs, and impossible-to-debug firewall rules. AWS knows their customers, though. Without this, a lot of network engineers would block migrations from on-prem to AWS.
mleonhard commented on Mesh: I tried Htmx, then ditched it   ajmoon.com/posts/mesh-i-t... · Posted by u/alex-moon
mleonhard · 3 months ago
How does one handle errors with MESH?

To handle errors in HTMX, I like to use config from [0] to swap responses into error dialogs and `hx-on-htmx-send-error` [1] and `hx-on-htmx-response-error` [2] to show the dialogs. For some components, I also use an `on-htmx-error` attribute handler:

    // https://htmx.org/events/
    document.body.addEventListener('htmx:error', function (event: any) {
        const elt = event.detail.elt as HTMLElement
        const handlerString = elt.getAttribute('on-htmx-error')
        console.log('htmx:error evt.detail.elt.id=' + elt.getAttribute('id') + ' handler=' + handlerString)
        if (handlerString) {
            eval(handlerString)
        }
    });
This gives very good UX on network and server errors.

[0]: https://htmx.org/quirks/#by-default-4xx-5xx-responses-do-not...

[1]: https://htmx.org/events/#htmx:sendError

[2]: https://htmx.org/events/#htmx:responseError

mleonhard commented on Mesh: I tried Htmx, then ditched it   ajmoon.com/posts/mesh-i-t... · Posted by u/alex-moon
lo_fye · 3 months ago
>> it allows us to write an HTML-first back-end in such a way that it feels like writing an SPA.

I think you’ll find that most people who love HTMX don’t ever want something that feels like writing an SPA.

mleonhard · 3 months ago
Yes. With HTMX, one can put a page definition and its endpoints in one file. It has high cohesion.

There's no integration with routers, state stores, or rpc handlers. There are no DTOs shared between the frontend and backend. It has low coupling.

High cohesion and low coupling bring benefits in engineering productivity.

u/mleonhard

Karma: 4062 · Cake day: February 26, 2008
About
Michael Leonhard. San Francisco. Software engineer and entrepreneur. Interested in many things.

michael206@gmail.com

http://www.tamale.net/
