Most profiling tools technically work, but they expect you to reason clearly, correlate timelines, and keep a full mental model in your head. That’s fine at 2pm. At 2am, it’s hopeless. hud’s goal is to reduce cognitive load as much as possible: show something visual you can understand almost immediately.
The UI is modeled after a trans-Pacific night cockpit: dark, dense, and built to be readable when you’re exhausted.
Under the hood, hud uses eBPF to track scheduling latency—how long worker threads are runnable but not running. This correlates well with blocking (though it’s not a direct measurement). You can attach to a running process with no code changes and get a live TUI that highlights latency hotspots grouped by stack trace.
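To make "runnable but not running" concrete, here is a rough user-space analogue in plain Rust—not how hud measures it (hud observes this from the kernel via eBPF), just the same idea: the gap between a thread becoming runnable and it actually getting CPU time.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Instant;

fn main() {
    let (tx, rx) = mpsc::channel();
    // The thread is runnable as soon as spawn() returns.
    let ready = Instant::now();
    thread::spawn(move || {
        // First thing the thread does is record how long it waited for CPU.
        let delay = ready.elapsed();
        tx.send(delay).unwrap();
    });
    let delay = rx.recv().unwrap();
    // On an idle machine this is microseconds; under CPU contention
    // (e.g. blocking work hogging the runtime's workers) it grows.
    println!("scheduling delay ≈ {:?}", delay);
    assert!(delay.as_secs() < 1);
}
```

When a Tokio worker thread shows a large gap like this, it usually means something on the runtime is hogging CPU or blocking, which is exactly the signal hud aggregates by stack trace.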
The usual suspects so far: std::fs, CPU-heavy crypto (bcrypt, argon2), compression (flate2, zstd), DNS via ToSocketAddrs, and mutexes held during expensive work.
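The last suspect on that list—a mutex held during expensive work—can be sketched in plain Rust (std threads rather than Tokio, purely to keep the example self-contained; the timings are illustrative):

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;
use std::time::{Duration, Instant};

fn main() {
    let lock = Arc::new(Mutex::new(0u64));
    let (tx, rx) = mpsc::channel();

    let holder = {
        let lock = Arc::clone(&lock);
        thread::spawn(move || {
            let mut guard = lock.lock().unwrap();
            tx.send(()).unwrap(); // signal: the lock is now held
            // The anti-pattern: expensive work while holding the lock.
            thread::sleep(Duration::from_millis(50));
            *guard += 1;
        })
    };

    rx.recv().unwrap(); // wait until the holder owns the lock
    let start = Instant::now();
    let value = *lock.lock().unwrap(); // blocks until the holder releases
    let waited = start.elapsed();
    holder.join().unwrap();

    // The waiter is runnable-but-stalled for roughly the whole critical
    // section—this is what shows up as a latency hotspot on its stack.
    println!("waited {:?} for the lock, value = {value}", waited);
    assert!(waited >= Duration::from_millis(30));
}
```

The usual fix is to do the expensive work outside the critical section and only take the lock for the brief read/write itself.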
Tokio-specific (worker threads identified by name). Linux 5.8+, root, and debug symbols required.
Very open to feedback—especially around false positives or flawed assumptions. Happy to answer questions.
But I'm curious what people think the equilibrium looks like. If the "two-tier system" (core revenue teams + disposable experimental teams) becomes the norm, what does that mean for the future of SWE as a career?
A few scenarios I keep turning over:
1. Bifurcation - A small elite of "10x engineers" command premium comp while the majority compete for increasingly commoditized roles
2. Craftsmanship revival - Companies learn that the "disposable workforce" model ships garbage, and there's renewed appreciation for experienced engineers who stick around
3. Consulting/contractor becomes default - Full-time employment becomes rare; most devs work project-to-project like other creative industries
The article argues AI isn't the cause, but it seems like it could accelerate whatever trend is already in motion. If companies are already treating engineers as interchangeable inventory, AI tooling gives them cover to reduce headcount further.

For those of you 10+ years into your careers: are you optimistic about staying in IC roles long-term, or does management/entrepreneurship feel like the only sustainable path?
The real innovation was convincing us this was inevitable.
But for a constant like φ, you’re right—(1 + sqrt(5)) / 2 is trivial and stable. No clever construction needed.
I’m curious if the geometric approach has any edge-case benefits—like better numerical stability—or if it’s purely for elegance.
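For what it's worth, the closed-form version really is both trivial and numerically well-behaved in double precision—a quick check (the identity φ² = φ + 1 is the standard sanity test):

```rust
fn main() {
    // Direct closed-form computation of the golden ratio.
    let phi = (1.0 + 5.0f64.sqrt()) / 2.0;

    // φ is defined by φ² = φ + 1; verify it holds to within f64 rounding.
    assert!((phi * phi - (phi + 1.0)).abs() < 1e-12);

    println!("phi = {phi}");
}
```

So any advantage of a geometric construction would have to be something other than accuracy at f64 precision.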
https://github.com/cong-or/hud