uvdn7 commented on The current state of LLM-driven development   blog.tolki.dev/posts/2025... · Posted by u/Signez
SkyPuncher · 15 days ago
> Learning how to use LLMs in a coding workflow is trivial. There is no learning curve. You can safely ignore them if they don’t fit your workflows at the moment.

That's a wild statement. I'm now extremely productive with LLMs in my core codebases, but it took a lot of practice to get it right and repeatable. There's a lot of little contextual details you need to learn how to control so the LLM makes the right choices.

Whenever I start working in a new code base, it takes a non-trivial amount of time to ramp back up to full LLM productivity.

uvdn7 · 15 days ago
Is the non-trivial amount of time significantly less than you trying to ramp up yourself?

I am still hesitant to use AI to solve problems for me. Either it hallucinates and misleads me, or it does a great job and I worry that my ability to reason through complex problems with rigor will degenerate. Once my ability to solve complex problems has degenerated, my patience diminished, and my attention span destroyed, I will be utterly reliant on a service that other entities own just to perform in my daily life. Genuine question: are people comfortable with this?

uvdn7 commented on A Mental Model for C++ Coroutine   uvdn7.github.io/cpp-coro/... · Posted by u/uvdn7
Rohansi · a month ago
I don't know how it works for C++ but you're not locked down to a single implementation with how C# does it. You can have it use different executors/schedulers, different task types, etc.
uvdn7 · a month ago
You are also not locked down in C++. There are already a handful of coroutine and async runtime implementations out there.
uvdn7 commented on A Mental Model for C++ Coroutine   uvdn7.github.io/cpp-coro/... · Posted by u/uvdn7
valorzard · a month ago
Note that with std::execution, c++26 will have a default async runtime (similar to how C# has a default async runtime).

This means that c++26 is getting a default coroutine task type [1] AND a default executor [2]. You can even spawn the tasks like in Tokio/async Rust. [3]

I’m not totally sure if this is a GOOD idea to add to the c++ standard but oh well.

[1] https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2025/p35...

[2] http://wg21.link/P2079R5

[3] https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2025/p31...

uvdn7 · a month ago
> I’m not totally sure if this is a GOOD idea to add to the c++ standard

What are the downsides? Naively, it seems like a good idea to both provide a coroutine spec (for power users) and a default task type & default executor.

uvdn7 commented on A Mental Model for C++ Coroutine   uvdn7.github.io/cpp-coro/... · Posted by u/uvdn7
mog_dev · a month ago
Interesting article, but you should use a spell checker. Typos are distracting.
uvdn7 · a month ago
I am not a native speaker, and I joke that my typos and grammar mistakes are evidence that none of my code or posts is AI generated. Sorry about the typos; I just fixed all the ones I could find. Hope it's better now.
uvdn7 commented on Show HN: Seven39, a social media app that is only open for 3 hours every evening   seven39.com... · Posted by u/mklyons
uvdn7 · 5 months ago
Neat. It certainly makes oncall and maintenance easier! It is likely more resource efficient too, e.g. by minimizing idle compute and maximizing cache hit rate.
uvdn7 commented on We built a self-healing system to survive a concurrency bug at Netflix   pushtoprod.substack.com/p... · Posted by u/zdw
posix_compliant · 9 months ago
What's neat is that this is a differential equation. If you kill 5% of instances each hour, the reduction in bad instances is proportional to the current number of instances.

i.e., if bad(t) is the fraction of bad instances at time t, with bad(0) = 0, then

d(bad(t))/dt = -0.05 * bad(t) + 0.01 * (1 - bad(t))

whose solution is

bad(t) = 0.166667 * (1 - e^(-0.06 t))

Which looks a mighty lot like the graph of bad instances in the blog post.

uvdn7 · 9 months ago
Love it! I wonder if the team knew this explicitly or intuitively when they deployed the strategy.

> We created a rule in our central monitoring and alerting system to randomly kill a few instances every 15 minutes. Every killed instance would be replaced with a healthy, fresh one.

It doesn't look like they worked out the numbers ahead of time.

uvdn7 commented on The Fastest Mutexes   justine.lol/mutex/... · Posted by u/jart
pizlonator · a year ago
Always cool to see new mutex implementations and shootouts between them, but I don’t like how this one is benchmarked. Looks like a microbenchmark.

Most of us who ship fast locks use very large multithreaded programs as our primary way of testing performance. The things that make a mutex fast or slow seem to be different for complex workloads with varied critical section length, varied numbers of threads contending, and varying levels of contention.

(Source: I wrote the fast locks that WebKit uses, I’m the person who invented the ParkingLot abstraction for lock impls (now also used in Rust and Unreal Engine), and I previously did research on fast locks for Java and have a paper about that.)

uvdn7 · a year ago
I was thinking the same. There are many mutexes out there, and some are better suited to certain workloads than others. DistributedMutex and SharedMutex come to mind (https://github.com/facebook/folly/blob/main/folly/synchroniz..., https://github.com/facebook/folly/blob/main/folly/SharedMute...). Just like hashmaps, it's rarely the case that a single implementation is best under _all_ possible workloads.

u/uvdn7

Karma: 1057 · Cake day: March 30, 2013
About
https://blog.the-pans.com/

Lu Pan – I work on distributed systems at Facebook. uvdn7 upside down is lupan.
