Readit News
cleak commented on Things that helped me get out of the AI 10x engineer imposter syndrome   colton.dev/blog/curing-yo... · Posted by u/coltonv
simonw · a month ago
I found myself agreeing with quite a lot of this article.

I'm a pretty huge proponent of AI-assisted development, but I've never found those 10x claims convincing. I've estimated that LLMs make me 2-5x more productive on the parts of my job that involve typing code into a computer, which is itself a small portion of what I do as a software engineer.

That's not too far from this article's assumptions. From the article:

> I wouldn't be surprised to learn AI helps many engineers do certain tasks 20-50% faster, but the nature of software bottlenecks mean this doesn't translate to a 20% productivity increase and certainly not a 10x increase.

I think that's an underestimate - I suspect engineers who really know how to use this stuff effectively will get more than a 0.2x increase - but I do think all of the other stuff involved in building software makes the 10x thing unrealistic in most cases.

cleak · a month ago
I’ve found I do get small bursts of 10x productivity when trying to prototype an idea - much of the research on frameworks and such just goes away. Of course that’s usually followed by struggling to make a seemingly small change for an hour or two. It seems like the 10x number is just classic engineers underestimating tasks - making estimates based on peak productivity that never materializes.

I have found that, for me, it helps with motivation, resulting in a net productivity gain from that alone. Even when it generates bad ideas, it can get me out of a rut and give me a bias toward action. It also keeps me from procrastinating on icky legacy codebases.

cleak commented on Why quadratic funding is not optimal   jonathanwarden.com/quadra... · Posted by u/jwarden
cleak · 3 months ago
This looks interesting, but I have no idea what I’m looking at with the original paper. Could someone provide a simple summary that doesn’t rely on knowledge of Quadratic Voting?
cleak commented on As a developer, my most important tools are a pen and a notebook   hamatti.org/posts/as-a-de... · Posted by u/ingve
ednite · 3 months ago
Great discussion. In my opinion, the real takeaway isn’t about notebooks vs. digital tools, it’s about what shifts your mental gears. Every time we switch modes, it forces our brain to pay attention differently. That fresh context can boost focus, creativity, even recall.

For example, I recently stopped coding all the time and picked up a new hobby at night: writing. That simple change gave my brain a reset and actually improved my performance during the day. Same goes for planning: switching from digital to pen and paper breaks the routine and makes your brain engage differently. It's less about the tool and more about how the change wakes you up.

cleak · 3 months ago
The book Smarter Faster Better introduced me to the concept of disfluency - the idea that extra friction, such as awkward fonts, new environments, or different tools, pulls you out of autopilot mode and forces you to think in new ways. I haven’t seen references to it elsewhere, but it’s changed how I approach problems and learning over the last 9 years. Switching to a notebook is one great way I trigger this as well.
cleak commented on Claude 4 System Card   simonwillison.net/2025/Ma... · Posted by u/pvg
smokel · 3 months ago
> Gemini's 1M token context window is really unbeatable.

How does that work in practice? Swallowing a full 1M context window would take on the order of minutes, no? Is it possible to do this for, say, an entire codebase and then cache the results?

cleak · 3 months ago
I’m curious about this as well, especially since all coding assistants I’ve used truncate long before 1M tokens.
cleak commented on Claude 4   anthropic.com/news/claude... · Posted by u/meetpateltech
sali0 · 3 months ago
I've found myself having brand loyalty to Claude. I don't really trust any of the other models with coding, the only one I even let close to my work is Claude. And this is after trying most of them. Looking forward to trying 4.
cleak · 3 months ago
Something I’ve found true of Claude, but not other models, is that when the benchmarks are better, the real world performance is better. This makes me trust them a lot more and keeps me coming back.
cleak commented on Fast Virtual Functions: Hacking the VTable for Fun and Profit   medium.com/@calebleak/fas... · Posted by u/danny00
flohofwoe · a year ago
> Both Unreal and Unity make heavy use of per-instance per-frame virtual functions. It's a paradigm that has clear value.

It's a paradigm that had (debatable) value in the late 1990s and early 2000s when Unreal Engine and Unity were designed and OOP was all the rage ;)

In the meantime we've realized that not everything should in fact be an (OOP) object, and that runtime polymorphism is usually not needed to build games, even if the whole "animal => mammal => dog" classification thing at first sounds like a good match for modeling game objects.

Runtime polymorphism is like memory management: if memory management shows up in profiling at all, it's better to drastically reduce the frequency of allocations than to look for a faster general-purpose allocator (and if it doesn't show up in profiling, as should be the case, integrating a faster allocator doesn't make much sense either because it won't make a difference).

Of course stuff like this is not easy to do late in a project because it would involve a complete redesign of the architecture and rewriting all the code from scratch - that's why dropping in a faster allocator sometimes makes sense as a compromise, but it's not a fix for the underlying problem, just a crutch.

Also, the more important problem with indirect function calls is usually not the call overhead itself but that they present a hard optimization barrier for the compiler.
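
As a tiny illustration of "reduce allocation frequency" (a hypothetical example, not from the article): reuse a buffer's capacity across frames instead of allocating a fresh one each frame.

```cpp
#include <vector>

struct Particle { float x, y, vx, vy; };

// Hypothetical per-frame workspace: clear() keeps the vector's
// capacity, so after warm-up the frame loop allocates nothing.
void SimulateFrame(std::vector<Particle>& scratch, float dt) {
    scratch.clear();  // drops elements, keeps capacity
    // ... fill `scratch` with this frame's active particles ...
    for (Particle& p : scratch) {
        p.x += p.vx * dt;
        p.y += p.vy * dt;
    }
}
```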

cleak · a year ago
> It's a paradigm that had (debatable) value in the late 1990s and early 2000s when Unreal Engine and Unity were designed and OOP was all the rage ;)

For inheritance, I 100% agree. Composition all the way. I think it has value as an interface though - at least for quick bring-up and fast iteration. It can of course bring scaling challenges - I recently worked on a project that had hundreds of devs and more than 50k game components. That brought all of the architectural and performance challenges you'd expect from this approach.

> Also, the more important problem with indirect function calls is usually not the call overhead itself but that they present a hard optimization barrier for the compiler.

In the years I've had to think about this since, I'd take a slightly different approach that should be more amenable to compiler optimization: maintain separate lists for each concrete type and have a type-aware process function (via templates) that requires all overrides to be marked final. That should allow the compiler to inline, avoid indirections, etc. The major downside is handing the dev a footgun - forget the final keyword, or pass the object in as something other than its concrete type, and performance will suffer. I'd probably still walk the vtable to check whether a function has been overridden - it's unfortunate that there doesn't seem to be a way to do that without resorting to such tricks.
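
A minimal C++ sketch of that per-type-list idea (hypothetical names, not the article's code):

```cpp
#include <vector>

// Hypothetical base class with the classic per-frame virtual hook.
struct GameObject {
    virtual ~GameObject() = default;
    virtual void Update(float /*dt*/) { /* default: do nothing */ }
};

struct Projectile final : GameObject {
    float x = 0.0f, vx = 1.0f;
    // `final` lets the compiler prove no further override exists.
    void Update(float dt) final { x += vx * dt; }
};

// Type-aware processing: T is the concrete type, so with a final
// override the compiler can devirtualize and inline Update instead
// of dispatching through the vtable.
template <typename T>
void ProcessAll(std::vector<T>& objects, float dt) {
    for (T& obj : objects) {
        obj.Update(dt);
    }
}

// Usage: one homogeneous list per concrete type.
// std::vector<Projectile> projectiles(1000);
// ProcessAll(projectiles, 0.016f);
```

Drop the `final` (or store the objects as `GameObject*`) and the calls silently fall back to virtual dispatch - the footgun mentioned above.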

cleak commented on Fast Virtual Functions: Hacking the VTable for Fun and Profit   medium.com/@calebleak/fas... · Posted by u/danny00
andy_xor_andrew · a year ago
on this topic, it's interesting how my mindset changes when I'm writing Rust vs Python, with regards to vtables, dispatching, and allocation.

Writing Rust for a toy project: "I MUST avoid allocation, and the dyn keyword, at all cost!!"

Writing the same toy project in Python: "lol just copy the string who cares"

cleak · a year ago
I've actually shifted most of my personal dev to Rust now, so this vtable hack has become less relevant. Rust makes it very easy to avoid virtual functions. If I had to redo this bit of code in Rust, generics over a trait would boil any update function down to its concrete type and give (I'd expect) great performance with all the convenience of virtual functions.
cleak commented on Fast Virtual Functions: Hacking the VTable for Fun and Profit   medium.com/@calebleak/fas... · Posted by u/danny00
aappleby · a year ago
Having spent a good chunk of my career optimizing C++ game engines - don't do this.

If virtual vs non-virtual function calls are causing you performance problems, you're calling way too many functions per frame. Batch the objects by concrete type and process them all in a loop.

cleak · a year ago
Author here. I'm a bit confused by this response. Sure, going full ECS and processing all objects in a single (or a few) function calls is likely to be faster, but there's an obvious tradeoff in solution complexity.

Both Unreal and Unity make heavy use of per-instance per-frame virtual functions. It's a paradigm that has clear value. Why not make it cheaper at no cost to the dev? The option to take a systems approach to bottlenecks is still there afterwards.

cleak commented on Fast Virtual Functions: Hacking the VTable for Fun and Profit   medium.com/@calebleak/fas... · Posted by u/danny00
Dwedit · a year ago
Branch prediction for virtual calls has traditionally worked by looking at the address of the calling instruction and assuming the target will be the same as last time.

But the real penalty for virtual calls is the potential loss of inlining. It depends on how smart your compiler is.

cleak · a year ago
Cache locality and better branch prediction are the main reasons I went down this path in the first place. Sorting by the function to be called, rather than a grab bag of arbitrarily ordered `.Update()` calls (which is what happens in Unity, my main reference point here), is going to give some speed gains even with the indirection still there. Of course, eliminating some indirection is usually a win.
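
A sketch of that sorting idea (not the article's exact code, and it relies on implementation-specific object layout): group a heterogeneous list by vtable pointer so consecutive calls resolve to the same target.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

struct GameObject {
    virtual ~GameObject() = default;
    virtual void Update(float dt) = 0;
};

// Assumption: on common ABIs (e.g. the Itanium C++ ABI used by
// GCC/Clang), a polymorphic object's first pointer-sized bytes hold
// its vtable pointer, shared by all objects of the same concrete type.
std::uintptr_t VTableOf(const GameObject* obj) {
    std::uintptr_t vptr;
    std::memcpy(&vptr, obj, sizeof(vptr));
    return vptr;
}

// Grouping the update list by vtable pointer keeps the indirect
// branch target stable for long runs, helping the predictor and
// the instruction cache.
void SortByConcreteType(std::vector<GameObject*>& objects) {
    std::sort(objects.begin(), objects.end(),
              [](const GameObject* a, const GameObject* b) {
                  return VTableOf(a) < VTableOf(b);
              });
}
```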
cleak commented on Wizards of the Coast Releases SRD Under Creative Commons License (CC-BY-4.0)   dndbeyond.com/posts/1439-... · Posted by u/xaviex
SeanAnderson · 3 years ago
The community WOTC built over decades shouldn't have to give feedback that 85%+ identify with in order to achieve desirable results. If community sentiment is so lopsided, what was the rationale for making the decision in the first place, and how was the community's desire not implicitly understood?

There is no doubt in my mind that WOTC (let's be real, Hasbro) has enough self-awareness to have realized they were encroaching significantly on their core demographic. They chose to do so anyway and are backtracking out of self-preservation rather than a customer-first mindset.

I find this behavior shameful enough to warrant a legitimate, heartfelt apology. Instead, they present themselves as benevolent caretakers listening to their community's response. This comes across as tone-deaf because they've already lost the trust of the community and don't seem to have learned how to take ownership of that fact.

Still, this is a better result than if they'd stayed their advertised course. So, for that, I am thankful.

cleak · 3 years ago
I’d bet they anticipated the backlash, but thought it could be managed - some short term anger followed by long term profits. It so far hasn’t worked out that way, but these are still recent events; they may just wait to roll out changes more quietly.

u/cleak

Karma: 57 · Cake day: August 10, 2018