For example, I recently stopped coding all the time and picked up a new hobby at night: writing. That simple change gave my brain a reset and actually improved my performance during the day. Same goes for planning: switching from digital to pen and paper breaks the routine and makes your brain engage differently. It's less about the tool and more about how the change wakes you up.
How does that work in practice? Swallowing a full 1M-token context window would take on the order of minutes, no? Is it possible to do this for, say, an entire codebase and then cache the results?
It's a paradigm that had (debatable) value in the late 1990s and early 2000s, when Unreal Engine and Unity were designed and OOP was all the rage ;)
In the meantime we've realized that not everything should in fact be an (OOP) object, and that runtime polymorphism is usually not needed to build games, even if the whole "animal => mammal => dog" classification thing at first sounds like a good match for modeling game objects.
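To make that concrete, here's a minimal, hypothetical sketch (names invented for illustration) of the data-oriented style people tend to reach for instead of a deep class hierarchy:

```cpp
#include <cstddef>
#include <vector>

// Classic OOP modeling would be something like:
//   struct Animal { virtual void update(float dt) = 0; };
//   struct Mammal : Animal { /* ... */ };
//   struct Dog : Mammal { /* ... */ };
// A data-oriented alternative: a "dog" is just an index into plain data
// arrays that get processed in bulk.

struct Position { float x, y; };
struct Velocity { float x, y; };

struct World {
    // Parallel arrays indexed by entity id (toy layout, no sparse handling).
    std::vector<Position> positions;
    std::vector<Velocity> velocities;
};

void integrate(World& w, float dt) {
    // One tight loop over plain data instead of a virtual update() per object.
    for (std::size_t i = 0; i < w.positions.size(); ++i) {
        w.positions[i].x += w.velocities[i].x * dt;
        w.positions[i].y += w.velocities[i].y * dt;
    }
}
```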
Runtime polymorphism is a lot like memory management in this respect. If memory management shows up in profiling at all, it's better to drastically reduce the frequency of allocations than to look for a faster general-purpose allocator (and if it doesn't show up in profiling, as it shouldn't, integrating a faster allocator doesn't make much sense either, because it won't make a difference).
Of course stuff like this is not easy to do late into a project because it would involve a complete redesign of the architecture and rewriting all the code from scratch - and that's why dropping in a faster allocator sometimes makes sense as a compromise. But it's not a fix for the underlying problem, just a crutch.
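For what it's worth, "reduce the frequency of allocations" usually means something like the toy pool below - a rough sketch with made-up names, one up-front reservation, and no exhaustion handling beyond returning nullptr, not a production allocator:

```cpp
#include <cstddef>
#include <vector>

// Toy fixed-size block pool: allocate once up front, then acquire/release
// reuse the same storage instead of hitting the general-purpose allocator
// on the hot path.
class BlockPool {
public:
    BlockPool(std::size_t block_size, std::size_t block_count)
        : storage_(block_size * block_count), block_size_(block_size) {
        free_.reserve(block_count);
        for (std::size_t i = 0; i < block_count; ++i)
            free_.push_back(storage_.data() + i * block_size_);
    }

    void* acquire() {
        if (free_.empty()) return nullptr;  // caller decides how to handle exhaustion
        void* p = free_.back();
        free_.pop_back();
        return p;
    }

    void release(void* p) { free_.push_back(static_cast<unsigned char*>(p)); }

private:
    std::vector<unsigned char> storage_;
    std::vector<unsigned char*> free_;
    std::size_t block_size_;
};
```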
Also, the bigger problem with indirect function calls is usually not the call overhead itself but that they present a hard optimization barrier for the compiler.
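Roughly what that barrier looks like, as a hypothetical before/after - the compiler can usually inline and vectorize the second loop, while the first has to go through an opaque indirect call (exact results depend on compiler and flags):

```cpp
#include <vector>

struct Damped {
    virtual ~Damped() = default;
    virtual float damp(float v) const { return v * 0.99f; }
};

// Indirect call per element: the compiler generally can't see through the
// vtable here, so no inlining and usually no vectorization of the loop body.
void damp_virtual(std::vector<float>& vs, const Damped& d) {
    for (float& v : vs) v = d.damp(v);
}

// Direct call per element: trivially inlined, and the loop usually vectorizes.
inline float damp_direct(float v) { return v * 0.99f; }

void damp_plain(std::vector<float>& vs) {
    for (float& v : vs) v = damp_direct(v);
}
```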
For inheritance, I 100% agree - composition all the way. I think it has value as an interface though, at least for quick bring-up and fast iteration. It can of course bring scaling challenges - I recently worked on a project that had hundreds of devs and more than 50k game components, and it brought all of the architectural and performance challenges you'd expect from this approach.
> Also, the bigger problem with indirect function calls is usually not the call overhead itself but that they present a hard optimization barrier for the compiler.
In the years I've had to think about this, I'd take a slightly different approach that should be more amenable to compiler optimization. I'd maintain separate lists for each concrete type and have a type-aware process function (via templates) which requires all overrides to be marked final. That should allow the compiler to do inlining, avoid indirections, etc. The major downside here is handing the dev a footgun - forget the final keyword, or pass the object in as something other than its concrete type, and performance will suffer. I'd probably still walk the vtable to see if a function has been overridden - it's unfortunate that there doesn't seem to be a way to do this without resorting to such tricks.
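Something like this rough sketch of the idea (hypothetical names; lifetime and error handling left out) - one list per concrete type, a templated process function, and final overrides so the compiler can devirtualize and inline:

```cpp
#include <vector>

struct GameObject {
    virtual ~GameObject() = default;
    virtual void update(float dt) = 0;
};

struct Turret final : GameObject {
    float heat = 0.0f;
    void update(float dt) final { heat = heat > dt ? heat - dt : 0.0f; }
};

struct Projectile final : GameObject {
    float x = 0.0f, vx = 10.0f;
    void update(float dt) final { x += vx * dt; }
};

// Type-aware processing: T is the concrete type and the override is marked
// final, so the compiler can devirtualize and inline update() in this loop.
template <typename T>
void process(std::vector<T>& objects, float dt) {
    for (T& obj : objects) obj.update(dt);
}

// One list per concrete type instead of a single std::vector<GameObject*>.
void tick(std::vector<Turret>& turrets, std::vector<Projectile>& projectiles, float dt) {
    process(turrets, dt);
    process(projectiles, dt);
}
```

The footgun is visible here too: drop the final, or funnel everything back through GameObject*, and you're back to plain virtual dispatch in the loop.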
Writing Rust for a toy project: "I MUST avoid allocation, and the dyn keyword, at all cost!!"
Writing the same toy project in Python: "lol just copy the string who cares"
If the overhead of virtual vs. non-virtual function calls is causing you performance problems, you're calling way too many functions per frame. Batch the objects by concrete type and process them all in a loop.
Both Unreal and Unity make heavy use of per-instance, per-frame virtual functions. It's a paradigm that has clear value. Why not make it cheaper at no cost to the dev? The option to take a systems approach to bottlenecks is still there afterwards.
But the real penalty for using virtual calls is the potential loss of inlining, and how much that hurts depends on how smart your compiler is.
There is no doubt in my mind that WOTC (let's be real, Hasbro) has enough self-awareness to have realized they were encroaching significantly on the interests of their core demographic. They chose to do so anyway, and they are backtracking out of self-preservation rather than a customer-first mindset.
I find this behavior shameful enough to warrant a legitimate, heartfelt apology. Instead, they present themselves as benevolent caretakers listening to their community's response. This comes across as tone-deaf because they've already lost the trust of the community and don't seem to have learned how to take ownership of that fact.
Still, this is a better result than if they'd stayed their advertised course. So, for that, I am thankful.
I'm a pretty huge proponent of AI-assisted development, but I've never found those 10x claims convincing. I've estimated that LLMs make me 2-5x more productive on the parts of my job that involve typing code into a computer, which is itself a small portion of what I do as a software engineer.
That's not too far from this article's assumptions. From the article:
> I wouldn't be surprised to learn AI helps many engineers do certain tasks 20-50% faster, but the nature of software bottlenecks mean this doesn't translate to a 20% productivity increase and certainly not a 10x increase.
I think that's an underestimation - I suspect engineers who really know how to use this stuff effectively will get more than a 20% increase - but I do think all of the other stuff involved in building software makes the 10x thing unrealistic in most cases.
I have found that, for myself, it helps motivate me, resulting in a net productivity gain from that alone. Even when it generates bad ideas, it can get me out of a rut and give me a bias toward action. It also keeps me from procrastinating on icky legacy codebases.