I think of fusion the way I think of self-driving cars: perpetually a few years away from a 'real' breakthrough.
There is currently no reason to believe that LLMs cannot acquire the ability to write secure code in the most prevalent use cases. However, this is contingent upon the availability of appropriate tooling, likely a Rust-like compiler. Furthermore, there's no reason to think that LLMs will become useful tools for validating the security of applications at either the model or implementation level—though they can be useful for detecting quick wins.
It seems very short-sighted.
I think of it more like self-driving cars. I expect the error rate to quickly become lower than that of humans.
Maybe in a couple of years we’ll consider it irresponsible not to write security and safety critical code with frontier LLMs.
For some reason I think we’re all drawn to the idea of working with an older language. I wonder why!
I tend to make "sourdough discard crackers" if I have leftovers. It works well timing-wise, since I'm in the kitchen doing the initial stretching of my loaf anyway.
Rather than 10 of a given charger, consider a smaller number of GaN chargers with multiple ports, but be aware that many of the "smart" ones will reset all ports whenever any port is reconnected or renegotiates. I have a "smart" charger capable of outputting 100 W on one port or some mix of wattages across multiple ports (mainly for travel), and a "dumb" multi-port charger that I use both for slow-charging phones and for powering IoT devices that I don't want reset. The latter simply has multiple USB-A ports, which lets me charge almost anything - either with an A-to-C cable, or an A-to-whatever-that-device-needs cable (Micro-USB, Mini-USB, or something proprietary).
Then maybe another slow charger for all those miscellaneous things around the house.
I might also be hypersensitive to the cynicism. It tends to bug me more than it probably should.