No better ways of doing strings and arrays; no need for something as modern as 1976's PL/I way of doing them.
I would not let the name distract you - it's equivalent to overloading, not "generics" (i.e. parametric polymorphism).
In that capacity I think it's quite a lot saner than C++. Having a closed set of overloads, no name mangling, and no complex name-lookup rules are all good things.
C99 §6.7.2.1.13
> Within a structure object, the non-bit-field members and the units in which bit-fields reside have addresses that increase in the order in which they are declared. A pointer to a structure object, suitably converted, points to its initial member (or if that member is a bit-field, then to the unit in which it resides), and vice versa. There may be unnamed padding within a structure object, but not at its beginning.
C99 §6.7.2.1.10
> An implementation may allocate any addressable storage unit large enough to hold a bit-field. If enough space remains, a bit-field that immediately follows another bit-field in a structure shall be packed into adjacent bits of the same unit. If insufficient space remains, whether a bit-field that does not fit is put into the next unit or overlaps adjacent units is implementation-defined. The order of allocation of bit-fields within a unit (high-order to low-order or low-order to high-order) is implementation-defined. The alignment of the addressable storage unit is unspecified.
Which is standardese for pretty much exactly everything you said :)
The consequence of the first rule is that there's only one sane way to lay out structs. The only freedom it leaves, as far as I can imagine, is adding extra padding - you can't swap the order of any members under these rules.
Their most "obnoxious" feature is that the layout is implementation defined, so they are not very portable between compilers and/or architectures.
Often used in an embedded setting to model hardware registers, when you can know/control what compiler implementation is used.
[1]: https://en.wikipedia.org/wiki/Bit_field#C_programming_langua...
Edit: more with the words.
C itself doesn't specify any ABI. A given platform simply uses one as a matter of convention.
The 3DS also had a note-keeping system built into the main menu and usable in any game, but I don't think many people bothered.
And it won't even be coy about it.
Family and IRL friends use WhatsApp.
The "even" makes the tone of your comment feel a tiny bit disrespectful towards AMD. By 2021, it was clear to me that AMD had their gloves off and were winning. Zen 3 was released in 2020 - the third generation of nearly flawless execution by AMD that Intel failed to respond to - outside of cutting the prices on some CPUs. For a while, Intel held onto the "fastest single-core speeds". Back in 2017, my first thought after being blown away by the performance of a first-gen Zen PC build was "I should buy shares in AMD" - AMD clearly had a superior product with an even better value proposition.
I would not say that the first gen of Zen was a clear winner over Skylake. It took a couple of iterations before AMD clearly took the lead. AMD was simply so far behind that several large generational improvements were needed to do better than Intel.
I think that's the essential point, really... It'd be hard to argue that the rest of Rust isn't overall "better" than C++, but the compromises made to flexibility and ergonomics to achieve memory safety in Rust are the biggest points of contention for Rust critics.