http://www.open-std.org/jtc1/sc22/wg14/
Since UB allows the compiler to do anything in those situations, we can reduce the amount of UB without breaking any existing "legal" code: just actually define more of the behavior. Any defined behavior is a valid refinement of "anything goes".
Whether a proposal is reasonable needs to be discussed in the standardization committee. All other discussions are a nice hobby, but eventually moot.
First, they let some people even take notice of this situation. Few developers read the standard, and even fewer write it, follow the discussions to change it (are those even open?), or write a compiler for it. The rationales are not even tracked [1]. It would actually be insanely hard to get a good understanding of these subjects by e.g. just reading the standard, without having this kind of discussion on forums used by more devs than just a few dozen compiler writers...
[1]: but while I'm thinking about it, an impressive independent book has been written by Derek Jones: The New C Standard: An Economic and Cultural Commentary http://www.knosof.co.uk/cbook/cbook.html
A huge number of seemingly-trivial optimizations depend on assuming that undefined behavior will never happen. The number of optimizations that don't depend on UB in any way is quite small. For example, if you want to get pedantic about it, even automatic promotion of local variables to registers is exploiting undefined behavior—who's to say you didn't have a pointer that just happened to point to one of them?
There are ways to apply the modern approach to optimization beyond O0 to C without importing every kind of UB from the language level. You just have to actually prove the properties you want to rely on, instead of relying on wishful "UB => authorized by the standard => programmer's fault if anything goes bad" thinking.
And promoting local variables to registers CERTAINLY does NOT depend on language-level UB. It would be permitted by the general as-if rule even if something seemed to stand in its way, which is not the case: an object doesn't have to have an address if nobody takes it, and random pointers have never been required to give access to all objects, especially ones that might never have an address at all. Plus, nobody ever expected that anyway.

People expect 2's complement. Or at least something that cannot result in nasal demons, and, given C's history, something that matches what the processor does. So 2's complement is at least not utterly stupid. Conflating the two is dishonest to the highest point -- except maybe if the only intended audience of the C language is now experts who e.g. write compilers. What a bright future that would be.
Hell, we went without the hypothetical flat memory model, even without strict aliasing, for maybe 20 years (and probably 30, to be honest), and this NEVER caused the kind of issues we are talking about. So don't pretend it did, just to dismiss the real issues. OK, even back then it was probably informal as hell and in some ways worse for experts, but the amount of exploited UB was also WAY smaller. Quantity matters in this area. And context too. Do you want secure OR fast embedded systems? I would prefer reasonably secure and reasonably fast. Certainly NOT fast to execute and exploit, or, more probably, fast to crash pathetically.
You know very well that compiling at O0 is not going to happen in prod on tons of projects.
Don't dismiss real concerns with false "solutions", especially when they come mixed with proof of your misunderstanding of the situation.