HTTP/1.1 is inherently more resistant to centralized political and social pressure than HTTP/2 and HTTP/3, since those have a requirement for CA-issued TLS baked into 99.9999% of user agents and libraries. It's also far more robust over long time periods.
I understand that for business and institutional use cases HTTP/1.1 is undesirable. But for human use cases, like long-lasting and robust websites that don't just become unvisitable every ~3 years (with CA cert expirations, etc.), HTTP+HTTPS on HTTP/1.1 is irreplaceable.
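To illustrate the point: a plain HTTP/1.1 static site needs no CA-issued certificate at all, so there is nothing to expire or renew. A minimal sketch in Go, where the port and directory are arbitrary assumptions for the example:

```go
package main

// Minimal sketch: serving static files over cleartext HTTP/1.1 with no TLS,
// and therefore no CA-issued certificate to renew. The port (8080) and the
// directory ("./public") are illustrative assumptions, not from the post.
import (
	"log"
	"net/http"
)

func main() {
	// http.FileServer serves the directory contents as-is; ListenAndServe
	// (as opposed to ListenAndServeTLS) speaks plain HTTP only.
	fs := http.FileServer(http.Dir("./public"))
	log.Fatal(http.ListenAndServe(":8080", fs))
}
```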
Browsers, lib devs, and web developers should consider the needs of human persons and not just corporate persons. This is a misguided declaration at best, and one whose context needs to be clearly defined.
Desync attacks do not affect static and public content, which is the only form of “long lasting and robust websites” available; so it is perfectly reasonable to continue serving such content over HTTP with nothing to fear from desyncs.
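For context, a desync (request smuggling) attack depends on a front-end and back-end disagreeing about where one HTTP/1.1 request ends and the next begins. A rough sketch of the well-known CL.TE ambiguity, with the host and smuggled text made up for illustration:

```go
package main

import "fmt"

func main() {
	// Sketch of the classic CL.TE ambiguity behind many HTTP/1.1 desyncs.
	// The request carries both Content-Length and Transfer-Encoding, which
	// different HTTP/1.1 parsers may prioritize differently:
	//   - a front-end honoring Content-Length sees a 13-byte body and
	//     forwards everything as one request;
	//   - a back-end honoring Transfer-Encoding sees the "0" chunk as the
	//     end of the body and treats "SMUGGLED" as the start of a second,
	//     attacker-controlled request.
	// Host and body text are illustrative assumptions, not from the article.
	raw := "POST / HTTP/1.1\r\n" +
		"Host: example.com\r\n" +
		"Content-Length: 13\r\n" +
		"Transfer-Encoding: chunked\r\n" +
		"\r\n" +
		"0\r\n" +
		"\r\n" +
		"SMUGGLED"
	fmt.Println(raw)
}
```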
There is an enormous campaign, both by companies and security enthusiasts, which promotes the view that serving static content over HTTP should, as the article says, "die".
This is James Kettle, who more or less invented HTTP/1.1 desync attacks, and has delivered several years of Black Hat talks about them; he's basically the unofficial appsec keynote at Black Hat.