The list goes on and on and on. No, they will not just be replaced by whatever is producing loose AI facsimiles of the real world in a smartphone.
Sure, just the bugs in the link.
A request with both Content-Length and Transfer-Encoding should be rejected as a bad request.
The RFC is also not respected: "Proxies/gateways MUST remove any transfer-coding prior to forwarding a message via a MIME-compliant protocol."
"Content-Length: \r\n7" is also a bad request.
Just those mean whoever wrote the parser didn't even bother to read the RFC...
No parsing failure checks either...
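For concreteness, here is a minimal sketch (in Go, not taken from the linked code) of the framing checks a strict parser would apply. The function name and the simplified one-value-per-header map are made up for illustration; a real parser also has to handle repeated headers and comma-joined values.

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// RFC 9110 defines Content-Length as 1*DIGIT; anything else is malformed.
var validContentLength = regexp.MustCompile(`^[0-9]+$`)

// checkFraming rejects the ambiguous framings described above: both
// Content-Length and Transfer-Encoding present, or a Content-Length
// value that is not a plain digit string.
func checkFraming(headers map[string]string) error {
	cl, hasCL := headers["Content-Length"]
	_, hasTE := headers["Transfer-Encoding"]

	if hasCL && hasTE {
		return fmt.Errorf("400 Bad Request: both Content-Length and Transfer-Encoding")
	}
	if hasCL {
		if !validContentLength.MatchString(cl) {
			return fmt.Errorf("400 Bad Request: malformed Content-Length %q", cl)
		}
		if _, err := strconv.ParseUint(cl, 10, 63); err != nil {
			return fmt.Errorf("400 Bad Request: Content-Length out of range")
		}
	}
	return nil
}

func main() {
	// "Content-Length: \r\n7" style garbage is rejected...
	fmt.Println(checkFraming(map[string]string{"Content-Length": "\r\n7"}))
	// ...as is the smuggling-prone CL+TE combination...
	fmt.Println(checkFraming(map[string]string{
		"Content-Length":    "7",
		"Transfer-Encoding": "chunked",
	}))
	// ...while a well-formed request passes (<nil>).
	fmt.Println(checkFraming(map[string]string{"Content-Length": "7"}))
}
```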
That kind of person will mess up HTTP/2 as well.
It's not a protocol issue if you can't even be bothered to read the spec.
> The post makes the case that HTTP/2 is systematically less vulnerable than HTTP/1 to the kinds of vulnerabilities it's talking about.
Fair enough, though I disagree with that conclusion. I'm really curious what kinds of bugs the engineers above would introduce with HTTP/2; that will be fun to watch.
Some things must be encrypted well enough that even if the NSA records them now, it will still be unable to decipher them 10 or 20 years later.
Other things need to be encrypted only well enough that nobody can decipher them in close to real time. If an adversary brute-forces them after a week, the data will be useless by then.
Lightweight cryptography is for this latter use case.
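To make the distinction concrete, here is a back-of-envelope sketch of how key size maps to brute-force time. The 10^12 guesses/second adversary rate is an assumption for illustration, not a measured figure.

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// Assumed adversary: 10^12 key guesses per second (roughly a
	// large GPU/ASIC cluster; the exact rate is a guess).
	const guessesPerSecond = 1e12
	const secondsPerYear = 365.25 * 24 * 3600

	for _, bits := range []int{56, 64, 80, 128} {
		// A brute-force search finds the key after trying half
		// the keyspace on average.
		expected := math.Exp2(float64(bits)) / 2
		seconds := expected / guessesPerSecond
		fmt.Printf("%3d-bit key: ~%.1e s (~%.1e years)\n",
			bits, seconds, seconds/secondsPerYear)
	}
}
```

At that rate a 56-bit key falls in about half a day and a 64-bit key in a few months, while 80+ bits already outlives any realistic data-retention window; that gap between "secure for a week" and "secure for 20 years" is the margin being described here.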
GlobalFoundries, Micron, and Texas Instruments all come to mind.