[0] https://chromium.googlesource.com/chromium/src/+/HEAD/base/m...
> In addition to the low roundtrip time, the connections between your load balancer and application server likely have a very long lifetime, hence don’t suffer from TCP slow start as much, and that’s assuming your operating system hasn’t been tuned to disable slow start entirely, which is very common on servers.
A single HTTP/1.1 connection can only process one request at a time (unless you attempt HTTP pipelining), so if you have N persistent TCP connections to the backend, you can only handle N concurrent requests. Since all of those connections are long-lived and are sending at the same time, if you make N very large, you will eventually run into TCP congestion control convergence issues.
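To illustrate the concurrency cap (a minimal simulation, not real HTTP code): model each persistent backend connection as a semaphore permit, and observe that in-flight requests never exceed the pool size N.

```python
import threading, time

N = 2                        # number of persistent backend connections
pool = threading.Semaphore(N)
peak = 0
in_flight = 0
lock = threading.Lock()

def handle(_req):
    # Each HTTP/1.1 connection serves one request at a time, so a
    # request must wait for a free connection before it can proceed.
    global peak, in_flight
    with pool:
        with lock:
            in_flight += 1
            peak = max(peak, in_flight)
        time.sleep(0.05)     # simulated backend round trip
        with lock:
            in_flight -= 1

threads = [threading.Thread(target=handle, args=(i,)) for i in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(peak)                  # never exceeds N
```

Eight requests arrive, but at most two are ever in flight; the rest queue, which is exactly the head-of-line effect the pool size imposes.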
Also, I don't understand why the author believes HTTP/2 is less debuggable than HTTP/1; curl and Wireshark work equally well with both.
Both JSON (as defined in the RFC) and JSON5 have a nice property of being well-defined, meaning that you can use different libraries in different languages on different platforms to parse them, and expect the same result. "JSON but parser behaves reasonably (as defined by the speaker)" does not have this property.
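For a concrete example of how "reasonable" parser behavior already diverges across implementations: Python's standard `json` module accepts `NaN`/`Infinity` literals by default, even though RFC 8259 JSON does not define them, and making it strict takes extra work.

```python
import json

# Python's default parser accepts Infinity/NaN, which are not RFC JSON.
# Another language's "reasonable" parser may reject them -- same input,
# different result.
print(json.loads("Infinity"))  # inf

# Opting into strict RFC behavior requires a custom parse_constant hook:
def reject_nonstandard(name):
    raise ValueError(f"not RFC JSON: {name}")

try:
    json.loads("NaN", parse_constant=reject_nonstandard)
except ValueError as e:
    print(e)  # not RFC JSON: NaN
```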
The most up-to-date version of the HTTP/1.1 spec is RFC 9112, which says:
> Although the line terminator for the start-line and fields is the sequence CRLF, a recipient MAY recognize a single LF as a line terminator and ignore any preceding CR.
"MAY", of course, is different from "MUST" or "SHOULD", so I feel like the author's claim that implementations rejecting bare NLs are broken is at odds with the specification.
"This is the next step in the standardization process, and it comes with stronger guarantees of stability and intellectual property protection."
I understand stability, and in the general sense I see that people feel they need to protect their IP, but in this specific case what is meant by "intellectual property protection"?
[0] https://www.w3.org/policies/patent-policy/#sec-Requirements
I can imagine noisy RF, industrial environments, congested links, or new queueing behavior at the extremes in densely loaded switches, but usually there are strategies out there to reduce the congestion. External noise (factory, industrial, adversarial), sure: that is always going to exist.
Error correction codes at the transport (L4) level are generally only useful in very-low-latency situations, since if you can afford to wait one RTT, you can just have the original sender retransmit the exact packets that were lost, which is inherently more efficient than any ECC.
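The trade-off can be shown with the simplest possible FEC scheme, a single XOR parity packet per group (an illustrative sketch, not a real transport implementation): the receiver can rebuild one lost packet immediately, but the parity packet is transmitted even when nothing is lost, whereas retransmission only sends the packets that actually went missing, at the cost of at least one RTT.

```python
from functools import reduce

def xor_parity(packets: list[bytes]) -> bytes:
    # XOR all packets together (equal lengths assumed). XORing the
    # survivors with the parity packet reconstructs a single lost packet.
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

group = [b"aaaa", b"bbbb", b"cccc"]
p = xor_parity(group)          # sent alongside the group, win or lose

# Packet 1 is lost in transit; rebuild it from the survivors + parity:
recovered = xor_parity([group[0], group[2], p])
print(recovered)               # b'bbbb'
```

With retransmission, the sender would resend exactly `b"bbbb"` and nothing else, which is why ECC only wins when you can't afford to wait for the loss signal to make the round trip.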