rwmj · 2 years ago
TCP is also concerned with fairness (after Van Jacobson's famous paper[1]). We've long known that you can batter data through the network at speed if you don't care about other users. How does QUIC preserve fairness?

[1] https://inst.eecs.berkeley.edu/~cs162/fa23/static/readings/j...

Veserv · 2 years ago
Congestion control is largely orthogonal to transport protocol design. You can basically slap the general shape of any congestion control algorithm onto whatever transport protocol you want.

Their interaction is largely at the level of: “How easy does the protocol make it to estimate the data channel parameters?”, “What happens in the event of congestion-related failures (packet loss, delay, reorder, etc)?”, and “How do you efficiently recover and adapt to the new channel parameters?”

In all of these regards QUIC is quite a bit better than TCP.
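To make the "orthogonal" point concrete: a congestion controller is really just a window-update policy fed by ACK/loss signals, so the same loop can sit on top of TCP or QUIC. A toy AIMD (additive-increase, multiplicative-decrease) update, the general shape shared by Reno-style TCP and QUIC's default, might look like this (illustrative sketch only, not any real stack's code):

```python
def aimd_step(cwnd: float, acked: bool, mss: float = 1.0) -> float:
    """One AIMD congestion-window update, transport-agnostic.

    Additive increase: grow by roughly one MSS per round trip on success.
    Multiplicative decrease: halve the window on a loss signal.
    """
    if acked:
        return cwnd + mss * mss / cwnd   # congestion avoidance
    return max(cwnd / 2.0, mss)          # back off on loss, keep >= 1 MSS

# A loss event halves the window; clean ACKs grow it back slowly.
cwnd = 10.0
cwnd = aimd_step(cwnd, acked=False)      # halved to 5.0
for _ in range(100):
    cwnd = aimd_step(cwnd, acked=True)   # slow additive recovery
```

The transport protocol's job is just to deliver the `acked`/loss signal quickly and unambiguously, which is where QUIC's explicit ACK ranges help.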

londons_explore · 2 years ago
Buuuut.... If you're running a big web-app (eg. tiktok) you have a business incentive to not be fair.

If your servers can push more data down the user's slow internet connection, your service will get better load times, while competitors' services sharing the same network link (i.e. your brother in the bedroom next door on YouTube) will stutter and lose users.

actionfromafar · 2 years ago
All this QUIC talk makes me think of tape backups. Wonder if a tape backup could be devised which was inspired by network congestion algorithms - i.e. if a section of bad tape was discovered, the same data would get "resent" - to tape! Until it read back correctly (by a read head just behind the write head).

This way, the overall tape data density could be increased even if it would mean occasional dropouts.

Maybe this is already how modern tape drives work?
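As a toy model of that "resend to tape" idea (purely hypothetical, I have no idea whether any real drive works this way): write a block, verify it with a read head just behind the write head, and if the section is bad, leave the garbled copy behind and rewrite further down the tape until it verifies:

```python
def write_with_verify(tape: list, block: bytes, bad_spots: set,
                      start: int) -> int:
    """Append `block` starting at tape position `start`, skipping
    positions that fail read-after-write; returns the position just
    past the good copy. `None` marks a wasted (bad) section."""
    pos = start
    while pos in bad_spots:   # trailing read head reports a bad section:
        tape.append(None)     # leave the garbled copy in place...
        pos += 1              # ...and "resend" the block to the next spot
    tape.append(block)
    return pos + 1

# First two tape sections are bad; the data lands on the third try.
tape, bad = [], {0, 1}
end = write_with_verify(tape, b"data", bad, 0)
assert tape == [None, None, b"data"] and end == 3
```

The analogy to congestion control is loose (there's no feedback-driven rate adjustment here), but the retransmit-until-acknowledged shape is the same.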

bheadmaster · 2 years ago
According to this random document I found on the internet [0], in section 7., QUIC implements a congestion control system similar to TCP New Reno.

[0] https://quicwg.org/base-drafts/rfc9002.html

hlandau · 2 years ago
More precisely - it suggests one. Just like for TCP (see the myriad number of algorithms available for Linux's TCP stack) you can choose whatever congestion control strategy you want.
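For instance, on Linux the TCP algorithm is selectable per socket via the `TCP_CONGESTION` option (Linux-specific, hence the try/except; on other platforms this just returns None):

```python
import socket
from typing import Optional

def current_tcp_cc() -> Optional[str]:
    """Return the congestion control algorithm of a fresh TCP socket,
    or None where the TCP_CONGESTION option isn't available."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        raw = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
        return raw.split(b"\x00", 1)[0].decode()
    except (AttributeError, OSError):
        return None   # non-Linux, or option unsupported
    finally:
        s.close()

print(current_tcp_cc())   # e.g. "cubic" on a stock Linux kernel
```

QUIC stacks expose the same choice in userspace config rather than a kernel sysctl, which is part of why iterating on congestion control is easier there.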
01HNNWZ0MV43FF · 2 years ago
It seems like a bug in the middlebox if it requires all end points to cooperate and doesn't throttle anyone.

I thought the point of congestion control was to slow down pointless sending when an overloaded middle box is dropping our packets and saying "buddy take the hint", not to actually yield to other traffic? The network tells us how much capacity we have, trying to push more would just increase our own stream's packet loss. ... Right?

Borg3 · 2 years ago
Fairness? It seems no one gives a shit about it. Most consumers want their data now. I have ETTH 100Mbit (which is very fast by my standards) and I can tell the difference between day, evening and weekend just by pinging my nodes. RTT is stable off-hours, but during a normal day jitter kicks in. It's sad to see RTT jump from 9 to 90ms over a major IX. Everything seems to be overbooked horribly.

I know the Internet is a best-effort network (ATM, anyone remember it?) but I'm sure I would prefer a slower but more stable internet.

mort96 · 2 years ago
ISP oversubscription is a completely different issue from fair congestion control algorithms... every device involved can be using totally fair congestion control algorithms and an oversubscribed network will still see degraded performance. I mean, that's literally the point of fair congestion control algorithms: make it so that the oversubscribed network still works and everyone's performance degrades roughly evenly!
ithkuil · 2 years ago
Slower by how much?

Would 90ms be ok for you?

There you have it!

If you don't want to accidentally get used to 9ms, just add a traffic shaper on your end to slow down your network.

You can't change the fact that during peak activity there are so many people who send so many packets and saturate the bandwidth in such a way that the current fair congestion protocols allocate you the bandwidth and latency profile you observe.

ajb · 2 years ago
That's probably due to the absence of effective AQM at the bottleneck, rather than the transport protocols being unfair. See https://datatracker.ietf.org/doc/html/rfc7567 for the current best practice.
lxgr · 2 years ago
ISPs usually don't perform congestion control by simply dropping packets (and hoping that the affected flows will scale down fairly).

At the edge (where congestion is most likely), they'll usually enforce some type of QoS or just round-robin scheduling, which means whatever greedy congestion control algorithm you use, you'll at most be able to hog 1/N of the available throughput.

If the congestion happens further inside the ISPs backbone, the solution is usually to upgrade that component instead.

Where fairness can matter is inside a home network: If the upstream router doesn’t do anything about it, it is possible for a greedy flow to force out other competing flows. But you don’t need any fancy non-TCP protocols for that: Just open tons of TCP flows. The solution there is to yell at your roommates to stop the torrenting immediately, you have an important Zoom call, damnit!
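The "just open tons of TCP flows" trick falls straight out of per-flow fairness arithmetic: if the bottleneck gives each flow an equal slice, your share of the link is simply the fraction of flows you own. A quick sketch with made-up numbers:

```python
def share_of_link(my_flows: int, other_flows: int) -> float:
    """Per-flow-fair bottleneck: each flow gets an equal slice,
    so your share of capacity is the fraction of flows you own."""
    return my_flows / (my_flows + other_flows)

# One roommate on a Zoom call (1 flow) vs. a torrent using 40 flows:
print(round(share_of_link(1, 40), 3))    # the Zoom call's share
print(round(share_of_link(40, 1), 3))    # the torrent's share
```

Which is exactly why per-flow fairness at the endpoints isn't enough, and routers doing per-host or flow-queueing (fq_codel and friends) fix this without any yelling.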

flumpcakes · 2 years ago
I don't think this article was very good. It seems to be written by someone who doesn't really know what they're talking about, or by someone who does but "dumbed down" the material to the degree that its conclusions are inaccurate.

I think HTTP3 is probably doomed to an IPv6-like existence for a long while. While everyone claims that TCP is apparently "too slow", the vast majority of corporate/enterprise settings will just block it.

It seems like a technology built by the big players who want to shave a few cycles off each connection and save $millions rather than a practical standard.

Do I want to use HTTP3 at home? Yeah, sounds cool.

Will I be able to use it at work? Probably not for 5+ years.

01HNNWZ0MV43FF · 2 years ago
If you use Chromium or Firefox, you may have used HTTP/3 and not realized. It happened to me!

https://ifconfig.net/ uses HTTP/3 by virtue of Cloudflare's proxy using it. I'm on Firefox 115, half a year old, and the network inspector says I connect to ifconfig over HTTP/3.

The rollout's been really smooth and quiet. Browsers do the same "happy eyeballs" optimization as they did for HTTP/2 and SPDY, racing QUIC with TCP and using whichever one connects first, or maybe dropping TCP if QUIC connects a couple milliseconds later.

Hell, CF actually rolled out HTTP/3 before the pandemic, in 2019 https://blog.cloudflare.com/http3-the-past-present-and-futur...
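The racing itself is conceptually simple. Here's a network-free asyncio sketch of the happy-eyeballs pattern (the real browser logic is more involved, and the delays here are made up; `attempt` stands in for a connection attempt):

```python
import asyncio

async def attempt(name: str, delay: float) -> str:
    """Stand-in for one connection attempt (QUIC or TCP)."""
    await asyncio.sleep(delay)
    return name

async def race() -> str:
    """Start both attempts concurrently, keep whichever completes
    first, and cancel the loser."""
    tasks = [asyncio.create_task(attempt("quic", 0.01)),
             asyncio.create_task(attempt("tcp", 0.03))]
    done, pending = await asyncio.wait(
        tasks, return_when=asyncio.FIRST_COMPLETED)
    for t in pending:
        t.cancel()
    return done.pop().result()

print(asyncio.run(race()))   # whichever "connected" first wins
```

A real implementation would also remember (via Alt-Svc) that a host spoke HTTP/3 last time, so later visits can skip straight to QUIC.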

jeroenhd · 2 years ago
HTTP3 works fine. It's used by Cloudflare, which (unfortunately) means that half the websites you visit probably use it. It's available in nginx, caddy, and a bunch of other servers.

Sure, most servers will just support the protocol at the reverse proxy level and lack the optimisations that make HTTP3 faster for the big cloud providers that stand to gain most, but you can just set up HTTP3 on your own server if you want.

I don't know what you're using at work that prevents you from using HTTP3. I'm guessing you're referring to one of those awful middleboxes or some kind of firewall that blocks outgoing UDP traffic. Luckily, that stuff isn't relevant for most connections on the internet.

williamcotton · 2 years ago
If you put an HTTP3 nginx server in front of HTTP1 applications you get a lot of the UX benefits… parallel requests that make bundling (and, to an extent, tools like GraphQL and Falcor) somewhat obsolete, and in turn make applications much less complicated.
api · 2 years ago
HTTP3 doesn't have the same chicken-or-egg problem as IPv6. It runs over UDP. IPv6 is a full protocol rev which takes a lot longer to accomplish. Once a large multi-vendor network is deployed revving the base protocol is really hard.
Spivak · 2 years ago
Yeah but

> Palo Alto Networks recommends creating a security policy in the firewall to block the QUIC application. With the QUIC traffic getting blocked by the Firewall, the Chrome browser will fall back to using traditional TLS/SSL. Note that this will not cause the user to lose any functionality on their browser. Firewall gains better visibility and control of Google applications with or without the SSL decryption enabled.

https://docs.paloaltonetworks.com/best-practices/10-2/decryp...

https://knowledgebase.paloaltonetworks.com/KCSArticleDetail?...

Fortinet also recommends blocking it.

> However, due to the protocol is still in an experimental stage, it is not supported by Fortinet and causes some issues when SSL inspection profiles are needed to block specific websites or applications provided by Google itself.

https://community.fortinet.com/t5/FortiGate/Technical-Note-D...

And given that udp/443 is likely already blocked (prior to QUIC, the only reason you'd be hitting udp/443 was some malware trying to get past the firewall and hoping it's open without inspection turned on), you're fighting an uphill battle.

simiones · 2 years ago
Even if HTTP/3 had gone the route of registering QUIC as a new transport protocol instead of UDP, I think it would have still been at least one or two orders of magnitude easier to deploy than IPv6 was (though still much harder than going over UDP is, I'm not suggesting this was a bad choice in any way!).

They would have had problems with kernel updates and with devices that make assumptions and more paranoid security policies, but they would have probably been able to convince most people to allow their packets to flow after a few years. Changing IPv6 is a much more fundamental change to the Internet as a whole.

apitman · 2 years ago
> IPv6 is a full protocol rev which takes a lot longer to accomplish

And maybe will never happen. There are alternatives for a lot of what you get from IPv6, such as NAT (terrible but works for most people's needs) and SNI routing (I'm more bullish on this).

Are there any examples of protocols that took a similarly long time to achieve adoption but eventually did saturate the market? That would be hopeful.

ronsor · 2 years ago
Not to mention the companies backing it are so large and critical they can simply mandate it and enterprises will just have to deal with it, somehow.

Deleted Comment

hot_gril · 2 years ago
I'm annoyed that the comment above yours dismisses the entire article based on the false assumption that QUIC is its own L4 protocol. I was fooled for a second, then I actually read the article, where it clearly says QUIC runs over UDP. So there's no chicken-and-egg.
foobiekr · 2 years ago
ipv6 is very, very widely deployed in mobile networks. Download some net tools and check out your phone's interface table.

Also v6 did a bunch of things right. NDP and link local addresses for v6 actually work. The fragmentation changes were 100% the right thing to do. v6 extensions like SR and uSID are both solid and consolidate a lot of stupid stuff into something clean.

v6 had a bad, rocky start. A lot of that was completely inadequate attention to the transition, and the vendors fucked up almost everything for years, on their own gear and interop, but at this point v6 is fine.

Would I launch a v6 only service? No, because at this point the SPs are the problem. Verizon as of a few years ago and maybe now still didn't do v6 to any of their home users. So many things like that.

unethical_ban · 2 years ago
I finally got IPv6 working on my home network this week. /60 prefix from the ISP, SLAAC with PD for addresses, RA for DNS, Unbound DNS working just fine.

OPNsense works great! This is something I've wanted to get working for years.

lxgr · 2 years ago
> Verizon as of a few years ago and maybe now still didn't do v6 to any of their home users. So many things like that.

My FIOS home connection has had IPv6 since some time last year. Consumer ISPs are actually quite motivated to offer IPv6 since it gives them a path forward without an IPv4 address per customer better than CG-NAT. Many consumers also just use their ISP's CPE, which makes this shift easier as well.

I think the bigger obstacle will be managed corporate connections (for Wi-Fi or other client access connectivity) that are only upgraded once every few years and just don't support IPv6.

hot_gril · 2 years ago
Ipv6 isn't just an upgraded protocol over ipv4. It could've been, but it's not. It's a totally separate network with its own routing. So just changing all hardware along the way to support ipv6 doesn't mean it's adopted.
1vuio0pswjnm7 · 2 years ago
HTTP/3 may have good speed for webpages that trigger many HTTP requests to source ads and support tracking/telemetry from multiple, disparate servers controlled by different entities. (Makes sense since HTTP/3 was designed by an advertising company.) But HTTP/3 speed is not any better than HTTP/1.1 speed for downloading a file or a series of files from the same server in a single TCP connection. Some have said it's worse. (HTTP/1.1 was not designed by an advertising company so if it sucks for ads/tracking/telemetry then that makes perfect sense.)
mike_hock · 2 years ago
Yes, it saves server resources at extreme scales. The client-side savings are purely theoretical. If their JS bloatware crap takes 2s to respond to a click, the 3x 30ms network roundtrip to fetch a resource with a new HTTPS connection hardly matters.
unethical_ban · 2 years ago
They write that the TCP three-way handshake is SYN, ACK, SYN-ACK.

I don't trust it.

Deleted Comment

Slix · 2 years ago
I don't think this article is very well-written. It gets confused about whether QUIC has a handshake or not (it does). And it conflates zero round-trip time with combining the TCP/TLS handshakes together.
ktzar · 2 years ago
I came to mention the same. Diagrams have redundant information, examples are badly picked, there are sentences with little to no value... I don't know if it's lack of care, the author not writing in his native language, or excess of GPT.

> QUIC works on top of UDP. It overcomes the limitations of TCP by using UDP. It’s just a layer or wrapper on top of UDP.

Makes me want to stop reading.

animesh371g · 2 years ago
QUIC doesn't have a handshake which is another reason for it being fast
dsr_ · 2 years ago
The article says this repeatedly, but it's also wrong about that. QUIC doesn't use exactly the same three-way handshake at the beginning of the session -- but it uses a handshake that lasts at least 3 packets.

https://quic.xargs.org/

Ragnarork · 2 years ago
It's a bit perplexing that an article that makes claims about QUIC's speed over TCP has exactly zero benchmarks, numbers, anything to back that up besides theory.

I could be inclined to believe it but I'd like to know by which factor, in which circumstances, with real examples and numbers.

jeffbee · 2 years ago
What would you want to compare? Google Quiche vs. Linux? Chrome vs. Windows TCP? What version of Linux, Windows, or Chrome? Under how much delay, loss, etc? There's a large parameter space to sweep.
hot_gril · 2 years ago
Real-world test. Load Facebook on an iPhone in HTTP3 vs 2 or 1. Load YouTube in Chrome on Windows. At least tell me if the page ends up being rendered more quickly. This won't capture things like server load, though.
unethical_ban · 2 years ago
Uh, how about "something" vs. "nothing".

Start with QUIC vs. TCP. Same OS, same client/server software.

Edit: since the system says "I'm posting too quickly, slow down" despite not posting for an hour because they don't use accurate error messages here, let me say :

I find it ironic that so many keystrokes below are used in a meta-discussion about the quality of comments here, when a legitimate question about the topic was posed to others. Never mind that the article they seem to defend gets the order of a TCP 3-way handshake wrong.

And since I can't respond to them, I can only d*wnv*te or fl*g them.

mannyv · 2 years ago
QUIC: let's build an application-specific protocol on top of UDP and call it a day.

So really, is it faster? Does it reduce the amount of load on network devices? Does it allow a server to serve more connections more quickly than the equivalent HTTP/2.0 stack? Does it make my web app faster?

I mean, for 99% of Wordpress sites the problem is Wordpress, not the transport protocol. For a lot of the web the problem is client-side rendering issues.

QUIC may solve a problem, but is it a problem in real life or a thought experiment that got folded into a standard?

lxgr · 2 years ago
> Does it reduce the amount of load on network devices?

This is not a goal of QUIC: Ideally, network devices between two TCP hosts don't do anything to TCP segments that they wouldn't also do to UDP datagrams, so there shouldn't be much difference.

> I mean, for 99% of Wordpress sites the problem is Wordpress, not the transport protocol.

Addressing Wordpress problems or rendering issues is also not really a stated goal of QUIC.

It does solve some real TCP inefficiencies though, especially when used on older OS/kernel versions that'll probably never be updated to make use of some of the newer developments there (Bufferbloat avoidance is a big one, for example).

These might not be very high on your list, but they absolutely are on someone's. And as long as you don't host your own webserver, you don't need to do anything to benefit from the results, since your hoster/CDN/proxy will just transparently provide them for you.

winstonprivacy · 2 years ago
I don't have benchmarks but I built a protocol that QUIC later turned out to be eerily similar to. It was a direct replacement for TCP and we used it to provide an additional layer of encryption for all traffic on a given network between two points.

Latency was exceptionally improved. Web pages felt like they loaded faster and, at the very least, users could not tell that they were using an encrypted connection.

The protocol essentially worked using a fast-ACK scheme that would preemptively request retransmits (and was occasionally wrong). This enabled it to use the connectionless UDP protocol as the underlying transport mechanism. There is, of course, a cost for reduced latency: slightly higher bandwidth utilization on the network. This was suboptimal for long-lived streams (media and other downloads) so we tried to fail over to ordinary TCP in those instances.
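A receiver-side sketch of that fast-ACK idea (my guess at the shape, since the original isn't public): on every arrival, anything missing below the highest sequence number seen so far is immediately flagged for retransmit, rather than waiting out a timer, which is sometimes wrong for merely-reordered packets:

```python
def retransmit_requests(arrivals):
    """Given packet sequence numbers in arrival order, return the set
    of sequence numbers a fast-ACK receiver would preemptively flag
    for retransmit: anything missing below the highest seq seen."""
    got, flagged = set(), set()
    highest = -1
    for seq in arrivals:
        got.add(seq)
        highest = max(highest, seq)
        # Eagerly request everything that "should" have arrived by now,
        # instead of waiting on a retransmission timeout.
        flagged |= {s for s in range(highest) if s not in got}
    return flagged

# Packet 2 arrives before 1: we wrongly flag 1 (it was only reordered),
# trading a little spurious retransmit bandwidth for latency.
assert retransmit_requests([0, 2, 1, 3]) == {1}
```

That bandwidth-for-latency trade matches the parent's description: great for interactive traffic, wasteful for long bulk transfers.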

contravariant · 2 years ago
From the diagram they draw, QUIC is supposed to solve transport layer problems (reliability, congestion, security). Perhaps some session layer issues slip in as well, but it looks like you don't have to put HTTP on top of it.

I have no clue why it's built on UDP though, do we just accept that the time has passed for truly new network protocols?

standardly · 2 years ago
Combining the TLS and TCP handshakes into a single exchange saves probably less than 50ms of latency. For a major web hosting provider, this might add up. For most use cases, it's trivial.
miohtama · 2 years ago
Most of the web traffic is Big Tech and the big five, and they see significant cost and energy reductions using QUIC. Nobody cares about Wordpress sites, as they don't make a dent.
kd913 · 2 years ago
I feel the main limitation over here is hardware optimisation support.

With TCP you have the congestion control algo baked into hardware, plus TCP segmentation and checksum offload. You can pass things directly to the NIC for big latency and bandwidth wins, offloading processing away from the CPU.

A properly tuned system with a user-space networking stack, CTPIO, and hardware offload beats the pants off QUIC for latency.

It is possible to get some of the same benefits I guess with GSO. In any case, the slow hardware support here I suspect is a bottleneck. You may not get much benefit given more layers are binary/encrypted and not visible to hardware.

The above is not relevant for hyperscalers like Google I imagine where they can make the hardware and the sheer amount of customers offer bandwidth benefits.

jabart · 2 years ago
The internet itself is not properly tuned, and distance is an issue. On a local LAN, sure, a TCP-tuned system will do amazing. On the public internet, where who knows where your packet is going, you get an untuned path with latency TCP wasn't designed for. Also, a modern CPU with SIMD/AVX can handle a lot of traffic in user space. UDP also has a checksum in its packet header.
mannyv · 2 years ago
Uh, the fact is that the internet works and has been working pretty well for the last few decades.

While you may have issues with certain aspects of TCP and its behavior over WAN, your issues have no practical significance in real life.

kd913 · 2 years ago
For hyperscalers in the cloud and low latency finance, tcp is likely still king.

You do not want things running on CPU as that is compute that could be sold to someone else.

This needs an easy way to offload to a specialised hardware accelerator.

wmf · 2 years ago
> With TCP you have the congestion control algo baked in hardware

This part isn't correct, and you wouldn't want e.g. NewReno baked in to your NIC and preventing you from using CUBIC or BBR. It's true that TCP benefits more from NIC offloads than QUIC but most places (besides Netflix) aren't driving enough WAN traffic per server to matter.

jeffbee · 2 years ago
In my experience, congestion control in hardware would be the very last thing I would want. Everything needs to be pushed as far toward the edges of the system as possible. This is what quic offers.
kd913 · 2 years ago
For what? It has been done for a while now.

The fastest, lowest latency mechanisms will always offload to accelerator cards in hardware.

01HNNWZ0MV43FF · 2 years ago
I believe you but as a networking noob, could someone tell me how segmentation and old checksum algorithms are significant compared to TLS overhead?

Is it because TLS is hardware accelerated with AES instructions or something already?

kd913 · 2 years ago
TLS offload also exists and is normally implemented on NIC too.

Basic goal here is not to process this on CPU as that is slow and compute that could be used for user apps/customers.

b112 · 2 years ago
> With TCP you have the congestion control algo baked in hardware, tcp segment and checksum offload. You can pass things directly to the NIC for massive latency, bandwidth and offloading processing away from the cpu.

A lot of NICs are essentially just software modems, with the CPU handling this anyhow. Even server hardware sometimes has these lame NIC chips on them.

Be careful to read the specs of the hardware you buy, otherwise it will indeed be your CPU doing all that work.

oaiey · 2 years ago
The chart at the bottom: so Google switched it on, and then, let us guess, Facebook and Netflix. Afterwards, no growth for a year. Displacing looks different to me. HTTP/2 is growing at the cost of HTTP/1... so the only conclusion I can draw here is that HTTP/3 adoption has stalled. Don't read that negatively. The users who really needed it (like Google, Facebook, Netflix, ...) are using it, and the rest have it very low on the priority list, if at all.

I have my doubts that everyone needs HTTP/3. UDP traffic also has its disadvantages network-device-wise, and library availability and complexity should favor HTTP/1 and /2 for the foreseeable future.

freedomben · 2 years ago
Not disagreeing, but adding: Cloudflare also uses it between browser and proxy, which is a non-trivial amount of the small internet.
apitman · 2 years ago
I'm a fan of QUIC, but these articles are always heavy on explanations and light on data. I understand in theory how head-of-line blocking can cause serious issues on a lossy network, but by this point I would expect to see a ton of data backing that up in real-world usage from Google and Cloudflare.

One specific question I've had is on a lossy network are there really that many situations where you would have packet loss on one QUIC stream but not most/all the others? I don't doubt that's true but I would love to see a breakdown.

Also, what's the crossover point between opening multiple TCP streams and doing round robin across them? Maybe you only need 3-4 TCP connections to approximate the HOL advantages of QUIC.

I will admit fewer RTT handshakes is a more obvious win.

adgjlsfhk1 · 2 years ago
> One specific question I've had is on a lossy network are there really that many situations where you would have packet loss on one QUIC stream but not most/all the others?

This isn't the advantage. The advantage is that if you have 10 different streams and a low loss rate, the impact of each lost packet is much smaller. Consider a VoIP call with an architecture where each person's audio is served as a separate stream from a central server. With head-of-line blocking, every lost packet causes a slight stutter in the whole conversation, because even if the lost packet is from someone not talking, you have to wait for that packet of silence before decoding the packet from the person speaking. With QUIC, the extra latency only affects the individual stream, so the conversation won't stutter when the lost packet belonged to the person who wasn't talking.
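A back-of-the-envelope model of that: with one ordered connection, a lost packet stalls every stream behind it for the retransmit time; with independent streams only the hit stream stalls. A sketch with assumed numbers (one lost packet, retransmit costs one RTT):

```python
def delivery_delay(streams, lost_stream, rtt=50, multiplexed=True):
    """Extra delay (ms) each stream sees when `lost_stream` loses one
    packet. With a single ordered pipe (multiplexed=True, TCP-style),
    in-order delivery stalls every stream behind the hole; with
    independent streams (QUIC-style), only the hit stream waits."""
    if multiplexed:
        return {s: rtt for s in streams}   # head-of-line blocking
    return {s: rtt if s == lost_stream else 0 for s in streams}

call = ["alice", "bob", "carol"]
# TCP-style: Bob's lost silence packet stutters the whole call.
assert delivery_delay(call, "bob", multiplexed=True) == \
       {"alice": 50, "bob": 50, "carol": 50}
# QUIC-style independent streams: only Bob's audio is late.
assert delivery_delay(call, "bob", multiplexed=False) == \
       {"alice": 0, "bob": 50, "carol": 0}
```

So even at the same loss rate, the expected stall per stream drops roughly in proportion to the number of independent streams.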

apitman · 2 years ago
What I'm questioning is how likely it is to lose a single packet on one stream. I understand why in theory that would be bad and QUIC would help, but I'm curious how often it actually happens. Sibling provided at least one example but I would be interested to see some data.
KMag · 2 years ago
> One specific question I've had is on a lossy network are there really that many situations where you would have packet loss on one QUIC stream but not most/all the others? I don't doubt that's true but I would love to see a breakdown.

If an intermediate router is using Randomized Early Detection (RED) and is congested within the randomized packet drop region, then you could easily see one random packet in the middle of a stream dropped, with other streams unaffected.

tmikaeld · 2 years ago
QUIC is very exciting, after seeing what it did for latency in Cloudflare network and Cloudflare workers, I can't wait to finally see it in Deno 1.41[0].

[0] https://github.com/denoland/deno/pull/21942#issuecomment-192...