This is the first time I've heard of it, but it looks pretty cool. From what I can tell skimming through the RFC, it's a new congestion control standard that aims to minimise queuing latency. It co-exists with existing AQM technology, and is different in that it actively signals to other nodes about the state of its own queues.
> Below, we outline the three main components to the L4S architecture: 1) the Scalable congestion control on the sending host; 2) the AQM at the network bottleneck; and 3) the protocol between them.
> But first, the main point to grasp is that low latency is not provided by the network; low latency results from the careful behaviour of the Scalable congestion controllers used by L4S senders. The network does have a role, primarily to isolate the low latency of the carefully behaving L4S traffic from the higher queuing delay needed by traffic with preexisting Classic behaviour. The network also alters the way it signals queue growth to the transport. It uses the Explicit Congestion Notification (ECN) protocol, but it signals the very start of queue growth immediately, without the smoothing delay typical of Classic AQMs. Because ECN support is essential for L4S, senders use the ECN field as the protocol that allows the network to identify which packets are L4S and which are Classic.
> What L4S Adds to Existing Approaches: ... Diffserv ... State-of-the-art AQMs ... Per-flow queuing or marking ... Alternative Back-off ECN (ABE) ... Bottleneck Bandwidth and Round-trip propagation time (BBR)
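To make the "ECN field as identifier" point from the quote concrete, here's a minimal sketch of how a sender could mark its packets as L4S by setting the ECT(1) codepoint (per RFC 9331). The address and port are placeholders, and real L4S transports like TCP Prague do this in the kernel rather than per-socket like this:

```python
import socket

# ECN codepoints live in the low two bits of the IP TOS / Traffic Class byte
ECT_0 = 0b10  # Classic ECN-capable transport
ECT_1 = 0b01  # L4S identifier (RFC 9331)
CE    = 0b11  # Congestion Experienced (set by the bottleneck AQM)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Mark outgoing datagrams with ECT(1) so the network treats them as L4S.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, ECT_1)
sock.sendto(b"probe", ("198.51.100.1", 9000))
```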
My question, coming from a background with almost no networking, is this: there is a fundamental tradeoff between latency and throughput; what exactly is the tradeoff they're trying to make here? I understand the desire to move toward lower latency, but how does this affect their utilisation?
The idea is that there should not really be a trade-off. If you manage congestion well, you can send just what the network will allow, keeping throughput high while lowering latency. It could even increase throughput, because you avoid the default TCP congestion algorithm, which drops its rate after packet loss and therefore does not fully utilise the link.
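As a toy illustration (my own numbers, not any real kernel implementation), compare how a Classic Reno-style sender and a Scalable DCTCP-style sender respond to congestion signals:

```python
# Toy comparison: Classic (Reno-style) vs Scalable (DCTCP-style)
# reactions to congestion signals.

def classic_on_loss(cwnd):
    # Reno: halve the congestion window on every loss event, then climb
    # back slowly; the big sawtooth that underuses the link.
    return cwnd / 2

def scalable_on_marks(cwnd, marked_fraction):
    # DCTCP-style: back off in proportion to the fraction of packets
    # ECN-marked in the last RTT, so early marks cause small, frequent
    # adjustments instead of drastic ones.
    return cwnd * (1 - marked_fraction / 2)

print(classic_on_loss(100))          # 50.0
print(scalable_on_marks(100, 0.1))   # 95.0
```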
> there is a fundamental tradeoff between latency and throughput
Only for the unpredictable part of the load. If you can predict it, you can have both. For that, though, you need to control the entire stack, starting from the OS and the apps. That's what the "scalable congestion control on the sending host" is about.
L4S is an enhanced version of ECN (Explicit Congestion Notification). It spans layers 3 and 4 in the OSI model. ECN works by having a field in the IP header that says whether the packet experienced congestion on its way to the destination. An ECN-enabled router along the way changes the packet in transit, marking it as congested instead of dropping it. L4S is an advancement upon that: it repurposes a codepoint (ECT(1)) in the same two-bit ECN field to identify L4S traffic, and routers give such packets special handling, marking them immediately and more frequently. That allows finer-grained control over the congestion notification and more advanced algorithms to improve latency.
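Here's a rough sketch of that marking difference, with made-up thresholds rather than any real AQM's constants:

```python
# Toy sketch, not a real AQM: a Classic AQM smooths the queue signal
# before reacting, while an L4S-style AQM marks CE the moment
# queueing delay starts to build.

class ClassicAQM:
    def __init__(self, target_ms=15.0, alpha=0.1):
        self.target_ms = target_ms  # typical-ish Classic delay target
        self.alpha = alpha          # EWMA smoothing factor
        self.ewma = 0.0

    def should_mark(self, queue_delay_ms):
        # React to a smoothed average, so short bursts are tolerated
        # but the signal lags behind actual queue growth.
        self.ewma = (1 - self.alpha) * self.ewma + self.alpha * queue_delay_ms
        return self.ewma > self.target_ms

class L4SStyleAQM:
    def should_mark(self, queue_delay_ms):
        # Signal the very start of queue growth immediately,
        # against a shallow (~1 ms) threshold.
        return queue_delay_ms > 1.0
```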
The answer is buried over halfway down the fucking, droning malaise of an article:
>L4S stands for Low Latency, Low Loss, Scalable Throughput, and its goal is to make sure your packets spend as little time needlessly waiting in line as possible by reducing the need for queuing.
As such, here's an archive version of this crap because it does not deserve user traffic: https://archive.is/XWzbL
Maybe not everything is written with a target audience of Dalewyn? Maybe not everyone reading this article would know what a “packet” is?
HN’s tendency to conclude “this article is not a technical manual-style dispassionate recounting of the issue with a target audience of Me, therefore it is bad” is the one thing I truly dislike about this place.
1. This is Hacker News, not Facebook.

2. It's possible to answer the question within the first or second paragraph before going into the surrounding context, regardless of the intended audience. Laying out the entire context first and burying the answer over halfway down the article is plainly bad writing.
I'm probably missing something, because the ACK of a packet may already be enough to indicate congestion.