metadat · 5 months ago
The key takeaway is hidden in the middle:

> In extreme cases, on purely CPU bound benchmarks, we’re seeing a jump from < 1Gbit/s to 4 Gbit/s. Looking at CPU flamegraphs, the majority of CPU time is now spent in I/O system calls and cryptography code.

A roughly 4x jump in throughput, which should translate to a proportionate reduction in CPU utilization for UDP network activity. That's pretty cool, especially for better power efficiency on portable clients (mobile and notebook).

I found this presentation refreshing. Too often, claims about transitions to "modern" stacks are treated as inherently good and don't come with the data to back them up.

fulafel · 5 months ago
Any guesses on whether there are other cases where they get more than 4 Gbps but aren't CPU bound, or was this the fastest they got?
mxinden · 5 months ago
_Author here_.

4 Gbit/s is on our rather dated benchmark machines. If you run the below command on a modern laptop, you likely reach higher throughput. (Consider disabling PMTUD to use a realistic Internet-like MTU. We do the same on our benchmark machines.)

https://github.com/mozilla/neqo

cargo bench --features bench --bench main -- "Download"

a-dub · 5 months ago
i wonder if we'll ever see hardware accelerated cross-context message passing for user and system programs.
wbl · 5 months ago
Shared ring buffers for IO exist in Linux. I don't think we'll ever see them extend to DMA for the NIC, due to the security rearchitecture that would require. However, if the NIC is smart enough and the rules simple enough, maybe.
Veserv · 5 months ago
While their improvements are real and necessary for actual high speed (100 Gb/s and up), 4 Gb/s is not fast. That is only 500 MB/s. Something somewhere, likely not in their code, is terribly slow. I will explain.

As the author cited, a kernel context switch is only on the order of 1 us (which seems too high for a system call anyway). You can reach 500 MB/s even if you still call sendmsg() on literally every packet, as long as you average ~500 bytes/packet, which is ~1/3 of the standard 1500-byte MTU. So if you average MTU-sized packets, you get 2 us of processing on top of a full system call and still reach 4 Gb/s.

The old number of 1 Gb/s could be reached with an average of ~125 bytes/packet (~1/12 of the MTU), or with ~11 us of processing per MTU-sized packet.
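Spelling the arithmetic out as a quick sketch (the only inputs are the target rate and packet size; subtract the ~1 us syscall cost from the per-packet budget to get the processing headroom):

    // Per-packet time budget at a given throughput, assuming one sendmsg() per packet.
    fn budget_us(gbit_per_s: f64, bytes_per_packet: f64) -> f64 {
        let bytes_per_s = gbit_per_s * 1e9 / 8.0;        // e.g. 4 Gb/s -> 500 MB/s
        let packets_per_s = bytes_per_s / bytes_per_packet;
        1e6 / packets_per_s                              // microseconds available per packet
    }

    fn main() {
        // ~3 us per 1500-byte packet at 4 Gb/s: ~1 us syscall + ~2 us of processing.
        println!("4 Gb/s, 1500 B: {:.1} us/packet", budget_us(4.0, 1500.0));
        // ~12 us per 1500-byte packet at 1 Gb/s: ~1 us syscall + ~11 us of processing.
        println!("1 Gb/s, 1500 B: {:.1} us/packet", budget_us(1.0, 1500.0));
    }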

“But there are also memory copies in the network stack.” A trivial 3-instruction memory copy loop will run at ~10-20 GB/s, i.e. 80-160 Gb/s. In 2 us you can drive 20-40 KB of copies. You are arguing the network stack does 40-80(!) copies to put a UDP datagram, a thin veneer over a literal packet, onto the wire. I have written commercial network drivers. Even without zero-copy, with direct access you can shovel UDP packets into the NIC buffers at basically memory-copy speed.

“But encryption is slow.” Not that slow. Here are some AES-128-GCM performance numbers measured, by the looks of it, over 5 years ago. [1] The Intel i5-6500, a mid-range processor from 8 years ago, averages 1729 MB/s. At that rate, the encryption for a 500-byte packet takes about 300 ns, 1/6 of the remaining 2 us budget. Modern processors seem to be closer to 3-5 GB/s per core, or about 25-40 Gb/s, 6-10x the stated UDP throughput.

[1] https://calomel.org/aesni_ssl_performance.html

raggi · 5 months ago
> which seems too high for a system call anyways

spectre & meltdown.

> you get 2 us of processing in addition to a full system call to reach 4 Gb/s

TCP has route binding, UDP does not (connect(2) helps one side, but not both sides).
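For illustration, connect(2) on a UDP socket is a one-liner from Rust; a minimal sketch with a made-up test address:

    use std::net::UdpSocket;

    fn main() -> std::io::Result<()> {
        let sock = UdpSocket::bind("0.0.0.0:0")?;
        // connect(2) pins the remote address, so the kernel can cache the route
        // lookup instead of redoing it for every send -- but only on this side.
        sock.connect("192.0.2.1:4433")?;
        sock.send(b"hello")?; // send(2) instead of sendto(2) from here on
        Ok(())
    }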

> “But encryption is slow.” Not that slow.

Encryption _is slow_ for small PDUs, at least with the common constructions we're currently using. Everyone has essentially been optimizing for, and benchmarking, TCP with large frames.

If you hot-loop the state, as the micro-benchmarks do, you can do better, but you still see a very visible cost of state setup that only starts to amortize decently well above 1024-byte payloads. Remove the tightness of the loop (and with it a lot of cache efficiency) and this amortization boundary shifts quite far to the right, up into tens of kilobytes.
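To make that concrete, a rough sketch against the Rust aes-gcm crate (not any particular QUIC stack; absolute numbers will vary by CPU, and reusing a nonce is only tolerable here because nothing is real traffic):

    use aes_gcm::aead::{Aead, KeyInit};
    use aes_gcm::{Aes128Gcm, Key, Nonce};
    use std::time::Instant;

    fn main() {
        let key = Key::<Aes128Gcm>::from_slice(&[0u8; 16]);
        let nonce = Nonce::from_slice(&[0u8; 12]);
        let payload = vec![0u8; 1200]; // roughly one QUIC packet of plaintext
        let iters: u32 = 100_000;

        // "Hot loop": key schedule set up once, reused for every packet.
        let cipher = Aes128Gcm::new(key);
        let t = Instant::now();
        for _ in 0..iters {
            let _ = cipher.encrypt(nonce, payload.as_slice()).unwrap();
        }
        println!("reused state: {:?}/packet", t.elapsed() / iters);

        // Fresh state per packet: pay the key-schedule/setup cost every time.
        let t = Instant::now();
        for _ in 0..iters {
            let cipher = Aes128Gcm::new(key);
            let _ = cipher.encrypt(nonce, payload.as_slice()).unwrap();
        }
        println!("fresh state:  {:?}/packet", t.elapsed() / iters);
    }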

---

All of the above, plus the additional framing overheads, come into play. Hell, even the OOB data blocks are quite expensive to actually validate. It's not a good API for fixing this problem; it's just the API we have, shoved over BSD sockets.

And we haven't even gotten to buffer constraints and contention yet, but the default UDP buffer memory available on most systems is woefully inadequate for these use cases today. TCP buffers were scaled up over time, but UDP buffers basically never were; they're still conservative values from the late 90s/00s, really.
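For illustration, a sketch (using the socket2 crate) of how an application bumps its own receive buffer and why the sysctl cap matters; the 4 MiB figure is just an example:

    use socket2::{Domain, Socket, Type};

    fn main() -> std::io::Result<()> {
        let sock = Socket::new(Domain::IPV4, Type::DGRAM, None)?;
        println!("default receive buffer: {} bytes", sock.recv_buffer_size()?);

        // Ask for 4 MiB; on Linux the kernel silently clamps the request to
        // net.core.rmem_max unless that sysctl has been raised, which is the
        // conservative default being complained about above.
        sock.set_recv_buffer_size(4 * 1024 * 1024)?;
        println!("after request:          {} bytes", sock.recv_buffer_size()?);
        Ok(())
    }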

The API we really need for this kind of UDP setup is one where you can do something like fork the fd, connect(2) it with a full route bind, and then fix the RSS/XPS challenges that come from this splitting. After that we need a submission-queue API rather than another BSD-sockets ioctl-style mess (io_uring, RIO, etc.). Sadly none of this is portable.

On the crypto side there are KDF approaches which can remove a lot of the state cost involved. It's not popular, but some vendors are very taken with PSP for this reason; PSP becoming more well known or widely used was largely suppressed by its various rejections in the IETF and in Linux. Vendors doing scale tests with it have clear numbers, though: under high concurrency you can scale this much better than the common TLS or TLS-like constructions.

ori_b · 5 months ago
> spectre & meltdown.

I just measured. On my Ryzen 7 9700X, with Linux 6.12, it's about 50ns to call syscall(__NR_gettimeofday). Even post-spectre, entering the kernel isn't so expensive.
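A sketch of that kind of measurement, using the libc crate and the raw syscall so the vDSO fast path can't short-circuit it (not the exact harness used above):

    use std::time::Instant;

    fn main() {
        let mut tv = libc::timeval { tv_sec: 0, tv_usec: 0 };
        let iters: u32 = 1_000_000;
        let start = Instant::now();
        for _ in 0..iters {
            // syscall(2) bypasses the vDSO, so this really enters the kernel.
            unsafe {
                libc::syscall(
                    libc::SYS_gettimeofday,
                    &mut tv as *mut libc::timeval,
                    std::ptr::null_mut::<libc::c_void>(),
                );
            }
        }
        println!("~{:?} per syscall", start.elapsed() / iters);
    }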

Veserv · 5 months ago
I think you are just agreeing with me?

You are basically saying: “It is slow because of all these system/protocol decisions that mismatch what you need to get high performance out of the primitives.”

Which is my point. They are leaving, by my estimation, 10-20x performance on the floor due to external factors. They might be “fast given that they are bottlenecked by low-performance systems”, which is good, as their piece is not the bottleneck, but they are not objectively “fast”: the primitives can be configured to solve a substantially similar problem dramatically faster if integrated correctly.

vlovich123 · 5 months ago
There is no indication of what class of CPU they're benchmarking on. Additionally, this presumably includes the overhead of managing the QUIC protocol as well, given they mention encryption, which isn't relevant for raw UDP. And QUIC is known not to have a good story for NIC encryption offload at the moment, the way you can do kTLS offload for TCP streams.
Veserv · 5 months ago
Encryption is unlikely to be relevant. As I pointed out, doing it on any modern desktop CPU with no offload gets you 25-40 Gb/s, 6-10x faster than the benchmarked throughput. It is not the bottleneck unless it is being done horribly wrong or they do not have access to AES instructions.

“It is slow because it is being layered over QUIC.” Then why layer it over a bottleneck that slows you down by 25x? Second of all, they did not use to do that, and they still only got 1 Gb/s previously, which is abysmal.

Third of all, you can achieve QUIC feature parity (minus encryption which will be your per-core bottleneck) at 50-100 Gb/s per core, so even that is just a function of using a slow protocol.

Finally, the CPU class used in benchmarking is largely irrelevant because I am discussing 20x per-core performance bottlenecks. You would need to be benchmarking on a desktop CPU from 25 years ago to get that degree of single-core performance difference. We are talking iPhone 6 territory, a decade-old phone, for an efficient implementation to bottleneck on the processor at just 4 Gb/s.

But again, it is probably not a problem with their code. It is likely something else stupid happening on the network stack or protocol side of which they are merely a client.

philipallstar · 5 months ago
I really liked this. All Mozilla content should be like this. Technical content written by literate engineers. No alegria.
znpy · 5 months ago
It’s crazy that sendmmsg/recvmmsg are considered “modern”… I mean, they’ve been around for quite a while.

I was expecting to see io_uring mentioned somewhere in the linux section of the article.

Cloudef · 5 months ago
io_uring doesn't really have an equivalent [1]; it can't batch multiple UDP datagrams into a single submission, the best it can do is batch multiple sendmsg and recvmsg calls. GSO/GRO is the way to go. sendmmsg/recvmmsg are indeed very old, and some kernel devs wish they could sunset them :)

1: https://github.com/axboe/liburing/discussions/1346
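For reference, the GSO path boils down to one setsockopt plus writes larger than a single datagram; a Linux-only sketch via the libc crate (constants copied from linux/udp.h, segment size and address made up):

    use std::net::UdpSocket;
    use std::os::fd::AsRawFd;

    // From linux/udp.h; defined here in case the libc crate version lacks them.
    const SOL_UDP: libc::c_int = 17;
    const UDP_SEGMENT: libc::c_int = 103;

    fn main() -> std::io::Result<()> {
        let sock = UdpSocket::bind("0.0.0.0:0")?;
        sock.connect("192.0.2.1:4433")?;

        // Tell the kernel to slice each send into 1200-byte UDP datagrams (GSO),
        // so one syscall can push many wire packets.
        let gso_size: libc::c_int = 1200;
        let rc = unsafe {
            libc::setsockopt(
                sock.as_raw_fd(),
                SOL_UDP,
                UDP_SEGMENT,
                &gso_size as *const _ as *const libc::c_void,
                std::mem::size_of_val(&gso_size) as libc::socklen_t,
            )
        };
        assert_eq!(rc, 0, "UDP_SEGMENT not supported on this kernel");

        // One send(2) becomes 24 datagrams on the wire.
        sock.send(&vec![0u8; 24 * 1200])?;
        Ok(())
    }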

LtdJorge · 5 months ago
Will ZCRX help here? I’m not sure it supports UDP. It should provide great speed-ups but it requires hardware support which is very scarce for now.
jcranmer · 5 months ago
> After many hours of back and forth with the reporter, luckily a Mozilla employee as well, I ended up buying the exact same laptop, same color, in a desperate attempt to reproduce the issue.

Glad to know that networking still produces insanity trying to reproduce issues à la https://xkcd.com/2259/.

3form · 5 months ago
For that matter, the "The map download struggle, part 2 (Technical)" section at https://www.factorio.com/blog/post/fff-176 (near the end of the post) is a fun read.
Analemma_ · 5 months ago
Factorio's dev blog is a great deal of fun. It's on pause at the moment after the release of 2.0, but if you go through the archives there's great stuff in there. A lot of it is about optimizations which only matter once you're building 10,000+ SPM gigafactories, which casual players will never even come close to, but since crazy excess is practically what defines hardcore Factorio players it's cool to see the devs putting in the work to make the experience shine for their most devoted fans.
bobmcnamara · 5 months ago
Could be related to UDP checksum offload.

0x0000 is a special value for some NICs, meaning “please calculate it for me”.

One NIC years ago would set 0xFFFF for a bad checksum. At first we thought this was horrifyingly broken, but really you can just fall back to software verification for the handful of packets, legitimate or bad, that arrive with that checksum.
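A sketch of that software fallback, i.e. the plain RFC 1071 ones'-complement sum (the UDP pseudo-header is omitted here for brevity):

    /// RFC 1071 Internet checksum over a byte slice (the UDP pseudo-header bytes
    /// would need to be prepended for a real UDP verification).
    fn internet_checksum(data: &[u8]) -> u16 {
        let mut sum: u32 = 0;
        let mut chunks = data.chunks_exact(2);
        for chunk in &mut chunks {
            sum += u32::from(u16::from_be_bytes([chunk[0], chunk[1]]));
        }
        if let [last] = chunks.remainder() {
            sum += u32::from(u16::from_be_bytes([*last, 0]));
        }
        // Fold the carries back into the low 16 bits.
        while sum > 0xFFFF {
            sum = (sum & 0xFFFF) + (sum >> 16);
        }
        !(sum as u16)
    }

    fn main() {
        // A packet whose stored checksum is 0x0000 or 0xFFFF is the case where
        // some NICs punt and software has to recompute.
        let datagram = [0x12u8, 0x34, 0x56, 0x78, 0x9A];
        println!("checksum: {:#06x}", internet_checksum(&datagram));
    }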

Joel_Mckay · 5 months ago
It is funnier if you've ever dealt with mystery packet runts, as most network appliances still do not handle them very cleanly.

UDP/QUIC can DoS any system not based on a cloud deployment large enough to soak up the peak traffic. It is silly, but it pushes out any hosting operation that can't reach a disproportionate bandwidth asymmetry with the client traffic. i.e. fine for FAANG, but a death knell for most other small/medium organizations.

This is why many LANs still drop most UDP traffic, and rate-limit the parts needed for normal traffic. Have a nice day =3

Too · 5 months ago
Why are they supporting Android 5? It’s over 10 years old, and the devices still running it, after updates, are even older. Mobile devices from that era must have a really tough time browsing the modern bloated web. It shouldn’t even be possible to publish to the Play Store when targeting such an old API level. Who is the user base? Hackers who refurbished their old OnePlus, run it with the charger always plugged in, didn’t upgrade to a newer LineageOS, and installed an alternative app store, just for the sake of it? While novel, it’s a steep price to pay; as we see here, it slows down development for the rest of us.
mxinden · 5 months ago
Note that I (author) made a mistake. We (Mozilla) recently raised the minimum Android version from 5. See https://blog.mozilla.org/futurereleases/2025/09/15/raising-t... for details.
brycewray · 5 months ago
https://bugzilla.mozilla.org/show_bug.cgi?id=1979683

Still seeing this in Firefox with Cloudflare-hosted sites on both macOS and Fedora.

mxinden · 5 months ago
Author here. Thanks for raising this. I posted a comment. Maybe you can help us reproduce.

https://bugzilla.mozilla.org/show_bug.cgi?id=1979683#c3

brycewray · 5 months ago
I was the one who filed the original webcompat issue :-) ...

https://github.com/webcompat/web-bugs/issues/168913

Although the form result made it sound like a macOS-only issue, I actually have observed (and continue to observe) it on both macOS and Fedora.

EDIT: In the thread, I'm seeing the reference to how Firefox-on-QUIC works if one has IPv6. My ISP (Frontier FiOS) infamously doesn't support IPv6, so I'm out of luck there where Firefox is concerned.

Cloudef · 5 months ago
Interesting, I was not aware of GSO/GRO equivalents on Windows and macOS, though it's unfortunate that they seem buggy.
Avamander · 5 months ago
I wonder why Microsoft and Apple do not care about the proper functioning of their network stacks.

Pretty sure GSO/GRO aren't the only buggy parts either.