ekr____ commented on OpenSSH Post-Quantum Cryptography   openssh.com/pq.html... · Posted by u/throw0101d
pilif · 21 days ago
In light of the recent hilarious paper on the current state of quantum factoring[1], how urgent is the need for the current pace of post-quantum crypto adoption?

As far as I understand, the key material for any post-quantum algorithm is much, much larger than for non-quantum algorithms, which leads to huge overheads in network traffic and, of course, CPU time.

[1]: https://eprint.iacr.org/2025/1237

ekr____ · 21 days ago
As a number of people have observed, what's happening now is mostly about key establishment, which tends to happen relatively infrequently, and so the overhead is mostly not excessive. With that said, a little more detail:

- Current PQ algorithms, for both signature and key establishment, have much larger key sizes than traditional algorithms. In terms of compute, they are comparably fast if not faster.

- Most protocols (e.g., TLS, SSH) do key establishment relatively infrequently (e.g., at the start of the connection), so the key establishment size isn't a big deal, modulo some interoperability issues because the keys are big enough to push the handshake over the MTU, so it ends up spanning two packets (rough numbers in the sketch after this list). One important exception here is double-ratchet protocols like Signal or MLS, which change keys very frequently. What you sometimes see there is rekeying with PQ only occasionally (https://security.apple.com/blog/imessage-pq3/).

- In the particular case of TLS, message size for signatures is a much bigger deal, to a great extent because your typical TLS handshake involves a lot of signatures in the certificate chain. For this reason, there is a lot more concern about the viability of PQ signatures in TLS (https://dadrian.io/blog/posts/pqc-signatures-2024/). Possibly in other protocols too, but I don't know them as well.
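
To put rough numbers on the size points above, here is a back-of-the-envelope sketch in Python. The key and signature sizes come from RFC 7748 (X25519), RFC 8032 (Ed25519), FIPS 203 (ML-KEM), and FIPS 204 (ML-DSA); the 1500-byte MTU and the ~300 bytes for the rest of a ClientHello are illustrative assumptions, not measurements:

```python
# Wire sizes in bytes, per RFC 7748 (X25519), RFC 8032 (Ed25519),
# FIPS 203 (ML-KEM), and FIPS 204 (ML-DSA).
X25519_PK = 32
MLKEM768_PK = 1184                       # ML-KEM-768 encapsulation key
HYBRID_SHARE = X25519_PK + MLKEM768_PK   # X25519MLKEM768 client key share

ED25519_SIG = 64
MLDSA65_SIG = 3309                       # ML-DSA-65 signature

MTU = 1500         # typical Ethernet MTU (assumption)
CH_OVERHEAD = 300  # rough guess at the rest of a ClientHello (assumption)

ch = HYBRID_SHARE + CH_OVERHEAD
verdict = "spans two packets" if ch > MTU else "fits in one packet"
print(f"hybrid key share: {HYBRID_SHARE} B; approx ClientHello: {ch} B ({verdict})")

# A typical certificate chain carries several signatures, so replacing
# each classical-sized signature with ML-DSA-65 adds kilobytes per handshake:
for n in (3, 5):
    print(f"{n} signatures: Ed25519-sized {n * ED25519_SIG} B vs ML-DSA-65 {n * MLDSA65_SIG} B")
```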

ekr____ commented on New executive order puts all grants under political control   arstechnica.com/science/2... · Posted by u/pbui
pfannkuchen · 24 days ago
I’m not aware of anything stopping it except for perhaps how the system is set up.

Like if I want to fund a pet study that I’m interested in, can I just call up Harvard and offer the lab $1M to work on it? I’ve never heard of anyone doing that, but I’m not really sure why it doesn’t exist (which is why I’m asking if anyone else knows).

ekr____ · 24 days ago
More or less. Corporations fund research all the time. Just to pick a random example, check out the acknowledgements on this paper: https://eprint.iacr.org/2025/132

"This work was funded in part by NSF Award CNS-2054869 and gifts from Apple, Capital One, Facebook, Google, and Mozilla."

That said, private grant funding operates at a completely different scale from government grant funding. For example, NIH's annual budget is about $48 billion, and most of that goes to research (https://www.nih.gov/about-nih/organization/budget).

ekr____ commented on Where's Firefox going next?   connect.mozilla.org/t5/di... · Posted by u/ReadCarlBarks
giancarlostoro · 2 months ago
I mean it could take longer sure, but the funding would still be there ;)
ekr____ · 2 months ago
No, not really. It's just not even in the same order of magnitude in terms of level of effort.
ekr____ commented on Where's Firefox going next?   connect.mozilla.org/t5/di... · Posted by u/ReadCarlBarks
giancarlostoro · 2 months ago
If you cut that compensation in half, you could have funded a small team of devs to finish the Oxidation of Firefox and end up with a really interesting browser, and potentially a really rich GUI stack, JavaScript engine, and who knows what else for Rust itself, all of it production-ready and proven because of the nature of Firefox's reach.

There were major, noticeable speed differences in Firefox when they implemented key components in Rust. I say this having used Firefox since 2004.

ekr____ · 2 months ago
> If you cut that compensation in half, you could have funded a small team of devs to finish the Oxidation of Firefox and end up with a really interesting browser, and potentially a really rich GUI stack, JavaScript engine, and who knows what else for Rust itself, all of it production-ready and proven because of the nature of Firefox's reach.

I'm not sure exactly what you have in mind here, but this really isn't true for basically any plausible value of "finished Oxidation of Firefox".

As context for scale, during the Quantum Project, Mozilla imported two major pieces of Servo: Stylo and WebRender. Each of these involved sizable teams and took years of effort, and yet these components (1) started from pre-existing work that had been done for Servo and (2) represent only relatively small fractions of Gecko. Replacing most of the browser -- or even a significant fraction of it -- with Rust code would be a far bigger undertaking.

ekr____ commented on The fish kick may be the fastest subsurface swim stroke yet (2015)   nautil.us/is-this-new-swi... · Posted by u/bookofjoe
MengerSponge · 2 months ago
(2015 article)

I get that it's a quirk of the sport's history, but it's funny and dumb that swimming awards medals and records for being the fastest at a slower stroke. It's like if track meets had a 100m sprint, a 100m skip, and a 100m backwards run.

If I could change things in the world, I wouldn't eliminate the extraneous strokes in swimming, but I would include additional competitions in all the track distances: backwards running, handstand walk, and one-legged hopping.

ekr____ · 2 months ago
> I get that it's a quirk of the sport's history, but it's funny and dumb that swimming awards medals and records for being the fastest at a slower stroke. It's like if track meets had a 100m sprint, a 100m skip, and a 100m backwards run.

This is arguably what race walking is, though it's over longer distances.

ekr____ commented on The fish kick may be the fastest subsurface swim stroke yet (2015)   nautil.us/is-this-new-swi... · Posted by u/bookofjoe
aleph_minus_one · 2 months ago
> I'm curious why it's not a thing.

According to onlypassingthru in https://news.ycombinator.com/item?id=44542370 "The optics of an underwater race were not good".

Additionally, consider (as was pointed out by swarnie in https://news.ycombinator.com/item?id=44542285) that there are clothing restrictions in Olympic swimming; in my opinion this also contradicts the spirit of "freestyle".

ekr____ · 2 months ago
The usual argument for clothing restrictions (see also supershoes in running and various aero stuff in cycling) is that you want the sport to reward the best athletes rather than turning into a technological arms race. This is especially complicated in sports where people don't get to choose their own gear, so (for instance) whether you have access to the best shoes depends on who your sponsor is. Back when Nike was first rolling out supershoes, you would sometimes see athletes sponsored by other brands actually wear Nikes with the logo blacked out, because it was just such a big advantage.

As another comparison point, look at Formula 1, where technology is a huge part of the competition, with the result that a driver can be dominant one year and then fall way back the next because of some technological shift. Of course, even F1 does tinker with the rules a lot to try to preserve competition, as when they banned electronic stabilization.

ekr____ commented on Why SSL was renamed to TLS in late 90s (2014)   tim.dierks.org/2014/05/se... · Posted by u/Bogdanp
adgjlsfhk1 · 3 months ago
Any chance that can be used to undo lots of the ossification that made QUIC a UDP-based hack rather than its own layer 4 protocol?
ekr____ · 3 months ago
Basically none.

First, the success rate of getting any new IP-based protocol through most devices is incredibly low, especially now that NAT is so common.

Second, part of why QUIC runs over UDP is that the operating system generally won't let applications send raw IP datagrams (a small illustration below).

Third, even running over UDP, QUIC has nontrivial failure rates, and the browsers have to fall back to TLS over TCP.
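
As a sketch of the second point (Unix-like systems; protocol number 253 is reserved for experimentation per RFC 3692, and the exact error may vary by OS):

```python
import socket

# Any unprivileged process can open and bind a UDP socket:
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.bind(("127.0.0.1", 0))
print("UDP socket bound to", udp.getsockname())

# But a hypothetical new transport protocol would need a raw IP socket,
# which requires root or CAP_NET_RAW on most systems:
try:
    raw = socket.socket(socket.AF_INET, socket.SOCK_RAW, 253)
    print("raw socket opened (running with elevated privileges?)")
except PermissionError as err:
    print("raw socket refused:", err)
```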

ekr____ commented on Why SSL was renamed to TLS in late 90s (2014)   tim.dierks.org/2014/05/se... · Posted by u/Bogdanp
upofadown · 3 months ago
You don't have to have everyone switch over on the same day, as in your example. Once it is decreed that implementations are widespread enough, everyone can switch over to the newly introduced thing gradually. The "flag day" is when it is decreed that implementations no longer have to support some previously widely used method. Support for that method would then gradually disappear, unless there were some associated cryptographic emergency that could not be dealt with without changing the standard.
ekr____ · 3 months ago
Well, this is basically what we do, except that during the period before the flag day we try to negotiate the highest version both sides support (toy sketch after the list). This is far more practical for three reasons:

1. You actually get benefit during the transition period because you get to use the new version.

2. You get to test the new version at scale, which often reveals issues, as it did with TLS 1.3. It also makes it much easier to measure deployment because you can see what is actually negotiated.

3. Generally, implementations are very risk-averse and aren't willing to disable older versions until there is basically universal deployment of the new one; negotiation takes the pressure off this decision.
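
As a toy model of the negotiation in point 1 (this is just the selection logic, not TLS's actual wire encoding, which since 1.3 uses the supported_versions extension):

```python
def negotiate(client_versions, server_versions):
    """Pick the highest protocol version both sides support; fail closed otherwise."""
    common = set(client_versions) & set(server_versions)
    if not common:
        raise ValueError("no protocol version in common")
    return max(common)

# Before the flag day everyone still offers the old version, so updated
# pairs benefit immediately while legacy peers keep working:
assert negotiate(["1.2", "1.3"], ["1.2", "1.3"]) == "1.3"  # both updated
assert negotiate(["1.2", "1.3"], ["1.2"]) == "1.2"         # legacy server
```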

ekr____ commented on Why SSL was renamed to TLS in late 90s (2014)   tim.dierks.org/2014/05/se... · Posted by u/Bogdanp
da_chicken · 3 months ago
They still should have just called it TLS v4.0 instead of v1.0.

I'm halfway convinced that they have made subsequent versions v1.1, v1.2, and v1.3 in an outrageously stubborn refusal to admit that they were objectively incorrect to reset the version number.

ekr____ · 3 months ago
As I noted below, there was real discussion around the version number for TLS 1.3. I don't recall any such discussion for 1.1 and 1.2.
