Really glad to see it. It sounds like their reasoning is essentially the same as mine for implementing HQC alongside Kyber in Cyph three years ago: having a different theoretical basis makes it highly complementary to Kyber/ML-KEM, while its public key sizes are much more practical than those of the better-known code-based candidate Classic McEliece.
The title is subtly incorrect: HQC is a fifth post-quantum cryptography algorithm selected for standardization by NIST, but it's only the second KEM (encryption-like).
> NIST selects HQC as fifth algorithm for post-quantum encryption
The other 3 are digital signature algorithms, not encryption.
Given how quickly quantum computing is potentially coming, I wonder if we should/could find some way of using multiple quantum-resistant algorithms simultaneously as a default, in case a flaw is found in one of them after the limited time we have to verify that there are none.
Also - should we not be switching over to these algorithms starting like... now? Am I wrong that anyone collecting https traffic now will be able to break it in the future?
Correct, classical key exchange should be replaced with PQC KEMs ASAP, since it is currently vulnerable to store-now-decrypt-later attacks. Digital signature algorithms are less urgent, but considering how long it takes to roll out new cryptography standards, they should be preferred for any new designs. That said, the new PQC signatures are much larger than the 64-byte Ed25519 signatures that are most common today, and that could end up being very difficult to integrate into embedded systems with low bandwidth or limited memory, e.g. CAN bus secure diagnostics, Meshtastic nodes, etc.
Sorry if my question appears ignorant, but how quickly is quantum really coming? If your prior belief is "nothing practical is ever likely to come out of quantum computing", then so far there is nothing that should seriously make you reconsider it.
I do not say this lightly, having followed the academic side of QC for more than a decade.
Given how seriously the spooky parts of the US government are taking it, I would treat it with a similar level of urgency. While we obviously aren't privy to everything they know, their public actions indicate that they don't think it's a hypothetical risk, and that it's something we will need to be ready for within the next decade or so, given technology refresh cycles: they need to get these crypto algorithms in place now for devices that will still be in production use well into the 2030s.
There's also a good chance that the initial compromises of the classical algorithms won't be made public, at least not right away. There are multiple nation-state actors working on the problem in secret, in addition to the academic and commercial entities working on it more publicly.
It's a reasonable question. The need for quantum-resistant crypto isn't because a practical attack is right around the corner (although I do really enjoy the analogy that predicting when we'll get QC-based crypto attacks is like predicting when humans would land on the moon by looking at the altitude of the highest manned flight). It has more to do with the level of effort it takes to replace infrastructure as critical as cryptography.
Imagine if next year, via magic wand, all current TLS deployments were completely and totally broken, such that the whole of the internet using TLS became effectively unencrypted. How much damage would that do to the ecosystem? Now suppose we had also just invented a new protocol that works: how long would it take to deploy it to just 50% of systems? To 80%? And how long would it take to replace the long tail?
I'll also leave store-now-decrypt-later for another commenter.
Based on my own following of QC, I agree with you that we're still pretty far away from QC attacks on current crypto.
The problem is, this sort of question suffers from a lot of unknown unknowns. How confident are you that we don't see crypto broken by QC in the next 10 years? The next 20? Whatever your confidence, the answer is probably "not confident enough" because the costs of that prediction being wrong are incalculable for a lot of applications.
I'd say I'm 99% confident we will not see QC break any crypto considered secure now in the next 20 years. But I'll also say that the remaining 1% is more than enough risk that I think governments and industry should be taking major steps to address that risk.
> The essence of our protocol upgrade from X3DH to PQXDH is to compute a shared secret, data known only to the parties involved in a private communication session, using both the elliptic curve key agreement protocol X25519 and the post-quantum key encapsulation mechanism CRYSTALS-Kyber. We then combine these two shared secrets together so that any attacker must break both X25519 and CRYSTALS-Kyber to compute the same shared secret.
You generally don't want to layer encryption like that. It apparently really does introduce new kinds of attacks, which have been observed in the real world.
The pattern typically used for this is to split the key for the high-speed symmetric encryption into multiple parts, each of which is encrypted with a separate public-key system: one classical, one (or two, now?) post-quantum. As long as each part of the key is big enough, an attacker still needs to crack all of the separate public-key algorithms, but this doesn't introduce any of the layering weaknesses.
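A minimal sketch of that split-key pattern, assuming the two public-key encryptions are caller-supplied; `encrypt_classical` and `encrypt_pq` here are hypothetical placeholders, not real library calls:

```python
import os

def split_key(key: bytes) -> tuple[bytes, bytes]:
    """Split a symmetric key into two shares; both are required to recover it."""
    share1 = os.urandom(len(key))                       # uniformly random pad
    share2 = bytes(a ^ b for a, b in zip(key, share1))  # key XOR pad
    return share1, share2

def combine_shares(share1: bytes, share2: bytes) -> bytes:
    """XOR the shares back together to recover the original key."""
    return bytes(a ^ b for a, b in zip(share1, share2))

def protect_session_key(session_key, classical_pub, pq_pub, encrypt_classical, encrypt_pq):
    # Each share is encrypted under a different public-key scheme, so an
    # attacker has to break both schemes to learn anything about the key.
    s1, s2 = split_key(session_key)
    return encrypt_classical(classical_pub, s1), encrypt_pq(pq_pub, s2)

# Sanity check of the splitting itself:
key = os.urandom(32)
s1, s2 = split_key(key)
assert combine_shares(s1, s2) == key
```

In practice, hybrid KEM constructions (including Signal's PQXDH) usually feed both shared secrets through a KDF rather than literally splitting a key, but the security goal is the same: the attacker must break every component scheme to recover the session key.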
It's not clear whether NIST's recommendations are influenced by their relationship with the NSA (who would have an incentive to pressure NIST to recommend an algorithm which they--and hopefully they alone--can break). So perhaps the reasonable move is to use only one NIST-recommended algorithm and have the other layer be one that they do not recommend.
In Cyph, I went with Kyber (lattice-based) combined with HQC (code-based) for encryption. NTRU Prime also may be a good option if Kyber is ever broken in a way that doesn't fundamentally break all lattice crypto.
For signing, I treat Dilithium (lattice-based) as "standard security" and SPHINCS+ (hash-based) as "high security". In particular, the former is used for end user public keys and certificates, while the latter is used for code signing where the larger public key and signature sizes are less of an issue.
In all cases, I wouldn't use PQC without combining it with classical crypto, in Cyph's case X25519/Ed25519. Otherwise you run the risk of a situation like SIDH/SIKE where future cryptanalysis of a particular PQC algorithm finds that it isn't even classically secure, and then you're hosed in the present.
> In all cases, I wouldn't use PQC without combining it with classical crypto
With hash-based signatures, hybridization isn't required. They are the most powerful signature scheme approach by far. The security assumption hash-based signatures rely on is also shared with every other signature scheme (hashes are what are signed). Other schemes come with additional assumptions.
It's unfortunate that hash-based public key exchange is not practical. Merkle Puzzles require 2^n data exchange for 2^2n security.
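To illustrate how little machinery hash-based signatures need, here's a toy sketch of a Lamport one-time signature; each key pair may sign exactly one message, and real schemes like SPHINCS+ build many-time signatures on top of ideas like this:

```python
import hashlib, secrets

H = lambda data: hashlib.sha256(data).digest()

def keygen():
    # 256 pairs of random secrets, one pair per bit of the message digest.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(sk, msg: bytes):
    digest = H(msg)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
    # Reveal one preimage per digest bit; never sign twice with the same key.
    return [sk[i][bit] for i, bit in enumerate(bits)]

def verify(pk, msg: bytes, sig) -> bool:
    digest = H(msg)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
    return all(H(sig[i]) == pk[i][bit] for i, bit in enumerate(bits))

sk, pk = keygen()
sig = sign(sk, b"hello")
assert verify(pk, b"hello", sig)
assert not verify(pk, b"goodbye", sig)
```

The only assumption it rests on is the hash function, which every other signature scheme also depends on anyway.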
It’s not that off-topic IMO: NIST (along with all the other federal agencies) is finalizing restructuring/reorganization plans this week in response to one of last month’s executive orders. It’s not clear when the extent of the cuts will be announced, or what the criteria will be for which programs are affected. Though in the Commerce secretary’s confirmation hearing there was mostly goodwill towards the kind of light-touch, non-regulatory standardization approach that NIST tries to take.
Meta: I can understand the math problems behind RSA and DH, and the general concepts of EC, but I have yet to develop an intuitive understanding of any of the post-quantum algorithms, even after reading and watching a bunch of videos trying to explain them.
The intuition for ML-KEM-style lattice cryptography is actually kind of straightforward. Generate a secret vector x and a random matrix A; compute Ax, then corrupt it with a random vector of small errors e, so instead of Ax=b you have Ax+e=b.
Solving Ax=b is like week 2 of undergraduate linear algebra. Solving Ax+e=b, in a lattice setting, is Hard in the same sense factoring is: as you run the steps of elimination to attempt to solve it, the errors compound catastrophically.
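A toy version of that setup, with tiny made-up parameters just to show the shape of the problem (nowhere near real ML-KEM dimensions):

```python
import numpy as np

q, n, m = 3329, 8, 16          # toy dimensions; 3329 is the ML-KEM modulus
rng = np.random.default_rng(0)

A = rng.integers(0, q, size=(m, n))   # public random matrix
x = rng.integers(0, q, size=n)        # secret vector
e = rng.integers(-2, 3, size=m)       # small random errors
b = (A @ x + e) % q                   # published alongside A; x and e stay secret

# Without e, x falls out of Gaussian elimination mod q in milliseconds.
# With e, every elimination step mixes and amplifies the errors, which is
# (informally) why recovering x becomes infeasible at realistic sizes.
print(A.shape, b.shape)
```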
Interestingly, you can get information-theoretically secure constructions using lattices by tuning the parameters. For example, if you make the `A` matrix large enough, it becomes statistically unlikely that more than one solution exists (e.g. generate a random binary vector `x` and it's unlikely that a different `x'` exists that also solves for `b`).
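A quick brute-force check of that uniqueness claim at toy size (parameters made up for illustration): with noticeably more rows than columns, a random binary `x` is almost always the only binary solution.

```python
import numpy as np
from itertools import product

q, n, m = 97, 8, 16
rng = np.random.default_rng(1)

A = rng.integers(0, q, size=(m, n))
x = rng.integers(0, 2, size=n)    # random binary secret
b = (A @ x) % q                   # no error term in this uniqueness experiment

# Enumerate all 2^n binary candidates and count how many satisfy Ac = b (mod q).
solutions = [c for c in product([0, 1], repeat=n)
             if np.array_equal((A @ np.array(c)) % q, b)]
print(len(solutions))  # overwhelmingly likely to print 1: only x itself
```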
Yeah, I'm going to have to spend some time trying to understand the math too. I was mostly waiting for things to calm down standardization-wise. This newly selected algorithm is based on error-correcting codes (the same family used in communications protocols), which looks interesting.
I was in the same exact boat. I just could not digest it.
I found AI (a combo of Grok and ChatGPT-4o) to be the best resource for this. It was able to break the topic down into digestible chunks, then pull it together in a way that made sense. It even made suggestions about which math areas I need to brush up on.
Predicting anything 25 years out is a fool's errand.
Cryptographically relevant quantum computers are definitely not happening in the near term, but 25 years is a long enough time horizon that it is plausible.
Just consider what tech was like 25 years ago. Would anyone in 1999 (without the benefit of hindsight) really have been able to predict modern AI, the ubiquity of smartphones, etc.? Heck, 25 years ago people still thought the internet thing was a fad. Anyone trying to predict 25 years out is full of crap.
Except for RC4, which people were side-eyeing on Usenet in the 1990s, and key size progress, I think the cryptography primitives we were using 25 years ago have held up reasonably well. A lot of things that have been major issues since then are things that we knew theoretically back in 2000, but just didn't know viscerally, because people hadn't written exploits. That's not the same thing as, like, Peter Schwabe's take on whether a particular key exchange or KEM is likely to survive until 2050.
Signal has a post about using pre and post-quantum together: https://signal.org/blog/pqxdh/
I think lattice-based ones will eventually be broken by a quantum algorithm. I am fully on board with Lamport signatures and SPHINCS+.
What you described above (the Ax+e=b setup) would be closer to learning with errors (https://en.m.wikipedia.org/wiki/Learning_with_errors) combined with SIS/SVP. Learning with errors is based on the parity learning problem (https://en.m.wikipedia.org/wiki/Parity_learning) in machine learning, which I take as a positive sign of security.
I found https://er4hn.info/blog/2023.12.16-sphincs_plus-step-by-step... to be a nice introduction.
Great talk by Peter Gutmann on why this whole quantum topic is bollocks: https://www.cs.auckland.ac.nz/~pgut001/pubs/bollocks.pdf
The C in CNSA 2.0 is for "Commercial".
https://xkcd.com/678/