Readit News
tptacek · 3 years ago
This is probably the cryptography bug of the year. It's easy to exploit and bypasses signature verification on anything using ECDSA in Java, including SAML and JWT (if you're using ECDSA in either).

The bug is simple: like a lot of number-theoretic asymmetric cryptography, the core of ECDSA is algebra on large numbers modulo some prime. Algebra in this setting works for the most part like the algebra you learned in 9th grade; in particular, zero times any algebraic expression is zero. An ECDSA signature is a pair of large numbers (r, s) (r is the x-coordinate of a randomly selected curve point derived from the infamous ECDSA nonce; s is the signature proof, combining r, the hash of the message, the nonce, and the secret key). The bug is that Java 15+ ECDSA accepts (0, 0).

For the same bug in a simpler setting, just consider finite field Diffie-Hellman, where we agree on a generator G and a prime P. Alice's secret key is `a` and her public key is `A = G^a mod P`; I do the same with my secret `b` and public key `B`. Our shared secret is `A^b mod P`, which equals `B^a mod P`. If Alice (or a MITM) sends 0 (or 0 mod P) in place of A, then they know what the result is regardless of anything else: it's zero. The same bug recurs in SRP (which is sort of a flavor of DH) and protocols like it (but much worse there, because Alice is proving that she knows a key and has an incentive to send zero).
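
The zero-key attack is a one-liner to demonstrate; here's a toy sketch in Java (parameter sizes and the method name are illustrative, not from any real protocol implementation):

```java
import java.math.BigInteger;
import java.security.SecureRandom;

public class DhZeroDemo {
    // Returns the "shared secret" Bob derives when the peer's public key is 0.
    static BigInteger bobsSecretGivenZero() {
        SecureRandom rng = new SecureRandom();
        BigInteger p = BigInteger.probablePrime(512, rng);   // toy size; real DH uses 2048+ bits
        BigInteger b = new BigInteger(256, rng).setBit(255); // Bob's secret, guaranteed nonzero
        // Bob computes A^b mod p with the forged A = 0: the result is 0,
        // no matter what b or p are.
        return BigInteger.ZERO.modPow(b, p);
    }

    public static void main(String[] args) {
        System.out.println("shared secret = " + bobsSecretGivenZero()); // prints 0 every time
    }
}
```

Which is why every DH implementation has to reject public keys of 0 (and 1, and p-1) before doing any math with them.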

The math in ECDSA is more convoluted but not much more; the kernel of ECDSA signature verification is extracting the `r` embedded into `s` and comparing it to the presented `r`; if `r` and `s` are both zero, that comparison will always pass.
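
The validation that X9.62 and SEC 1 require before any of that curve math looks something like this (a hedged sketch; the class and method names are ours, not the JDK's):

```java
import java.math.BigInteger;

final class EcdsaChecks {
    // X9.62 / SEC 1: both signature components must lie in [1, n-1],
    // where n is the order of the curve's base point.
    static boolean inRange(BigInteger r, BigInteger s, BigInteger n) {
        return r.signum() > 0 && r.compareTo(n) < 0
            && s.signum() > 0 && s.compareTo(n) < 0;
    }
}
```

With r = s = 0 this returns false immediately — exactly the check the vulnerable JDK versions skipped.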

It is much easier to mess up asymmetric cryptography than it is to mess up most conventional symmetric cryptography, which is a reason to avoid asymmetric cryptography when you don't absolutely need it. This is a devastating bug that probably affects a lot of different stuff. Thoughts and prayers to the Java ecosystem!

loup-vaillant · 3 years ago
Interestingly, EdDSA (generally known as Ed25519) does not need as many checks as ECDSA: assuming the public key is valid, an all-zero signature is rejected by the main check alone. All you need to do is verify the following equation:

R = SB - Hash(R || A || M) A

Where R and S are the two halves of the signature, A is the public key, and M is the message (and B is the curve's base point). If the signature is zero, the equation reduces to Hash(R || A || M)A = 0, which is always false with a legitimate public key.

And indeed, TweetNaCl does not explicitly check that the signature is not zero. It doesn't need to.
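
For what it's worth, the JDK's own Ed25519 implementation (built in since JDK 15) can be spot-checked the same way; this sketch forges an all-zero (R, S) pair and expects rejection:

```java
import java.security.*;

public class EdZeroSig {
    // Returns true iff the JDK accepts an all-zero Ed25519 signature (it should not).
    static boolean acceptsAllZero() throws Exception {
        KeyPair kp = KeyPairGenerator.getInstance("Ed25519").generateKeyPair(); // JDK 15+
        Signature ver = Signature.getInstance("Ed25519");
        ver.initVerify(kp.getPublic());
        ver.update("hello".getBytes("UTF-8"));
        try {
            return ver.verify(new byte[64]); // all-zero (R, S)
        } catch (SignatureException e) {
            return false; // rejection via exception also counts as a rejection
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("all-zero Ed25519 signature accepted? " + acceptsAllZero());
    }
}
```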

However.

There are still ways to be clever and shoot ourselves in the foot. In particular, there's the temptation to convert the Edwards point to Montgomery, perform the scalar multiplication there, then convert back (doubles the code's speed compared to a naive ladder). Unfortunately, doing that introduces edge cases that weren't there before, that cause the point we get back to be invalid. So invalid in fact that adding it to another point gives us zero half the time or so, causing the verification to succeed even though it should have failed!

(Pro tip: don't bother with that conversion, variable time double scalarmult https://loup-vaillant.fr/tutorials/fast-scalarmult is even faster.)

A pretty subtle error, though with eerily similar consequences. It looked like a beginner-nuclear-boyscout error, but my only negligence there was messing with maths I only partially understood. (A pretty big no-no, but I have learned my lesson since.)

Now if someone could contact the Wycheproof team and get them to fix their front page so people know they have EdDSA test vectors, that would be great. https://github.com/google/wycheproof/pull/79 If I had known about those, the whole debacle could have been avoided. Heck, I bet my hat their ECDSA test vectors could have avoided the present Java vulnerability. They need to be advertised better.

DyslexicAtheist · 3 years ago
> Thoughts and prayers to the Java ecosystem!

some very popular PKI systems (many CA's) are powered by Java and BouncyCastle ...

nmadden · 3 years ago
BouncyCastle has its own implementation of ECDSA, and it’s not vulnerable to this bug.
na85 · 3 years ago
>infamous ECDSA nonce

Why "infamous"?

SAI_Peregrinus · 3 years ago
It's more properly called 'k'. It's really a secret key, but it has to be unique per-signature. If an attacker can ever guess a single bit of the nonce with probability non-negligibly >50%, they can find the private key of whoever signed the message(s).

It makes ECDSA very brittle, and quite prone to side-channel attacks (since those can give attackers exactly such information).
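
The full-reuse case is easy to demonstrate with plain modular arithmetic — no curve needed, since the recovery only uses the signing equation s = k⁻¹(h + r·d) mod n. Every concrete number below is an arbitrary toy stand-in:

```java
import java.math.BigInteger;

public class NonceReuse {
    // Toy model: a small prime stands in for the curve's group order,
    // and r is picked directly instead of being computed from k*G.
    static final BigInteger N = BigInteger.valueOf(7919);

    // ECDSA signing equation: s = k^-1 * (h + r*d) mod n
    static BigInteger sign(BigInteger h, BigInteger d, BigInteger k, BigInteger r) {
        return k.modInverse(N).multiply(h.add(r.multiply(d))).mod(N);
    }

    public static void main(String[] args) {
        BigInteger d = BigInteger.valueOf(1234);  // victim's private key
        BigInteger k = BigInteger.valueOf(999);   // the reused nonce
        BigInteger r = BigInteger.valueOf(4321);  // same k => same r in real ECDSA
        BigInteger h1 = BigInteger.valueOf(111), h2 = BigInteger.valueOf(2222);
        BigInteger s1 = sign(h1, d, k, r), s2 = sign(h2, d, k, r);

        // Attacker's arithmetic: two signatures sharing a nonce leak everything.
        BigInteger kRec = h1.subtract(h2).multiply(s1.subtract(s2).modInverse(N)).mod(N);
        BigInteger dRec = s1.multiply(kRec).subtract(h1).multiply(r.modInverse(N)).mod(N);

        System.out.println("recovered nonce = " + kRec + ", recovered key = " + dRec);
    }
}
```

Two signatures made with the same k recover first the nonce, then the private key. The partial-leakage attacks mentioned below just need many signatures and lattice reduction instead of two signatures and division.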

Dylan16807 · 3 years ago
I'm not particularly knowledgeable here, but I know it's extremely fragile, far beyond just needing to be unique. See "LadderLeak: Breaking ECDSA With Less Than One Bit of Nonce Leakage"
Zababa · 3 years ago
Thank you for that, that was a great explanation.
tialaramex · 3 years ago
This is the sort of dumb mistake that ought to get caught by unit testing. A junior, assigned the task of testing this feature, ought to see that the signature design requires these values to be checked as nonzero, try setting them to zero, and... watch it burn to the ground.

Except that, of course, people don't actually do unit testing, they're too busy.

Somebody is probably going to mention fuzz testing. But, if you're "too busy" to even write the unit tests for the software you're about to replace, you aren't going to fuzz test it are you?

hsbauauvhabzb · 3 years ago
The issue is the assumption juniors should be writing the unit tests, sounds like you might be part of the problem.
tialaramex · 3 years ago
I think I probably technically count as a junior in my current role, which is very amusing and "I don't write enough unit tests" was one of the things I wrote in the self-assessed annual review.

So, sure.

tptacek · 3 years ago
The point of fuzz testing is not having to think of test cases in the first place.
tialaramex · 3 years ago
[Somebody had down-voted you when I saw this, but it wasn't me]

These aren't alternatives, they're complementary. I appreciate that fuzz testing makes sense over writing unit tests for weird edge cases, but "these parameters can't be zero" isn't an edge case, it's part of the basic design. Here's an example of what X9.62 says:

> If r’ is not an integer in the interval [1, n-1], then reject the signature.

Let's write a unit test to check, say, zero here. Can we also use fuzz testing? Sure, why not. But lines like this ought to scream out for a unit test.
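
Such a test is a handful of lines against the standard `java.security.Signature` API. This sketch forges the DER encoding of (r=0, s=0) and expects rejection; on JDK 15-18 before the April 2022 patch it would have reported the JDK vulnerable:

```java
import java.security.*;

public class ZeroSigTest {
    // Returns true iff the running JDK accepts the forged (0,0) ECDSA signature.
    static boolean acceptsZeroSignature() throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        kpg.initialize(256);
        KeyPair kp = kpg.generateKeyPair();

        // DER for SEQUENCE { INTEGER 0, INTEGER 0 } -- i.e. r = s = 0.
        byte[] zeroSig = {0x30, 0x06, 0x02, 0x01, 0x00, 0x02, 0x01, 0x00};

        Signature ver = Signature.getInstance("SHA256withECDSA");
        ver.initVerify(kp.getPublic());
        ver.update("any message at all".getBytes("UTF-8"));
        try {
            return ver.verify(zeroSig);
        } catch (SignatureException e) {
            return false; // rejection by exception is also a pass
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(acceptsZeroSignature()
            ? "VULNERABLE: (0,0) verifies (CVE-2022-21449)"
            : "ok: (0,0) rejected");
    }
}
```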

loup-vaillant · 3 years ago
You still need your tests to cover all possible errors (or at least all plausible errors). If you try random numbers and your prime happens to be close to a power of two, evenly distributed random numbers won't end up outside the [1, n-1] range you are supposed to validate. Even if your prime is far enough from a power of two, you still won't hit zero by chance (and you need to test zero specifically, because you almost certainly need two separate pieces of code to reject the =0 and >=n cases).

Another example is Poly1305. When you look at the test vectors from RFC 8439, you notice that some are specially crafted to trigger overflows that random tests wouldn't stumble upon.

Thus, I would argue that proper testing requires some domain knowledge. Naive fuzz testing is bloody effective but it's not enough.
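
A domain-aware boundary test for the [1, n-1] rule enumerates exactly the values a random fuzzer will essentially never generate. A sketch, using the P-256 group order and an illustrative `valid` predicate:

```java
import java.math.BigInteger;

public class BoundaryCases {
    // P-256 group order (from SEC 2); scalars must lie in [1, n-1].
    static final BigInteger N = new BigInteger(
        "ffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632551", 16);

    static boolean valid(BigInteger x) {
        return x.signum() > 0 && x.compareTo(N) < 0;
    }

    public static void main(String[] args) {
        // Note that N is within 2^-32 of 2^256: a random 256-bit value
        // lands in the invalid range with probability ~2^-32.
        BigInteger[] bad  = { BigInteger.ZERO, N, N.add(BigInteger.ONE),
                              BigInteger.ONE.negate() };
        BigInteger[] good = { BigInteger.ONE, N.subtract(BigInteger.ONE) };
        for (BigInteger x : bad)  assert !valid(x) : x.toString();
        for (BigInteger x : good) assert  valid(x) : x.toString();
        System.out.println("boundary checks behave as specified");
    }
}
```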

solarengineer · 3 years ago
If we write an automated test case for known acceptance criteria, and then write necessary and sufficient code to get those tests to pass, we would know what known acceptance criteria are being fulfilled. When someone else adds to the code and causes a test to fail, the test case and the specific acceptance criteria would thus help the developer understand intended behaviour (verify behaviour, review implementation). Thus, the test suite would become a catalogue of programmatically verifiable acceptance criteria.

Certainly, fuzz tests would help us test boundary conditions and more, but they are not a catalogue of known acceptance criteria.

anfilt · 3 years ago
While fuzz testing is good and all, when it comes to cryptography the input space is so large that the chances of finding something are even worse than finding a needle in a haystack.

For instance, here the values are going to be around 256 bits in size, so if your fuzzer is just picking them at random, you're basically never going to pick zero.

With cryptographic primitives you really should be testing all known invalid input parameters for the particular algorithm, and a random fuzzer is not going to know what those are. Additionally, you should be testing inputs that can cause overflows and checking that they are handled correctly, etc.

kasey_junk · 3 years ago
This is true in principle but in practice most fuzz testing frameworks demand a fair bit of setup. It’s worth it!

But if you are in a time constrained environment where basic unit tests are skipped fuzz testing will be as well.

ramblerman · 3 years ago
Imagine being so senior you no longer need to write unit tests yourself, but just delegate them.

Sounds exactly like the kind of disconnected environment that would lead to such bugs.

ptx · 3 years ago
Apparently you have to get a new CPU to fix this Java vulnerability, or alternatively a new PSU.

(That is to say: a Critical Patch Update or a Patch Set Update. Did they really have to overload these TLAs?)

RandomBK · 3 years ago
Does anyone know why this was only given a CVSS score of 7.5? Based on the description this sounds way worse, but Oracle only gave it a CVSS Confidentiality Score of "None", which doesn't sound right. Is there some mitigating factor that hasn't been discussed?

In terms of OpenJDK 17 (latest LTS), the issue is patched in 17.0.3, which was released ~12h ago. Note that the official OpenJDK Docker images are still on 17.0.2 as of the time of writing.

tptacek · 3 years ago
CVSS is a completely meaningless Ouija board that says whatever the person authoring the score wants it to say.
bertman · 3 years ago
The fix for OpenJDK (authored on Jan. 4th 22):

https://github.com/openjdk/jdk/blob/e2f8ce9c3ff4518e070960ba...

drexlspivey · 3 years ago
with commit message “Improve ECDSA signature support” :D
baobabKoodaa · 3 years ago
I'm guessing the commit message is obscured to give people more time to update before it's exploited in the wild.
sdhfkjwefs · 3 years ago
Why are there no tests?
MrBuddyCasino · 3 years ago
I spot no test or comment in the code on why this assertion is important.
bertman · 3 years ago
It's literally what the whole bug is about. From OP's article:

>This is why the very first check in the ECDSA verification algorithm is to ensure that r and s are both >= 1. Guess which check Java forgot?

vlowrian · 3 years ago
What puzzles me most is that two days after the announcement of the vulnerability and the release of the patched Oracle JDK, there is still no patched version of OpenJDK for most distributions.

We're running some production services on OpenJDK and CentOS and until now there are only two options to be safe: shutdown the services or change the crypto provider to BouncyCastle or something else.

The official OpenJDK project lists the planned release date of 17.0.3 as April 19th, still the latest available GA release is 17.0.2 (https://wiki.openjdk.java.net/display/JDKUpdates/JDK+17u).

Adoptium have a large banner on their website and until now there is not a single patched release of OpenJDK available from them (https://github.com/adoptium/adoptium/issues/140).

There are no patched packages for CentOS, Debian or openSUSE.

The only available version of OpenJDK 17.0.3 I've seen until now seems to be the Archlinux package (https://archlinux.org/packages/extra/x86_64/jdk17-openjdk/). They obviously have their own build.

How can it be that this is not more of an issue? I honestly don't get how the release process of something as widely used as OpenJDK can take more than 2 days to provide binary packages for something already fixed in the code.

This shouldn't be much more effort than letting the CI do its job.

Edit: Typo.

ptx · 3 years ago
Azul published updated packages yesterday, including for some older non-LTS Java versions: https://www.azul.com/downloads/?package=jdk#download-openjdk
vlowrian · 3 years ago
Thanks for the info! That's very interesting since they usually only provide out-of-cycle critical fixes for their paid tiers. On the other hand - this only proves that it's actually possible to provide a hot-fixed OpenJDK in time.

Unfortunately, I assume that a very common case is just using the distribution-provided openjdk package and configuring the system for auto-updates. So the main issue here is that a serious number of systems are relying on the distribution's patch process to fix issues like this, and they are still vulnerable at this moment.

gunnarmorling · 3 years ago
For folks on RHEL, the java-17-openjdk package for RHEL 8 has been updated: https://access.redhat.com/errata/RHSA-2022:1445.

> The official OpenJDK project lists the planned release date of 17.0.3 as April 19th, still the latest available GA release is 17.0.2

> (https://wiki.openjdk.java.net/display/JDKUpdates/JDK+17u).

I don't think 17.0.3 will ever be available from openjdk.java.net; there's no LTS for upstream builds, and since Java 18 is already out, no further builds of 17 should be expected there. IMO, this warrants some clarification on that site, though.

needusername · 3 years ago
> I don't think 17.0.3 will ever be available from openjdk.java.net

https://adoptopenjdk.net/upstream.html

These are the official upstream builds from the updates project, built by Red Hat. Not to be confused with Red Hat's own Java builds, and not to be confused with the AdoptOpenJDK/Adoptium builds. These can't be hosted on openjdk.java.net because that site hosts only builds done by Oracle (not to be confused with the Oracle JDK).

vlowrian · 3 years ago
Thanks for the clarification. The site is not clear on that topic and actually suggests otherwise by listing the planned release dates in the timeline.

On the other hand, the problem that many popular server distributions like CentOS and Debian still haven't updated their Java 17 packages remains and I wonder if this is due to their own package build process or because they are waiting for an upstream process to complete.

If they actually rely on the upstream builds from openjdk.java.net that would mean that the fix will not make it to their repositories at all.

Razhan · 3 years ago
Amazon had releases of Corretto available on April 19th, Corretto 17 was released before 10am PDT, less than one hour after the announcement
LaputanMachine · 3 years ago
>Just a basic cryptographic risk management principle that cryptography people get mad at me for saying (because it’s true) is: don’t use asymmetric cryptography unless you absolutely need it.

Is there any truth to this? Doesn't basically all Internet traffic rely on the security of (correctly implemented) asymmetric cryptography?

fabian2k · 3 years ago
I've seen this argument often on the topic of JWTs, which are also mentioned in the tweets here. In many situations there are simpler methods than JWTs that don't require any cryptography, e.g. simply storing session ids server-side. With these simple methods there isn't anything cryptographic that could break or be misused.

The TLS encryption is of course assumed here, but that is nothing most developers ever really touch in a way that could break it. And arguably this part falls under the "you absolutely need it" exception.
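
A server-side session store along those lines can be tiny; the only cryptographic ingredient is a CSPRNG for the opaque token (names below are illustrative, and a real store would add expiry):

```java
import java.security.SecureRandom;
import java.util.Base64;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: opaque server-side session tokens. Nothing to sign, nothing to
// verify, and revocation is just a map removal.
class SessionStore {
    private final ConcurrentHashMap<String, String> sessions = new ConcurrentHashMap<>();
    private final SecureRandom rng = new SecureRandom();

    String create(String userId) {
        byte[] raw = new byte[32]; // 256 bits of entropy: unguessable token
        rng.nextBytes(raw);
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
        sessions.put(token, userId);
        return token;
    }

    String lookup(String token) { return sessions.get(token); }

    void revoke(String token) { sessions.remove(token); }
}
```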

slaymaker1907 · 3 years ago
You can still use symmetric cryptography with JWTs: HS256 is just an HMAC (with SHA-256) under a shared symmetric key. If you go beyond JWT, Kerberos uses only symmetric cryptography while not being as centralized as other solutions. Obviously the domain controller is centralized, but it allows various services to use common authentication without compromising the whole domain if any one service is compromised (assuming correct configuration, which is admittedly difficult with Kerberos).
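
A minimal HS256 sketch using only the JDK's `Mac` API (illustrative, not production-ready: a real verifier must also pin the expected `alg` header and validate the claims):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class Hs256Jwt {
    private static final Base64.Encoder B64 = Base64.getUrlEncoder().withoutPadding();

    static String sign(String payloadJson, byte[] key) throws Exception {
        String header = "{\"alg\":\"HS256\",\"typ\":\"JWT\"}";
        String input = B64.encodeToString(header.getBytes(StandardCharsets.UTF_8))
                + "." + B64.encodeToString(payloadJson.getBytes(StandardCharsets.UTF_8));
        return input + "." + B64.encodeToString(hmac(input, key));
    }

    static boolean verify(String jwt, byte[] key) throws Exception {
        int i = jwt.lastIndexOf('.');
        byte[] expected = hmac(jwt.substring(0, i), key);
        byte[] presented = Base64.getUrlDecoder().decode(jwt.substring(i + 1));
        return MessageDigest.isEqual(expected, presented); // constant-time compare
    }

    private static byte[] hmac(String data, byte[] key) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(data.getBytes(StandardCharsets.UTF_8));
    }
}
```

Note there's no signature parsing to speak of, no DER, and no curve math — much less surface area for a (0, 0)-style bug.
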
jaywalk · 3 years ago
Server-side session storage isn't necessarily a replacement for JWTs. It can be in many cases, but it's not one to one. JWTs do have advantages.

er4hn · 3 years ago
The biggest problem with JWTs is not which cryptography you use (though there was a long-standing issue where clients could set the algorithm to "none" and bypass verification entirely...) but rather revocation.

X.509 certificates have several revocation mechanisms, since marking something "do not use" before the end of its lifetime is a well-understood need. JWTs are not quite there.

nicoburns · 3 years ago
> Is there any truth to this?

Yes, symmetric cryptography is a lot more straightforward and should be preferred where it is easy to use a shared secret.

> Doesn't basically all Internet traffic rely on the security of (correctly implemented) asymmetric cryptography?

It does. This would come under the "unless you absolutely need it" exception.

adgjlsfhk1 · 3 years ago
Note that symmetric encryption is also really hard. It wasn't until GCM mode came along (standardized by NIST in 2007) that there was a scheme that is somewhat easy to use without accidentally breaking everything.
lazide · 3 years ago
Initial connection negotiation and key exchange do; anything after that, no. Everything after the handshake uses some kind of symmetric algorithm (generally AES).

It's a bad idea (and no one should be doing it) to continue using asymmetric crypto after that point. And if someone can get away with a pre-shared (symmetric) key, that's sometimes/usually even better, depending on the risk profile.

loup-vaillant · 3 years ago
> It will use some kind of symmetric algo (generally AES).

AES-GCM, you mean. Let's not forget the authentication in "authenticated encryption". I'm nitpicking, but if a beginner comes here it's better to make it clear that in general, encryption alone is not enough. Ciphertext malleability and all that.
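
A quick demonstration of why the authentication matters, using the JDK's AES-GCM: flip one ciphertext bit and decryption fails closed instead of silently returning mangled (or attacker-chosen) plaintext:

```java
import javax.crypto.AEADBadTagException;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;

public class GcmTamper {
    // Encrypts with AES-GCM, flips one ciphertext bit, and reports whether
    // decryption catches the tampering.
    static boolean detectsTamper() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();
        byte[] nonce = new byte[12]; // 96-bit nonce; must never repeat under one key
        new SecureRandom().nextBytes(nonce);

        Cipher enc = Cipher.getInstance("AES/GCM/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, nonce));
        byte[] ct = enc.doFinal("pay $10 to mallory".getBytes("UTF-8"));

        ct[0] ^= 1; // attacker flips a single bit

        Cipher dec = Cipher.getInstance("AES/GCM/NoPadding");
        dec.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, nonce));
        try {
            dec.doFinal(ct);
            return false; // tampering went unnoticed
        } catch (AEADBadTagException e) {
            return true; // authentication tag check failed, as it should
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("tampering detected? " + detectsTamper());
    }
}
```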

smegsicle · 3 years ago
If people were getting mad at him, he must have been pretty obnoxious about it, because I don't think there's much controversy: asymmetric encryption is pretty much just used for things like sharing the symmetric key that will be used for the rest of the session.

Of course it would be more secure to have private physical key exchange, but that's not a practical option, so we rely on RSA or whatever.

Sirened · 3 years ago
It's generally good to use symmetric cryptography wherever possible because it usually (!) is faster and simpler. More complex cryptosystems provide interesting properties, but if you can pull off whatever you're doing without them, why bother? The author tries to make a security claim for this, but IMO that's not even the real issue.
formerly_proven · 3 years ago
I wouldn't be particularly worried about someone decrypting a file encrypted in the 80s using Triple DES anytime soon. I don't think I'll live to see AES broken.

I wouldn't bet on the TLS session you're using having that kind of half-life.

loup-vaillant · 3 years ago
There are two sides to this coin: one is the actual strength of the primitives involved. RSA is under increasingly effective attacks, and though elliptic curves are doing very well for now, we have the looming threat of Cryptographically Relevant Quantum Computers. Still, without CRQC there's a good chance that X25519 and Ed25519 won't be broken for decades to come.

The other side is the protocol itself. Protocols are delicate, and easy to mess up in catastrophic ways. On the other hand, they're also provable. We can devise security reductions that prove that the only way to break the protocol is to break one of its primitives. Such proofs are even mechanically verified with tools like ProVerif and Tamarin.

Maybe TLS is a tad too complex to have the same half-life as AES. The Noise protocols, however, have much less room for simplification. That simplicity makes them rock solid.

dynamite-ready · 3 years ago
Wonder if someone can add a little more info to the title of this story. It would probably draw more clicks if the title weren't so cryptic. This is essentially a Java dev infosec post.
pas · 3 years ago
Just wait a few days and it'll be on the news like the log4j2 vulnerability :) (Though it might not, because in practice BouncyCastle is used in most big/old Java software, as far as I know.)