CiPHPerCoder · 6 years ago
Tired: {"alg":"none"}

Wired: {"alg":"nonE"}

The JOSE standards (including JWT) are a gift that keeps on giving to attackers.
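To see why the joke lands: under a case-sensitive "none" blacklist, a forged token is just two base64url blobs and an empty signature. A minimal sketch in Node/TypeScript (the claims are made up):

  // Header says "nonE", which a filter matching only the exact string
  // "none" happily waves through; the signature section is left empty.
  const b64url = (o: object) =>
    Buffer.from(JSON.stringify(o)).toString("base64url");

  const forged = [
    b64url({ alg: "nonE", typ: "JWT" }),
    b64url({ sub: "admin" }),
    "", // no signature at all
  ].join(".");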

I designed an alternative format in 2018 called PASETO, which doesn't contain the JOSE foot-guns. (I'm pushing for an IETF RFC this year.)

https://paseto.io
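(A PASETO token is just version.purpose.payload, e.g. v2.public.<payload>, with an optional footer. The version string pins the entire cipher suite, so there is no attacker-controlled alg header to target.)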

EDIT: Also, this affected their Authentication API rather than their JWT library.

If you use their JWT library, well, it certainly allows this kind of horrendous misuse... but it is not, per se, vulnerable.

speedgoose · 6 years ago
Before starting a new project some time ago, I read the criticism of JOSE (JWE) and about the alternative, PASETO. I decided to use JOSE carefully instead of PASETO because it had an IETF RFC. I think it will be great for PASETO to get an RFC as well. The second point that made me choose JOSE was that PASETO was a bit too mean towards JOSE, and I didn't want drama in my technology choices.

But good work! With an RFC, PASETO will be my choice for my next projects.

Nursie · 6 years ago
We use it, but restrict the sig alg to a couple of known-good values, so I'm hoping this particular vulnerability is not present in our system.

We had an infosec guy excitedly tell us that PASETO was the future, and that we needed to change to it right now. It looked good, and seemed a way to avoid some of the possible JWT issues, in the same way a TLS implementation that only allows strong ciphers might.

But we have to integrate with so many third party pieces that require JWT it wasn't an option.

different_sort · 6 years ago
Why a new standard rather than pushing to reform the current one?

Are they just closely protected by greybeards who won't listen to reason?

Question comes from a true place of ignorance/curiosity; I definitely understand the need for unambiguous, easy-to-implement security tokens without the foot-guns.

CiPHPerCoder · 6 years ago
Simple answer: Because secure cryptography is backwards-incompatible with insecure cryptography, and the JOSE standards have a lot of legacy cruft that will be hard to jettison.

If you're going to put in the work (which I am), you might as well start with a clean slate rather than trying to piecemeal security improvements into their design-by-committee spec.

inopinatus · 6 years ago
Speaking as a recidivist greybeard, I'm not sure why you'd think we'd have anything to do with something as callow and unproven as JavaScript.
pillfill · 6 years ago
> Why a new standard rather than pushing to reform the current one?

Or even just an opinionated library with some basic guardrails to prevent bad configurations.

grinich · 6 years ago
I think we're going to use PASETO for some stuff at WorkOS. Thanks for building it. :)
kyrra · 6 years ago
(googler, opinions are my own)

For server-to-server, I continue to prefer PGP for my encryption. While Google Payments [0] supports both PGP and JWE for encrypting payloads, PGP is well tested and most of the bugs have been worked out. JWS/JWE continues to have implementation bugs (likely due to being too flexible).

[0] https://developers.google.com/standard-payments/reference/be...

CiPHPerCoder · 6 years ago
PGP isn't great either: https://latacora.micro.blog/2019/07/16/the-pgp-problem.html

Better options for PGP use cases:

- AWS Encryption SDK: https://docs.aws.amazon.com/encryption-sdk/latest/developer-...

- age https://age-encryption.org

- Magic Wormhole https://github.com/warner/magic-wormhole

- NaCl/libsodium (and/or usability wrappers) https://libsodium.gitbook.io/doc/

Better options for JWE use cases:

- PASETO: https://paseto.io

- Branca: https://branca.io

XMPPwocky · 6 years ago
> PGP is well tested and most of the bugs have been worked out.

Most of them.

You're either using GnuPG or some custom PGP implementation.

A custom PGP implementation has far more footguns than a restricted JWT implementation.

GnuPG is a deeply terrifying codebase. For example, did you know you're not supposed to rely on the exit code of gpg --verify alone? You must parse the --status-fd output too, or many bugs will let you spoof signatures. (Bugs were found as recently as this year.)
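To gesture at what "parse the status output" means, here's a TypeScript sketch; the paths and expected fingerprint are assumptions, and real-world handling needs much more care than this:

  import { execFileSync } from "node:child_process";

  // Require an explicit VALIDSIG line naming the fingerprint we expect,
  // instead of trusting gpg's exit status alone.
  function verifyDetached(sigPath: string, filePath: string, expectedFpr: string): boolean {
    try {
      const out = execFileSync(
        "gpg",
        ["--status-fd", "1", "--verify", sigPath, filePath],
        { encoding: "utf8" },
      );
      return out
        .split("\n")
        .some((line) => line.startsWith(`[GNUPG:] VALIDSIG ${expectedFpr} `));
    } catch {
      return false; // non-zero exit: definitely not a good signature
    }
  }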

I'd encourage anybody considering GnuPG for anything to read mainproc.c and see if they understand what's going on.

dwaite · 6 years ago
> I'm pushing for an IETF RFC this year.

Are you planning an informational document, or going through the IETF standardization process?

Also, the last published draft is two years old this week. Have there been changes since then to the spec? Are implementations generally interoperable?

CiPHPerCoder · 6 years ago
I'll be sending an email to CFRG, probably next week, with any spec changes.

But before I resuscitate PASETO, XChaCha20 needs an RFC. It's pending IRTF kick-off.

https://tools.ietf.org/html/draft-irtf-cfrg-xchacha-03

OatMilkLatte · 6 years ago
I'm going to use PASETO for a personal project I'm working on. If the COVID lockdown ever ends and I have time to work on it. Thanks for building it!
bflesch · 6 years ago
Paseto is a great project, thank you very much for your contribution!
ucarion · 6 years ago
What IETF WG are you working through?
CiPHPerCoder · 6 years ago
CFRG
thinkshiv · 6 years ago
Hi all - Shiv from Auth0. I am the CPO and wanted to share some additional context here. On July 31st 2019, at 5:11 am, we received an email from Insomnia reporting a service vulnerability. By 11:00 pm the same day, we had fixed the issue in production. We analyzed the logs and validated that no one had exploited the vulnerability. More details from our CSO here: https://auth0.com/blog/insomnia-security-disclosure/. Thanks to Insomnia for reporting the vulnerability and for their partnership in coordinated disclosure. We appreciate the continued feedback from the security community-at-large to ensure we are providing the most secure platform for our global customers.
treve · 6 years ago
Why did your implementation have a case-sensitive check for a fixed list of algorithms, and why are you blacklisting vs. whitelisting acceptable algorithms? 'Old, stable' codebase or not... this is production code for a security product and seems like something that would be picked up during an audit.
fulafel · 6 years ago
Not the OP but, the sad truth is that code audits aren't that good at eradicating bugs.
Arnout · 6 years ago
I've encountered issues like this in various systems using JWT at this point. The real problem is that developers blacklist the algorithms they don't want. Instead, the verification code should explicitly whitelist the algorithms you support.

More specifically, you can't rely on the 'alg' parameter with any authority before successful signature verification: after all, it is protected by the very signature whose algorithm it declares. So even with a whitelist, there is the potential for downgrade attacks.

In other words, don't even use a whitelist: use a single, specific expected algorithm.
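A minimal sketch of that last point (Node/TypeScript; the decoding is simplified and the expected algorithm is an assumption):

  const EXPECTED_ALG = "RS256";

  function headerAlgOk(token: string): boolean {
    // Decode only the header and compare exactly, case-sensitively.
    const headerJson = Buffer.from(token.split(".")[0], "base64url").toString("utf8");
    const header = JSON.parse(headerJson) as { alg?: string };
    return header.alg === EXPECTED_ALG;
  }

Even with this gate, the signature verification itself must be configured to use the same single algorithm; the header check alone only narrows the attack surface.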

hinkley · 6 years ago
I have a coworker who does shit like this all the time.

The list of supported options is not only knowable, but changes very slowly. Which means it's almost certainly known at commit time. Just enumerate them. By hand. Oh no, you might have to type in some text that exists somewhere else! Quelle horreur!

The list of unsupported options is unknowable. The list of string or path interpolation bugs is knowable, but isn't known by the sort of person who thinks a whitelist is a bad idea. Build a lookup table and stop trying to be clever.
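A lookup table in this spirit is a handful of lines (TypeScript sketch; the algorithm names are assumptions):

  // The supported set is known at commit time; enumerate it by hand.
  const ALLOWED_ALGS = new Set(["RS256", "ES256"]);

  function algAllowed(alg: string): boolean {
    return ALLOWED_ALGS.has(alg); // exact, case-sensitive membership test
  }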

deathanatos · 6 years ago

  fn verify_jwt(
    // The input from the network/user; the JWT we will be verifying
    untrusted_jwt: String,
    // *The* way in which we expect JWTs to be issued / we will be verifying against:
    verifier: Verifier,
  ) -> ... {

    // Basic sanity checks & decoding.
    let untrusted_jwt: Jwt = ...;

    // Reject anything this verifier wasn't explicitly constructed to accept.
    if !verifier.acceptable_alg(untrusted_jwt.alg) {
      return Err(...);
    }
    verifier.verify(untrusted_jwt)
  }
Where `verifier` is something like `JwtRsa`, `JwtEd25519`, or `JwtHmac`. The verifier should know what few, limited algorithms to look for, and it should reject anything and everything that's not under its domain area. There's no blacklist, no guesswork. (I don't even think I'd have `acceptable_alg`; just let `verify` do that work; it has to look at that field anyway to set up, e.g., the hasher.)

I'm largely omitting¹ JWE, so perhaps there's some hidden dragon in there, or perhaps we just handle those completely separately. But for JWS, am I missing something? Unless you pass `NoneVerifier`, in which case you're explicitly opting in to alg: none and its goriness.

Sadly, I don't think any of the top three Rust libraries do this.

¹I'm also omitting that RSA has several attached hash algorithms in JWS; one can imagine that JwtRsa might let you configure which hashes it will and will not use. As it is, some libraries take roughly this form but split the key material off from the algorithm … letting you pass craziness like RSA key material with HMAC-SHA256, which makes no sense. That is, which hash algorithms (if any) are possible is a function of the type of key material coming in.

²I'm also omitting verification of the claims, which I think a library should generally handle.

karatestomp · 6 years ago
Is there some good reason to prefer a blacklist in this case, one I'm not thinking of, that might change my reaction from "uh, maybe I need to entirely re-think my assumption that Auth0 is any better at this whole securing-users thing than I am"? My immediate and ongoing reaction to the headline was, and is: wait, a blacklist? WTF!
user5994461 · 6 years ago
The purpose of a blacklist is to seamlessly support newer algorithms invented later.

For example, consider SHA-1 being replaced by SHA-256, then SHA-512. If some component is hardcoded to only accept SHA-1, it can't use the newer algorithms and potentially blocks adoption by other components.

So it's a somewhat normal design for future-proofing, but it's a bit stupid for security/authentication stuff whose sole purpose is to verify messages.

applecrazy · 6 years ago
> Instead, the verification code should explicitly whitelist which algorithms you support.

What libraries are you using? I just looked through the auth code for a project I'm working on (which uses `jsonwebtoken`) and it has an option to whitelist algorithms in the `jwt.verify` method.
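For reference, the shape of that call (a sketch; the token and key are assumptions):

  import jwt from "jsonwebtoken";

  declare const token: string;     // the incoming JWT (assumption)
  declare const publicKey: string; // RSA public key, PEM (assumption)

  // Only RS256-signed tokens verify; "none", "nonE", HS256, and friends all throw.
  const claims = jwt.verify(token, publicKey, { algorithms: ["RS256"] });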

Edit: removed repeated info

Arnout · 6 years ago
Various, been a while since I wrote code using them myself.

Often JWT tokens come from sources other than our own, and they will have passed through user-agent or client land. Don't trust anything in them until you've verified them.

edit: good on that library! That's what it should do. Clearly Auth0's code did not do that, though; it should never have accepted any variant of 'none' in the first place.

tptacek · 6 years ago
Does it also have the option to blacklist algorithms? Because if so, people are going to use that option.
xianb · 6 years ago
It's fascinating that Auth0 actually had a blog post about finding and fixing a handful of JWT vulnerabilities years ago (one of them is more advanced to exploit than this one). Just another example of why you always have to be vigilant, and of how hard properly implementing encryption/security is.

https://auth0.com/blog/critical-vulnerabilities-in-json-web-...

twic · 6 years ago
At some point, that alg parameter gets resolved to an algorithm - an object or enum constant or something. This bug implies that the filtering was done on the string value of the parameter, not on the resolved value. That seems like a schoolboy error.
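One way to avoid the string-matching trap entirely (TypeScript sketch; the supported set is an assumption): parse the alg string into a closed type first, and make every policy decision on the parsed value.

  type Alg = "HS256" | "RS256" | "ES256";

  // Unknown strings (including "none", "nonE", "NONE") fail to parse,
  // so there is nothing left to blacklist.
  function parseAlg(s: string): Alg | null {
    switch (s) {
      case "HS256":
      case "RS256":
      case "ES256":
        return s;
      default:
        return null;
    }
  }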
user5994461 · 6 years ago
It's much worse than that. There are like 5 options for the algorithm value: none, RS256, HS256, etc.

The vulnerability implies that they don't verify the value against that very limited list of possible values, which is incredibly stupid.

rvz · 6 years ago
The gist of this Auth0 authentication API bypass is detailed as follows:

> The Authentication API prevented the use of alg: none with a case sensitive filter. This means that simply capitalising any letter e.g. alg: nonE, allowed tokens to be forged.

I really don't understand why you'd filter alg: 'none' with a case-sensitive check. The bigger question is why 'alg: none' is used and supported in the standard in the first place. As I previously commented, the 'alg: none' option should never be used; it is still the biggest foot-gun in the JOSE specification. Even giving the user a choice of ciphers is a recipe for disaster. JWT thus remains a cryptographically weak standard, and its use is discouraged by many cryptographers.

PASETO [0] or Branca [1] are cryptographically stronger alternatives to use over JWT here.

[0] https://paseto.io

[1] https://branca.io

applecrazy · 6 years ago
> the option to have 'alg: none' should never be used

I doubt anyone uses this deliberately (edit: except maybe for internal server-to-server communication?). I agree that having it as an option is a foot-gun. I still think this is a non-issue on the client/backend, though: most libraries explicitly make you whitelist token-signing algorithms and will throw errors if the token isn't signed with the right algorithm.

> Even giving the user a choice of ciphers to use is a recipe for disaster.

How so? I'm still learning this stuff, so I'm genuinely curious.

rvz · 6 years ago
> How so? I'm still learning this stuff, so I'm genuinely curious.

It is the same reason the author of WireGuard rejected cryptographic agility in its protocols and ciphers.

From the WireGuard paper [0]:

> 'Finally, WireGuard is cryptographically opinionated. It intentionally lacks cipher and protocol agility. If holes are found in the underlying primitives, all endpoints will be required to update. As shown by the continuing torrent of SSL/TLS vulnerabilities, cipher agility increases complexity monumentally.'

[0] https://www.wireguard.com/papers/wireguard.pdf

CiPHPerCoder · 6 years ago
> > Even giving the user a choice of ciphers to use is a recipe for disaster.

> How so? I'm still learning this stuff, so I'm genuinely curious.

https://paragonie.com/blog/2019/10/against-agility-in-crypto... :)

user5994461 · 6 years ago
Protocol agility allows applications to pick between multiple settings, say RS256 and RS512 for example. This makes it possible to add and remove ciphers over time, which is very important.

In theory it's a bad idea, because it means systems might negotiate obsolete ciphers during operation.

In practice, there is no choice but to design for agility. Ciphers invariably get weak after some years (computers get faster), so they need to be phased out and replaced by newer ones.

In the real world, there are meshes of clients and servers interacting with one another. You can't just upgrade the software on one side to only use the newer cipher, or nothing could connect to it anymore. So there has to be a capability to work with multiple ciphers, so newer ciphers can gradually be phased in across systems and older ones phased out.

Pretty sure the two other commenters are mostly researchers with no real-world software deployments to manage; otherwise they wouldn't be so strongly against agility. Fact is, a system with no agility is dead in the water because it can't evolve.
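A sketch of what that migration looks like in practice (TypeScript with jsonwebtoken; the algorithm choice and rollout window are assumptions):

  import jwt from "jsonwebtoken";

  declare const token: string;  // incoming token (assumption)
  declare const secret: string; // shared HMAC secret (assumption)

  // Issue with the newer algorithm only...
  const fresh = jwt.sign({ sub: "user-1" }, secret, { algorithm: "HS512" });

  // ...but accept both while older issuers are still live; shrink the
  // list to ["HS512"] once the rollout is complete.
  const claims = jwt.verify(token, secret, { algorithms: ["HS512", "HS256"] });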

wereHamster · 6 years ago
> The question is that why use and support 'alg:none' in the standard in the first place?

FTFA:

> The JWT standard supports insecure JWT algorithms for scenarios where encryption and a signature are not suitable, such as trusted server-to-server communication. In these scenarios, the none algorithm is specified in the JWT header. The none alg type should never be used for untrusted user-supplied tokens.

jaywalk · 6 years ago
It's funny that they say it's "not suitable" when it's really just pure laziness. It takes two seconds to create a secret key and use the HS256 algorithm to generate and verify a signature.
different_sort · 6 years ago
"trusted server"
user5994461 · 6 years ago
Most JWT libraries require you to hardcode the expected algorithm when verifying a token, so if your applications verify the token provided by Auth0 with a JWT library, they're most likely not vulnerable to this mistake.
userbinator · 6 years ago
I think what's more harmful is the fact that anything here is case-insensitive at all.

Case insensitivity may have some benefits for human-facing stuff, but otherwise the byte-exact comparison you get with case-sensitive semantics is superior.