Like anything OAuth2-related, it's frustratingly vague and jargony.
Verifiable credentials is a terrible name. We have had verifiable cryptographic credentials for more than 40 years.
What I want is a practical protocol to prove to a third party:
1. That I am a real person (e.g., that I have a unique credential issued by my government)
2. That I'm the only one currently "logged in" with them using that credential.
3. Without the third party (and as few other parties as mathematically possible) knowing which of the people with a credential issued by my government I am.
In short, to prove that I'm a real person, that I'm not running an army of sockpuppets, and yet preserve my privacy.
https://www.w3.org/TR/vc-data-model/
For example, if the lower bound is hundreds then it's mostly superfluous since that information will eventually leak one way or another.
Tangentially, if you’re a B2C startup like an app, avoid using auth0. Their pricing starts out cheap but once you hit 10k MAU, it’ll go from $X00 a month to a 3 year contract for $X00000.
It’s not designed as a business for that use case, and you’ll be paying for a lot of premium features that you’ll never use.
If something like Firebase Auth suits your use case, use that instead.
Can confirm, also very disappointed with Auth0. We have been happily using Auth0 for years within their self-service license tier. Unfortunately we are now in the process of migrating away from Auth0 on short notice.
We never hit 10k MAU, but according to their sales people you can't have multiple tenants without an enterprise license, even though that is not mentioned anywhere in the docs. We have been using them in good faith; however, they will not shy away from aggressive sales tactics and threatening to 'de-provision' you if you do not commit to a very expensive license.
That’s been our experience as well. Auth0 seemed like a great deal and we were happy to have such a sensitive component handled by a dedicated third party, but their sales team aggressively pursues us as soon as we go over some MAU limit and it’s clear that the nickel-and-diming is only going to get worse as we grow. We need to migrate away sooner or later.
We had to roll our own auth solution because they couldn’t make the pricing even remotely viable.
I’d like to write "stay away from Auth0", but is there any comparable alternative?
If you need mostly B2C features, I would have a look at Clerk [0] and Supertokens [1].
If you are more interested in B2B features, we found Ory [2], FusionAuth [3], and good old Keycloak [4].
None of them are fully comparable yet, however, which might be why they get away with their current behaviour.
[0] https://clerk.dev/ [1] https://supertokens.com/ [2] https://www.ory.sh/ [3] https://fusionauth.io/ [4] https://www.keycloak.org/
That's a great path for many folks. I think building on a framework is far far better than rolling your own.
However, often there comes a time when you have multiple applications that all need a shared user data store. (Note, I work for FusionAuth, so I am, to some extent, talking my book here.)
You then have some choices:
* run everything off of one app (both features and user management) and have other apps oauth/oidc into that app. This means that other apps are now dependent on one main app, and within that app, user data and feature data may get entangled.
* hive off the user management and login data from the main app into a separate one and have other apps oauth/oidc into that second app. This lets you deploy/manage them separately and still have one user data store. Congrats, you've just invented an auth server! This can be a good option, but that refactoring may be a bit painful, and you're on the hook for maintaining the auth server (dependencies, adding workflows and functionality).
* stand up a separate auth server (or use a SaaS offering) and have apps oauth/oidc into it. In this scenario the auth server vendor is responsible for updates (adding new forms of MFA such as WebAuthn, for example) and your apps get the benefits from it. The issue here is that this is a core part of your app, so any downtime or integration issues due to an upgrade can cause major headaches. (At FusionAuth, we work around this issue with a single tenant model and by allowing you to pin your version.)
It's engineering, so there's no perfect solution. The above are some of the tradeoffs I've seen; a rough sketch of what the third option looks like from an app's point of view follows below.
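To make option three concrete, here is a minimal sketch of each app's side of the oauth/oidc dance against a shared auth server. It is not FusionAuth-specific; the issuer URL, client id, redirect URI, and endpoint paths are placeholders, and a real integration would normally go through an OIDC client library rather than hand-rolled requests.
// Sketch: each app delegates login to a shared OIDC auth server (all URLs/ids are hypothetical).
const issuer = "https://auth.example.com";
const clientId = "app-one";
const redirectUri = "https://app-one.example.com/callback";

// 1. Send the browser to the shared authorization endpoint.
function loginUrl(state) {
  const params = new URLSearchParams({
    response_type: "code",
    client_id: clientId,
    redirect_uri: redirectUri,
    scope: "openid profile email",
    state,
  });
  return `${issuer}/oauth2/authorize?${params}`;
}

// 2. Back at /callback, exchange the authorization code for tokens (Node 18+ global fetch).
async function exchangeCode(code) {
  const res = await fetch(`${issuer}/oauth2/token`, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "authorization_code",
      code,
      client_id: clientId,
      client_secret: process.env.CLIENT_SECRET,
      redirect_uri: redirectUri,
    }),
  });
  return res.json(); // { access_token, id_token, ... }
}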
This is good for use cases where you want to assert that an organization says something about you (e.g., you have a degree).
It is not good for use cases where you want to assert that you say something (e.g., I voted for Blah, or I authorized this transfer).
Anyone with a keypair can issue verifiable credentials, and we work on making this simple[0], starting with developers. However, the ultimate challenge will be to be able to associate that keypair to the entity (or abstracted entity) who is making those statements, which is what Web of Trust tried to do, and there are some adjacent efforts to revitalize SPKI-style[1] trust models that are being discussed at RWoT[2].
[0] https://www.spruceid.dev/quickstart
[1] https://en.wikipedia.org/wiki/Simple_public-key_infrastructu...
[2] https://github.com/WebOfTrustInfo/rwot11-the-hague
It works for both use cases. The only difference between the two is the source of trust (in case 1 it is some issuing authority, in case 2 it is you). There's no reason why you can't issue a certificate for yourself. The receiving party can choose to trust your public key if they wish.
> It is not good for use cases where you want to assert that you say something (e.g., I voted for Blah, or I authorized this transfer).
The challenge is knowing who 'I' is and why you should trust their statements.
Once a verifier knows who you are, you should be able to self-issue statements to them about yourself. Present back self-signed financial instructions in response to a request to confirm a money transfer, for example.
However, I've seen enough people incorrectly assert who they voted for that I doubt such a self-assertion would have significant utility. Likewise, a self-asserted email address for contact could very well not meet the verifier's business/regulatory requirements without going through a more traditional email address verification.
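As a bare-bones sketch of the "self-signed financial instruction" idea above, using Node's built-in crypto module; key management, DIDs, and the actual VC envelope are all omitted, and the field names are made up:
// Sketch: a holder self-signs a statement; the verifier checks it against a key it already trusts.
const { generateKeyPairSync, sign, verify } = require("crypto");

const { publicKey, privateKey } = generateKeyPairSync("ed25519"); // holder's keypair

const statement = Buffer.from(JSON.stringify({
  action: "confirm-transfer",
  amount: "100.00",
  currency: "EUR",
  nonce: "req-42", // supplied by the verifier to prevent replay
}));

const signature = sign(null, statement, privateKey);

// Verifier side: it must already associate publicKey with this customer,
// which is exactly the "knowing who 'I' is" problem discussed above.
console.log(verify(null, statement, publicKey, signature)); // true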
I went the Cognito route for customer facing, however there were a couple of gotchas:
- There's no turn key method for multi-region availability.
- It has a limited number of 2FA/MFA options.
- It does not offer a SAML IdP. We ended up writing a Lambda to issue SAML claims and put it behind an API Gateway with Cognito/OIDC authorization (rough sketch below). It works, but we'll need to maintain it.
- It's AWS, so you'll need a half dozen other services to build a complete solution.
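For anyone curious what that glue roughly looks like, a stripped-down, illustrative sketch of such a Lambda is below. The claim names and the string-templated assertion are placeholders only; a real implementation has to build and sign a schema-valid SAML response, typically with a proper SAML library.
// Sketch: API Gateway's Cognito user pool authorizer has already validated the caller;
// the Lambda turns the claims it passes along into a SAML-style attribute statement.
exports.handler = async (event) => {
  // With a Cognito authorizer on a REST API (proxy integration), verified token
  // claims arrive here; the shape differs for HTTP APIs / JWT authorizers.
  const claims = event.requestContext.authorizer.claims;

  // Placeholder only: real code must emit a signed <samlp:Response>, not a template string.
  const assertion =
    '<saml:AttributeStatement>' +
    `<saml:Attribute Name="email">${claims.email}</saml:Attribute>` +
    `<saml:Attribute Name="sub">${claims.sub}</saml:Attribute>` +
    '</saml:AttributeStatement>';

  return {
    statusCode: 200,
    headers: { "Content-Type": "application/xml" },
    body: assertion,
  };
};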
You can do that by having consortiums of trusted parties. California, New York and Walmart were pioneers in this space for vaccination credentials. (SMART health cards)
If you lived in a crazy state like Florida, vaccinating at Walmart was the best way to get a credential for international travel. In the absence of federal action, countries like Israel recognized these credentials and airlines incorporated them into their ticketing workflow.
I love that verifiable credentials are starting to go mainstream, but one thing I couldn’t glean from this page: is this purely VC from a perspective of “here’s the attributes I care to see”, or “here’s the characteristics I care to see”? To clarify: say I’m an alcohol vendor and wish to confirm that a user is 21 or older. Does the VC issuance provide a range proof that does not reveal the age, or does the VC issuance explicitly reveal the age?
You could issue the credential with an "ageOver" property set to 21; use of abstract claims like that is actually a non-normative preference in the W3C standard:
https://www.w3.org/TR/vc-data-model/#favor-abstract-claims
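Roughly what such a credential could look like, trimmed down; the issuer, ids, and type names are placeholders, and the exact vocabulary for an "ageOver"-style claim depends on the schema the issuer uses:
// Sketch: a credential asserting only "over 21", with no birth date disclosed (values are placeholders).
const ageCredential = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential", "AgeCredential"],
  issuer: "did:example:dmv",
  issuanceDate: "2022-11-01T00:00:00Z",
  credentialSubject: {
    id: "did:example:holder123",
    ageOver: 21, // abstract claim instead of dateOfBirth
  },
  // proof: { ...the issuer's signature over the above... }
};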
Maybe this is a silly question. But wouldn't this just mean that the ID would need to be updated very regularly? Like at least every year if not more frequently?
In the theoretical use case of drinking, I couldn't just go to the bar on my 21st birthday, provide my ID, and buy a drink. I'd have to make sure that I did whatever process was required to update my Verifiable Credential first?
I get the abstraction is great from a PII standpoint, and that likely this wouldn't be a big roadblock since this is all digital anyways. One assumes that the user could just press a big ol "refresh" button and be done in a few seconds. But still curious.
Surely you'd store their birthdate (or maybe birth year and month for privacy reasons). Then you only allow people who you know are older than 21, those born before (not in) 2001-11 (based on currentYear of 2022). With the magic of modern JavaScript, this shouldn't be too hard:
let birthday = "1998-08" # This value is taken from the user's verified credentials, in my case September 1998.
let ageRestriction = 21
let legalDate = new Date(new Date().setFullYear(new Date().getFullYear() - ageRestriction))
let birthdayDate = new Date(birthday)
if (legalDate > birthdayDate) console.log("Is that cash or card?")
else console.log("Sorry, you're not old enough.")
e: I didn't fully read OOP's comment about not wanting to reveal age, however I still feel that YYYY-MM is a valid option for age verification. It allows for the most privacy while impacting the smallest group of people, and in those cases you would effectively need to be 21 and a month old to enter these bars, without some other form of legal ID.
e2: Reading through the article after commenting (I know, I'm terrible and you can all violently detest me if you wish), there is an "ID card" example which clearly states the user's date of birth. Surely this is a prime use-case for OOP?
VCs are what a user holds. So generally with age you will be issued a credential that includes your age.
However, VCs have the concept of a 'presentation' which is what you show someone that wishes to verify something about you. The 'just show them the data and the digital signature' approach is one way to do presentations. But depending on the actual digital signature, you can also do presentations based on range proofs. Or general zero knowledge proofs.
So the issuer is not using range proofs. It's the holder who is presenting his credentials that gets to do range proofs. That does depend on the issuer using specific kinds of signatures, though.
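To illustrate the shape of that, here is a rough, non-normative sketch of a presentation that discloses only an "ageOver" claim. The proof type names are just examples of selective-disclosure-capable suites (e.g. BBS+), all ids and values are placeholders, and the actual derived-proof math is far more involved.
// Sketch: the envelope a holder hands to a verifier (derived-proof mechanics omitted).
const presentation = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiablePresentation"],
  verifiableCredential: [
    {
      // derived from the original credential, revealing only what the verifier asked for
      type: ["VerifiableCredential", "AgeCredential"],
      issuer: "did:example:dmv",
      credentialSubject: { ageOver: 21 },
      proof: { type: "BbsBlsSignatureProof2020" }, // selective-disclosure proof over the original
    },
  ],
  // the holder's own signature, bound to a challenge so it can't be replayed
  proof: { type: "Ed25519Signature2020", challenge: "verifier-supplied-nonce" },
};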
JWT is already a thing, as is X.509, OAuth/OpenID, WebAuthn... Just use a combination of these that best fits your use case.
"But this new standard will be the true unifying one". Nope, it will not. The most it will do is get some share of usage and add to the chaos.
I'm not sure that anyone seriously implementing verifiable credentials has claimed that it's the "true unifying one [standard]", as it's just a data model represented as JSON. So, unfortunately this comment reads to me as an uninformed strawman argument.
VCs can be represented as JWTs (read the spec), issued with X.509-based PKI issuers, extended with JSON-LD, and further ride on top of exchange protocols defined at OpenID for issuance/presentation. So, indeed it is a combination that best fits your use case; this is just another tool in the belt.
https://www.w3.org/TR/vc-data-model/#json-web-token
Offerings in the SSI/VC space are currently exploding, some even government-backed. Microsoft, MasterCard, Auth0, and the European Union are the biggest players that come to my mind.
This will turn the whole billion dollar KYC/identity verification space upside down.
I work in that space.
Would you mind expanding on the how? I am trying to get a handle on whether identity providers / VC providers are going to be 5 big firms, or if everyone will do it like everyone used to have an office stamp for banging a red-inked logo onto documents.
Feels like reinventing the past, looking at what people have done with just x509/PKI in multiple European countries. Though that's not to say there aren't a few fun ideas in there, it just seems much less mature with many mistakes repeated.
The new thing about VC is that it introduces a mechanism for claims (which is what JWTs contain) to be made and presented in an interoperable way (requiring schemas for data, so multiple parties can make sense of claims without knowing each other), something JWTs cannot do by themselves.
Also, OpenID is being extended to support self-issued claims:
https://openid.net/specs/openid-connect-self-issued-v2-1_0.h...
VCs integrate with the existing specs; they don't compete with them.
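Concretely, the interoperability comes from the credential pointing at its own vocabulary and schema, so a verifier that has never talked to the issuer can still interpret the claims. A rough sketch, with placeholder URLs and type names modeled on the spec's examples:
// Sketch: the vocabulary/schema travels with (or is referenced by) the credential itself.
const degreeCredential = {
  "@context": [
    "https://www.w3.org/2018/credentials/v1",
    "https://example.org/vocab/education/v1", // placeholder vocabulary
  ],
  type: ["VerifiableCredential", "UniversityDegreeCredential"],
  credentialSchema: {
    id: "https://example.org/schemas/degree.json", // placeholder schema
    type: "JsonSchemaValidator2018",
  },
  credentialSubject: { degree: { type: "BachelorDegree", name: "BSc Computing" } },
};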
I'm fine with a new standard if it can provide me something new that other standards don't. But does this do that?
Yes, you can get "partial" verification, like someone wants to know your age, and you can prove it without exposing all your other identifying information. With a centralised authority this is a pretty trivial thing. Nice to have it standardized I guess, but I don't see what's so exciting about it.
Cryptographic protocols are often defined in IETF/IRTF. You'll see things like Kerberos (authentication and attributes), OAuth (delegated authorization), and privacypass (anonymized authorization).
Many of these also have wire protocols too, such as OAuth describing HTTP API to get access tokens.
W3C has also defined some Web API for these concepts, for instance they have Web Crypto APIs as well as Web Authentication. But these concepts typically have split responsibility, such as Web Crypto being based on algorithms standardized in the IETF JOSE group, or Web Authentication being based on transports standardized under the FIDO Alliance.
So things tend to happen where they will be most successful, which means sometimes going to the place where all the right people are already participating.
This VC thing seems to take ID Tokens from OIDC providers a little further and also standardizes what claims you can expect.
JWT is a way to express claims. JWS is the underlying data format.
JWTs are still profiled for their usage - the JWT you use under ACMEv2 is going to be different than the ID Tokens you get from OpenID Connect, or some bespoke cookie format by a product/site.
A VC-JWT is a profile of JWTs for use with Verifiable Credentials. Part of this is adding new claims to hold JSON-LD formatted credential / presentation data.
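Decoded, a VC-JWT payload looks roughly like the sketch below (header and signature omitted): the registered JWT claims carry the envelope, and a vc claim carries the JSON-LD credential. The values are placeholders; the exact claim mapping is defined in the data model's JWT section.
// Sketch: decoded payload of a VC-JWT.
const vcJwtPayload = {
  iss: "did:example:issuer",    // maps to the credential's issuer
  sub: "did:example:holder123", // maps to credentialSubject.id
  nbf: 1667260800,              // maps to issuanceDate
  jti: "urn:uuid:1234",         // maps to the credential's id
  vc: {
    // the JSON-LD credential itself
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    type: ["VerifiableCredential", "UniversityDegreeCredential"],
    credentialSubject: { degree: { type: "BachelorDegree" } },
  },
};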
Hmm, I don't know that I'd consider JWT to be "just a data format". It's an envelope format (dotted base64'd JSON), combined with a schema for each component in the envelope. That schema isn't particularly strict when it comes to the payload component, but that doesn't mean it isn't a schema.
OIDC's well-known discovery[1] also does this kind of claim standardization/expectation setting already. But maybe it goes beyond that, and actually normalizes between different IdPs? I'm not sure what that would entail.
[1]: https://swagger.io/docs/specification/authentication/openid-...
I wonder what the benefits of this versus e.g. OpenID Connect[1] are: OIDC is already semi-widely adopted, reuses a popular underlying envelope scheme (JWTs), and performs a similar type of proof (that some identity provider claims something about an identity).
[1]: https://openid.net/connect/
The biggest problem with OIDC is how non-standard every implementation is.
I mean, there is a standard, but then there's what everyone actually does. Even within the standard, there is a very surprising amount of it that is... optional.
Even discovery endpoints are non-standard... basics like `/.well-known/openid-configuration` is recommended but not required... and don't even try to guess where /userinfo lives!
Claims are willy-nilly, and even some IdPs provide duplicate-in-intent but different-in-name claims, like `phone_verified` vs. `phone_number_verified`. It's just a complete wild west out there!
Anyone bringing some level of standards to the delegated authentication arena would be very welcome in my opinion.
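For reference, the discovery lookup itself is trivial when a provider actually publishes the document; the pain is that it's optional and its contents vary wildly between IdPs. A minimal sketch (the issuer URL is a placeholder):
// Sketch: fetch an IdP's discovery document, when one exists (Node 18+ global fetch).
async function discover(issuer) {
  const res = await fetch(`${issuer}/.well-known/openid-configuration`);
  if (!res.ok) throw new Error(`no discovery document at ${issuer}`);
  const config = await res.json();
  // Where /userinfo, the token endpoint, and supported claims *should* be advertised.
  return {
    userinfo: config.userinfo_endpoint,
    token: config.token_endpoint,
    claims: config.claims_supported, // often missing or incomplete in practice
  };
}

discover("https://idp.example.com").then(console.log);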
I agree completely about OIDC's discovery limitations! If this standard can improve along that axis, then that alone will make it a valuable contribution to the identity space.
I also agree about standardized claim names, although I'll point out that standardizing something like `phone_verified` just pushes the identity/claim value question one level deeper: what does it mean for IdP A to have `phone_verified` versus IdP B? Do they have the same ontological value? That's part of why (IMO) "generalized" identity management has never succeeded: you can make everybody generate the same claims, but you can't assert that they've done a uniform or sufficient degree of diligence for those claims. The only way you can do the latter is to select "high quality" IdPs, at which point the consistency of the claim names no longer matters.
I'm sure you've read it but I have to mention it for good measure. OAuth 2.0 and the Road to Hell: https://gist.github.com/nckroy/dd2d4dfc86f7d13045ad715377b6a...
For extra context, there is currently work ongoing in the OIDC standards community to support Verifiable Credential Issuance[0] as well.
Even better, this verifiable credentials work is intended to integrate well with self-issued identities, which they are also working on[1], under the name Self-Issued OpenID Provider v2 (SIOPv2).
[0] https://openid.net/specs/openid-connect-4-verifiable-credent...
[1] https://openid.net/specs/openid-connect-self-issued-v2-1_0.h...
As someone else pointed out, there is work in OpenID Connect to support this model.
The difference is that traditional Connect is typically a two party model - an OpenID Provider which gives claims which can be used for registration/authentication, and a Relying Party willing to accept them. This is an active dance back and forth, with the OpenID Provider deciding how to implement privacy, what records to keep on usage, prompting for user consent, etc.
Verifiable Credentials have Issuers and Verifiers which map reasonably well into these roles, but also an end user agent in the middle which acquires credentials, holds onto them and presents them with user consent.
While the verifier needs to still know who the issuer is to know if it should trust the data, the issuer no longer needs any relationship whatsoever with the verifier. The issuer does not see where credentials are being used, or (for credentials with selective disclosure) which information is being disclosed.
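A toy version of that three-party flow, just to make the "no issuer-verifier relationship" point concrete, using Node's built-in crypto module; real systems add DIDs, revocation, holder binding, and presentation proofs on top:
// Sketch: the issuer signs once; the verifier later checks offline using only the issuer's public key.
const { generateKeyPairSync, sign, verify } = require("crypto");

// Issuer (e.g. a university) signs a claim and hands it to the holder.
const issuerKeys = generateKeyPairSync("ed25519");
const credential = Buffer.from(JSON.stringify({ subject: "did:example:alice", degree: "BSc" }));
const issuerSignature = sign(null, credential, issuerKeys.privateKey);

// Holder stores { credential, issuerSignature } in a wallet and presents it whenever asked.

// Verifier: needs the issuer's public key (from a registry or out of band), never the issuer itself,
// so the issuer sees neither where nor how often the credential is used.
console.log(verify(null, credential, issuerKeys.publicKey, issuerSignature)); // true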
Just to add to your excellent description, one can draw an analogy between the id_token and a VC.
But an id_token usually has an audience which is the RP, and a short expiration. A VC is issued for the user (aka holder), with long or no expiration to store in wallet.
A VC is bound to the user’s DID (think public key thumbprint) and is useless without a proper presentation. A verifier does not expect just the VC but a Verifiable Presentation signed by the user.
This is where using id_tokens as VCs will fall short. Once you give one to a verifier, you could assume it is public.
The good thing about VCs is that it is a standard and easy to grasp. There are too many flavors, though.
Call me a cynic but all this proves is that someone vouched for the stated credential being associated with a key I currently hold?
i.e. you have to check imperial.ac.uk/.well-known/auth0-vc.pub or whatever, and if that matches, still all you know is that I have the key (or device, whatever), not necessarily that it was truly me. And if you don't check the issuer (or don't trust its claim - impeerial.ac.uk saying that the bearer has a degree from imperial.ac.uk, for example) then it doesn't tell you anything.
Of course that's not really avoidable... but I can't think of a use case for this that doesn't just reduce to 'the issuer may as well just publish the contents' - it's useful when you want to only selectively share a credential from party A with party B, and it's something that B has reason to doubt/verify... but I can't think of an example?
Right, and to better illustrate this problem, they have a link in "verificationMethod" pointing to the identity0.io domain, which was registered just yesterday - could be just someone snatching that domain.