For those interested, I recommend reading how FIDO U2F works. There's more in a security key than just FIDO U2F, but FIDO U2F is easily the most ergonomic system that these security keys support. Simplified:
* The hardware basically consists of a secure microprocessor, a counter which it can increment, and a secret key.
* For each website, e.g., GitHub, it computes an HMAC-SHA256 of the domain (www.github.com) with the secret key, and uses this to generate a public/private keypair. This is used to authenticate.
* To authenticate, the server sends a challenge, and the security key sends a response which validates that it has the private key. It also sends the nonce, which it increments.
If you get phished, the browser would send a different domain (www.github.com.suspiciousdomain.xx) to the security key and authentication would fail. If you somehow managed to clone the security key, services would notice that your nonces are no longer monotonically increasing and you could at least detect that it's been cloned.
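A rough sketch of that flow in Python (a toy model: real keys do ECDSA over P-256 rather than a bare MAC, and all names here are illustrative, but the origin-binding idea is the same):

```python
import hashlib
import hmac

DEVICE_SECRET = b"burned-in-at-manufacture"  # illustrative

def derive_site_key(origin: str) -> bytes:
    # Per-site key material derived from the device secret and the origin;
    # a lookalike phishing domain yields an unrelated key.
    return hmac.new(DEVICE_SECRET, origin.encode(), hashlib.sha256).digest()

def sign_challenge(origin: str, challenge: bytes, counter: int) -> bytes:
    # Toy "signature": MAC over the challenge plus the monotonic counter.
    site_key = derive_site_key(origin)
    return hmac.new(site_key, challenge + counter.to_bytes(4, "big"),
                    hashlib.sha256).digest()

real = sign_challenge("www.github.com", b"server-challenge", 7)
fake = sign_challenge("www.github.com.suspiciousdomain.xx", b"server-challenge", 7)
assert real != fake  # the phishing origin cannot produce a valid response
```

The server only ever learns the public half of the real scheme's keypair, so nothing phishable crosses the wire.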
I'm excited about the use of FIDO U2F becoming more widespread; for now, all I use it for is GitHub and Gmail. The basic threat model is that someone gets network access to your machine (but they can't get credentials from the security key, because you have to touch it to make it work) or someone sends you to a phishing website but you access it from a machine that you trust.
It's also tremendously more efficient to tap your finger on the plugged-in USB key than it is to wait for a code to be sent to your phone or go find it in an app to type in. I've added it to everything that allows it, more for convenience than security at this point.
Most places that allow it require that you have a fallback method available.
One thing I don't understand is why apps like Authy or Google Authenticator aren't using push notifications to let you auth directly by unlocking or Touch ID instead of having to go through the app. If you really want the user to type something, you can still use a push notification for easy app access.
That's the single reason I got a smart watch: just to have my 2FA codes on my wrist instead of getting my phone out of my pocket (I'm using Authenticator+).
>"It's also tremendously more efficient to tap your finger on the plugged-in USB key than it is to wait for a code to be sent to your phone or go find it in an app to type in."
But with regular TOTP and a software device on a smartphone, I can print out backup codes in case I lose my phone. These let me log in and reset my 2FA token. What happens if you lose your YubiKey or similar? I guess this doesn't matter as much in an enterprise setting where there is a dedicated IT department, but for individual use outside the enterprise, doesn't TOTP with a software device have a better story in case of loss of the 2FA device?
If you're interested in seeing how it works in action, I built a U2F demo site that shows all the nitty-gritty details of the process - https://mdp.github.io/u2fdemo/
> If you somehow managed to clone the security key, services would notice that your nonces are no longer monotonically increasing and you could at least detect that it's been cloned.
At least a year ago or so (last time when I checked) most services didn't appear to check the nonce and worked fine when the nonce was reset.
If you can reset the nonce without resetting the key you can probably retrieve the key easily if you can read the traffic. The service should not need to check the nonce, and adding that much state is going to be complicated.
The counter feature is dubious. You correctly describe the upside - if Bob's device says 4, then 26, then 49 it's weird if the next number is 17, and we may suspect it's a clone.
But there are many downsides, including:
Devices now need state, pushing up the base price. We want these things to be cheap enough to give away.
The counter makes tokens weakly trackable. If Facebook knows your token's counter was 205 when you signed in at work this morning and 217 when you signed in from your iMac this evening, somebody who visited GitHub at midday with counter 213 might be you; someone with counter 487 definitely isn't you, or at least not with the same token.
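The upside mentioned above (flagging a counter that fails to increase) is cheap on the server side. A minimal sketch, with names of my own invention:

```python
# Last counter value seen per credential, keyed by credential ID.
last_seen: dict[str, int] = {}

def accept_assertion(credential_id: str, reported_counter: int) -> bool:
    # A counter that does not strictly increase suggests a cloned
    # (or rolled-back) authenticator, so reject it.
    prev = last_seen.get(credential_id, -1)
    if reported_counter <= prev:
        return False
    last_seen[credential_id] = reported_counter
    return True

assert accept_assertion("bob", 4)
assert accept_assertion("bob", 26)
assert accept_assertion("bob", 49)
assert not accept_assertion("bob", 17)  # out of order: possible clone
```

The state cost lands on the service (one integer per credential), which is part of the "adding that much state" complaint above.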
> Devices now need state, pushing up the base price. We want these things to be cheap enough to give away.
State is only expensive when it adds a significant amount of die area or forces you to add additional ICs. If you need a ton of flash memory, you can't put it on the same die because the process is different, and adding a second IC bumps up the cost. However, staying with the same process you used for your microcontroller, you can add some flash with much worse performance... which is a viable alternative if you only need a handful of bits. Your flash is slower and needs more die area, but it's good enough.
> The counter makes tokens weakly trackable. If Facebook knows your token's counter was 205 when you signed in at work this morning and 217 when you signed in from your iMac this evening, somebody who visited GitHub at midday with counter 213 might be you; someone with counter 487 definitely isn't you, or at least not with the same token.
What kind of ridiculous threat model is this? "Alice logs into Facebook and GitHub, and Bob, who has compromised both Facebook and GitHub's authentication services..." Even then, it's not guaranteed, because the device might be using a different counter for Facebook and GitHub.
> Devices now need state, pushing up the base price.
You can buy pretty decent (16 MHz, 2 KB EEPROM, 8 KB flash) microcontrollers for less than twenty cents (my numbers are from 7 years ago; things are probably cheaper, faster and bigger now). A few bytes of stable storage -- whatever you need to safely increment a counter and not lose state to power glitches -- are not going to add significantly to the cost of a hardware token.
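On the power-glitch point, the classic trick is a two-slot counter: always overwrite the slot holding the older value, so an interrupted write can only corrupt the stale copy. A toy Python model (the slot layout and checksum are illustrative, not any particular chip's scheme):

```python
def checksum(v: int) -> int:
    # Illustrative integrity check stored alongside each counter value.
    return (~v) & 0xFFFFFFFF

class SafeCounter:
    def __init__(self) -> None:
        # Two simulated flash slots, each holding (value, checksum).
        self.slots = [(0, checksum(0)), (0, checksum(0))]

    def _valid(self, i: int) -> bool:
        v, c = self.slots[i]
        return c == checksum(v)

    def read(self) -> int:
        return max(v for (v, c) in self.slots if c == checksum(v))

    def increment(self) -> int:
        new = self.read() + 1
        # Overwrite the corrupt or older slot; a glitch mid-write
        # leaves the newer value in the other slot untouched.
        idx = min(range(2),
                  key=lambda i: self.slots[i][0] if self._valid(i) else -1)
        self.slots[idx] = (new, checksum(new))
        return new

c = SafeCounter()
assert c.increment() == 1 and c.increment() == 2
c.slots[0] = (123, 0)   # simulate a write interrupted by power loss
assert c.read() == 2    # the other slot still holds the last good value
```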
I can think of a few ways to reduce that tracking risk. The token could use a per-site offset (securely derived from the per-site key) added to the global counter, and/or could have a set of global counters (using a secure hash of the per-site key to select one). I don't know how much that would increase the cost, or if there's something on the standard mandating a single global counter.
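The counter-bank idea can be sketched in a few lines (bank size and hash choice are my own assumptions, not anything mandated by the spec):

```python
import hashlib

NUM_COUNTERS = 8  # small bank of independent counters (illustrative)

def counter_index(per_site_key: bytes) -> int:
    # Hash the per-site key to pick a counter deterministically, so no
    # single globally increasing value is observable across all sites.
    return hashlib.sha256(per_site_key).digest()[0] % NUM_COUNTERS

# The same site always maps to the same counter...
assert counter_index(b"key-for-github") == counter_index(b"key-for-github")
# ...and the index always falls inside the bank.
assert 0 <= counter_index(b"key-for-facebook") < NUM_COUNTERS
```

With 8 counters, two colluding sites land on the same counter only 1 time in 8, which weakens the cross-site correlation considerably at the cost of a few extra words of storage.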
Touch to auth is also the part that Google ignores for some strange reason. Their high security Gmail program defaults to remembering the device! There isn't a way to disable it either.
Do you know why GNUK (the open source project used by Nitrokey and some other smart cards) chooses not to support U2F? I don't understand the maintainer's criticisms[0] and I'd like to probe someone knowledgeable to find out more.
I am having some trouble understanding as well. Here is what I understand.
The point of GNUK is to move your GnuPG private key to a more secure device so it doesn't have to sit on your computer. With GnuPG, users are basically in control of the cryptographic operations: what keys to trust, what to encrypt or decrypt, etc.
With U2F, in order to comply with the spec you are basically forced to make a bunch of decisions that don't necessarily line up with GNUK's goals. You have to trust X.509 certificates and the whole PKI starting from your browser (CA roots and all that). Plus, U2F is basically a cheaper alternative to client certificates, but with GNUK you already have client certificates, so why go with something that provides less security?
To elaborate: With GnuPG, the reason you trust that Alice is really Alice is because you signed her public key with your private key. You can then secure your private key on a hardware device with GNUK. With FIDO U2F and GMail, you have to trust that you are accessing GMail, which is done through pinned certificates and a chain of trust starting from a public CA root. This system doesn't offer you much granularity for trusting certificates. Adding FIDO U2F to a system designed to support a GnuPG-like model of trust dilutes the purpose of the device. By analogy, imagine if you used your credit card to log in to GMail, maybe by using it as the secret key for U2F. The analogy isn't great, but you can imagine that even if you can trust that (through the protocol) GMail can't steal your credit card number, the fact that you are waving your credit card about all the time makes it a little less secure.
In general, people who work on GnuPG and other similar cryptography systems tend to be critical of the whole PKI, and I'm sympathetic to that viewpoint.
I appreciate the summary, but it's still a bit unclear to me. What do you mean by "for each website"? Certainly that doesn't mean every website in existence, so there must be some process by which a new website is registered with the hardware and the key communicated to the site?
But if so, I don't see how that solves the problem of "user goes to site X for the first time, mistakenly thinking it's Github." That registers a new entry for the site and sends different credentials/signatures than it would send to Github. But site X doesn't care that they're invalid, and logs you in anyway, hoping you will pass on some secret information.
Normal MFA is the user answering a challenge. Hopefully that challenge came from the expected site, but it is up to the user to verify the authenticity of the site. If the username/password/OTP challenge came from someone actively phishing the user, the phisher can use the user's responses to create a session for their own nefarious purposes.
Verifying the authenticity of a site is something that has been demonstrated both to be nontrivial and also something that the majority of users cannot do successfully.
U2F/WebAuthn tie the identity of the requesting site into the challenge - by requiring TLS and by using the domain name that the browser requested. So if the user is being phished, the domain mismatch will result in a challenge that cannot be used for the legitimate site.
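On the relying party's side, that check is essentially a comparison against the origin the browser attests to. A sketch using WebAuthn-style client data (the expected origin is illustrative):

```python
import json

EXPECTED_ORIGIN = "https://www.github.com"  # what the legitimate site expects

def origin_matches(client_data_json: bytes) -> bool:
    # The browser, not the page, fills in the origin field, so a phishing
    # page cannot claim to be the legitimate site.
    client_data = json.loads(client_data_json)
    return client_data.get("origin") == EXPECTED_ORIGIN

good = json.dumps({"type": "webauthn.get",
                   "origin": "https://www.github.com"}).encode()
bad = json.dumps({"type": "webauthn.get",
                  "origin": "https://www.github.com.suspiciousdomain.xx"}).encode()
assert origin_matches(good)
assert not origin_matches(bad)
```

Since this client data is also covered by the key's signature, the server rejects a phished assertion even if the attacker relays it in real time.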
Solely going by GP's summary, nothing needs to be 'registered with the hardware' because the public/private keypair is deterministically generated on-the-fly, cheaply, with a PRNG every time it's needed. Only two things are ever in the nonvolatile storage on the device: the secret key used as entropy to generate those keypairs, and the single global counter.
The system makes it impossible for phishing sites to log in to your account using your credentials. That's the threat model it guards against.
Entering 'secret information' that isn't user credentials just plain isn't part of it. Though wouldn't anyone phished by e.g. FakeGmail already get suspicious if they don't see any of their actual emails that they remember from the last time they logged in to Gmail?
It could still be susceptible to a user mistaking fakegithub.com for github.com, but a pairing with github.com will never work with a request from a server at fakegithub.com. Likewise, github.com cannot request the user to sign an auth challenge for fakegithub.com. The requesting server is directly tied to the signature response.
> For each website, e.g., GitHub, it computes an HMAC-SHA256 of the domain (www.github.com) with the secret key, and uses this to generate a public/private keypair. This is used to authenticate.
Can one usb device work on two separate accounts for a given domain, (e.g. work gmail and personal gmail), or do you need two of them?
One device can work on two separate accounts, no problem. For the same reason you can use the same password for two different accounts (although there are other reasons why you wouldn't want to do that).
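One way this works: each registration mints a fresh random key handle, and the credential key is derived from the device secret plus the origin plus that handle, so one device yields independent credentials for two accounts on the same domain. A toy sketch (the derivation details are illustrative, not the actual U2F key-wrapping scheme):

```python
import hashlib
import hmac
import os

DEVICE_SECRET = b"device-secret"  # illustrative

def register(origin: str) -> tuple[bytes, bytes]:
    # A fresh random handle per registration keeps credentials independent
    # even when the origin and the device are identical.
    handle = os.urandom(16)
    key = hmac.new(DEVICE_SECRET, origin.encode() + handle,
                   hashlib.sha256).digest()
    return handle, key

h_work, k_work = register("accounts.google.com")
h_personal, k_personal = register("accounts.google.com")
assert k_work != k_personal  # same device, same domain, distinct credentials
```

The site stores the handle at registration and presents it at login, so the device can re-derive the right key without storing anything per account.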
Do you have a recommended device? Ideally it would work reasonably well with iphone as well as macbooks (unfortunately both usb-A and a courage's worth of usb-C).
U2F is fantastic. I wish Apple supported it in Safari (hoping!).
Also, YubiKey 4 is a great device. Set it up with GnuPG and you have "pretty good privacy" — with convenience. I recommend this guide for setting things up: https://github.com/drduh/YubiKey-Guide
The great thing about YubiKeys is that apart from U2F, you also use them for OpenPGP, and the same OpenPGP subkey can be used for SSH. It's an all-in-one solution.
That vuln only affected RSA keys generated for specific niche functionality and not most uses of the YubiKey.
> The issue weakens the strength of on-chip RSA key generation and affects some use cases for the Personal Identity Verification (PIV) smart card and OpenPGP functionality of the YubiKey 4 platform. Other functions of the YubiKey 4, including PIV Smart Cards with ECC keys, FIDO U2F, Yubico OTP, and OATH functions, are not affected. YubiKey NEO and FIDO U2F Security Key are not impacted.
I'd like to mention that I've been testing the Krypton app (iOS only for now) for U2F. You install Krypton on your iOS device, and it creates keys that live only on the device. You then install the extension for Chrome. When U2F is requested, the extension sends the challenge to the iOS device, which calculates the response and sends it back. The app can be configured to require approval or to always send the response.
I wish you just had the workstation download on the homepage again. I had to find your homebrew bottle GitHub repo to figure out how to install Krypton on my new MacBook.
When I started using Krypton for ssh and code-signing last year, the first thing I did was ask the Krypton team on twitter if they were going to add U2F. Glad to hear it’s in beta! It’s rarer these days to subsume another device into our phones’ functionality, but it’s still a good feeling.
Am I the only one who is disappointed in the seeming stalling of traction for U2F? Google, GitHub, and Facebook supported U2F 2 years ago - and all I can see is that Twitter, Dropbox, and niche security sites like KrebsOnSecurity.com have added support since then. Sure, it's something, but in 2 years I would have expected more - who am I missing? Without more websites, the consumer mass market has little incentive to adopt - and without users, websites have little incentive to support U2F - thereby furthering the stalling.
I needed a new bank and thought surely there will be one that offers U2F.. days of searching later, and I still have yet to find one that does. It seems like the vast majority of online banks don't even support any kind of 2FA except email/text. Really really sad.
For regular guys like me, I can't think of any online service more important to protect than my bank account.
Banks seem very slow to adapt to technology. For years after the release of the first iPhone, my credit union still used a Flash login, although they did have a mobile login link you could get from them by asking.
U2F was never fully supported in browsers making it hard for sites to deploy it everywhere. The new WebauthN standard is going to be supported everywhere which makes it more likely that sites will actually use it.
Something like U2F is never going to find mass success in a consumer application. Every enterprise auth provider supports it, which is its major use case for now.
I guess this is a dumb question, but is it still "multi factor authentication" if you only use a single physical device to complete the login process?
The way the article is written, it makes it sound like the physical key is a replacement for 2FA instead of just a hardware device that handles the second factor (while leaving the password component in place).
OK thanks, this clarifies the part that says "it began requiring all employees to use physical Security Keys in place of passwords and one-time codes," which I found super confusing.
> I guess this is a dumb question, but is it still "multi factor authentication" if you only use a single physical device to complete the login process?
This is a common misconception. The threat model of 2FA is not "I lost my device, and it is now in the hands of someone who knows the password".
The threat model of 2FA is one of:
1) "An attacker has gained remote access to my computer, but not physical access"
2) "I have been targeted by a sophisticated phishing attack, and I trust the machine that I am currently using"
TOTP (and even SMS) protects against (1) in most cases, though U2F is still preferable. U2F is the only method that protects against (2).
> U2F is the only method that protects against (2)
A bit of clarification: U2F protects against phishing attacks by automatically detecting the domain mismatch when a link from a phishing email sends you to g00gle.com rather than google.com, which is something that a human might overlook while they're typing in both their password and the second factor they've been sent via SMS. However, if someone were to use a password manager and exclusively rely on autocomplete to fill in their passwords, then that would also alert them that something was fishy when their browser/password manager refuses to autocomplete their Google password on g00gle.com. So this isn't exactly the only method that protects against the second scenario above... though I will concede that using a password manager in this way would sort of change 2FA from "something you know and something you have" to "these two somethings you have" (your PC with your saved passwords and your USB authenticator), which is something that might be worth considering.
Regardless, these physical authenticators are a huge step up from SMS and I'm very happy that an open standard for them is being popularized and implemented in both sites and browsers.
I always thought the 2FA threat model was "Someone acquired my password" or else "someone has access to my email account and may try to do password resets by email."
> Google has not had any of its 85,000+ employees successfully phished on their work-related accounts since early 2017, when it began requiring all employees to use physical Security Keys in place of passwords and one-time codes, the company told KrebsOnSecurity.
If they "use (physical security keys) in place of (passwords and one-time codes)", that would no longer be MFA: they're authing strictly with "something they have".
A more in-depth quote is later in the article: "Once a device is enrolled for a specific Web site that supports Security Keys, the user no longer needs to enter their password at that site (unless they try to access the same account from a different device, in which case it will ask the user to insert their key)."
The parenthetical seems to imply that they're doing initial auth (and thus cookie generation) with password + U2F, and then re-validating U2F for specific actions / periodically without re-prompting for the password, similarly to how GitHub prompts for "sudo mode" when you do specific account actions.
Poorly worded (or possibly misunderstood) — it was password + OTP → password + U2F. (In practice the OTP was also usually supplied by a dedicated USB stick, so the change was mostly transparent.)
Now the real question becomes: how often were they getting phished before the new policies? Knowing Google, there's no way they will answer THAT before another decade.
Huh? The article literally is about 2FA, as originally conceived. It isn't replacement for 2FA -- it is 2FA.
The key (sic) thing about U2F isn't that it is new and special (it isn't -- it's plain old 2FA as used for more than a decade) but rather that it is practical to deploy for smaller organizations. You don't need to buy large quantities of keys. You don't need a special server. You don't need staff with special skills to deploy it. It works with "cloud" providers like Google and Facebook, out of the box (the same key as you use for your internal services).
Not quite. A 6-digit code can be phished out of users pretty easily. They'll enter it anywhere it's asked, similar to a password.
However, the U2F and FIDO specs require a cryptographic assertion (with all that replay-attack mitigation stuff like nonces) that makes it so an attacker cannot reuse a token touch. I'd encourage a glance over this: https://fidoalliance.org/specs/fido-u2f-v1.0-ps-20141009/fid...
Sadly the Wikipedia article doesn't have a good layman's explanation yet, but I'm sure it will soon.
Yes, at a high level it's still 2FA, but like most options in any factor of auth, it can be improved upon. (For a simple case, take fingerprint readers and look at the advances in liveness checks and how many unique points they require.)
Microsoft's position is interesting. The article states Edge will be implementing support this year. They run GitHub, which supports U2F.
But Microsoft are in the process of launching a new MFA and password management product in Office 365/Azure, and I'm informed U2F isn't on the roadmap.
I wonder what happens when you forget your key at home. If I forget my keyfob for work, I usually have to do the walk of shame, around the hallway from the reception desk, whenever I use the bathroom. But I can still get in. And if I really wanted to, I could get a temp key for the day.
Do companies like Google et al. have security departments that give people temporary keys that expire after a day, or do they have to run back home?
It's convenient only when you physically have the security key; it's a hassle if you forgot or lost it.
You just need a U2F key to try it.
Our internal gmail might not require it every day, but most systems at Google do. You can't get very far without it.
[0] https://lists.gnupg.org/pipermail/gnupg-users/2017-March/057...
Am I missing something?
Is there a requirement that FIDO be implemented on a hardware device?
thank you
https://www.linux.com/blog/2018/3/nitrokey-digital-tokens-li...
https://www.yubico.com/support/security-advisories/ysa-2017-...
And, if you lose your fob or your backup fob you're boned.
Maybe you're talking about the U2F applet of the YubiKey? That's not affected by the bug you posted. And you should have backup codes enabled.
The app also supports SSH keys.
Works very well for me and the service is free. https://krypt.co/
Last month I tried to make an e-banking account in South Europe. In 2018.
- They required "6-12 characters as a password, and no special characters". You can't hash special chars?
- Apparently it's okay, because "2FA". Which is a "changeable via a call" 4-digit code, of which the bank employee knows "only" two digits.
I'd be far more inclined to trust Twitter or GitHub than my bank with my data.
The problem is that all of these things are a PITA to administer.
I wanted a VPN between our two offices. Cool. I'll buy some YubiKeys, type some command line magic on Linux and I'll be good to go ...
Psych!
This stuff is fine if you have 100+ people and the resources to administer.
If you simply want to manually distribute stuff to <10 people, it's a nightmare.
Until I can set up something easily at the 10-person level and scale it gradually to 100+, this stuff is going to remain tractionless.
You can already use the same process on your GMail if you have a compatible U2F key.
Would you be able to elaborate on this? I'm not understanding the difference between TOTP and the physical key from the article for this scenario.
Any other 2FA method can.
Short version: the keys are matched directly from the device to the site making it virtually impossible to phish unless you control the site itself.