This is one of the all-time cryptography footguns, an absolutely perfect example of how systems development intuition fails in cryptography engineering.
The problem here is the distinction between an n-bit random number and an n-bit modulus. In DSA, if you're working with a 521-bit modulus and you need a random k value for it, k needs to be random across all 521 bits.
Systems programming intuition tells you that a 512-bit random number is, to within mind-boggling tolerances, as unguessable as a 521-bit random number. But that's not the point. A 512-bit modulus leaves 9 zero bits, which are legible to cryptanalysis as bias. In the DSA/ECDSA equation, this reduces through linear algebra to the Hidden Number Problem, solvable over some number of sample signatures for the private key using CVP.
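A toy sketch of the bias (not PuTTY's actual code, just the shape of the mistake): derive a "nonce" for a 521-bit order from a single SHA-512 digest. The digest is only 512 bits wide, so the top 9 bits of every nonce are structurally zero, a bias invisible to casual testing but legible to lattice cryptanalysis.

```python
import hashlib
import secrets

ORDER_BITS = 521  # width of the P-521 group order

def biased_nonce(seed: bytes) -> int:
    # SHA-512 supplies only 512 bits, 9 short of the order's width,
    # so bits 512..520 of the result can never be set.
    return int.from_bytes(hashlib.sha512(seed).digest(), "big")

for _ in range(100):
    k = biased_nonce(secrets.token_bytes(32))
    assert k.bit_length() <= 512 < ORDER_BITS  # top 9 bits never set
```

Every single nonce shares the same 9 known-zero bits, which is exactly the kind of structured leakage the HNP/lattice machinery consumes.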
What's really interesting to me is that there was a known solution to the DSA/ECDSA nonce generation problem, RFC 6979, which was published 4 years before the vulnerability was introduced into PuTTY. And it sounds like the developer knew about this RFC at the time but didn't implement it because the much earlier version of deterministic nonce generation that PuTTY already had seemed similar enough and the differences were assessed to not be security critical.
So I think the other lesson here is that deviating from a cryptographic right answer is a major footgun unless you understand exactly why the recommendation works the way it does and exactly what the implications are of you doing it differently.
I think 6979 is a bit of a red herring here. 6979 is about deterministic nonce generation, which is what you do to dodge the problem of having an insecure RNG. But the problem here isn't that the RNG is insecure; it's more fundamentally a problem of not understanding what the rules of the nonce are.
But I may be hair-splitting. Like, yeah, they freelanced their own deterministic nonce generation. Either way, I think this code long predates 6979.
I don't think anybody consciously looked at 9 zero bits and thought "this is fine"; rather it looks like an unfortunate effect of plugging old code into a new algorithm without proper verification.
You could be right. If you look at the old code, dsa_gen_k(), that was removed during the commit (https://git.tartarus.org/?p=simon/putty.git;a=commitdiff;h=c...), it does basically no bounds checking, presumably because at the time it was written it was assumed that all modulus values would be many fewer bits than the size of a SHA-512 output.
So it would have been pretty easy to just reuse the function for a modulus value that was too big without encountering any errors. And the old code was written 15+ years before it was used for P-521, so it's entirely possible the developer forgot the limitations of the dsa_gen_k() function. So maybe there's another lesson here about bounds checking inputs and outputs even if they don't apply to anything you're currently doing.
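A minimal sketch of that bounds-checking lesson (hypothetical function name, not PuTTY's actual code): fail loudly when the digest can't cover the modulus, instead of silently returning a biased value the way the old dsa_gen_k() did.

```python
import hashlib

def gen_k_from_hash(order: int, seed: bytes) -> int:
    """Derive a nonce below `order` from a SHA-512 digest, refusing to
    run when the digest is narrower than the order -- the silent
    failure mode described above."""
    digest = hashlib.sha512(seed).digest()
    if order.bit_length() > len(digest) * 8:
        raise ValueError(
            f"order needs {order.bit_length()} bits, "
            f"digest only supplies {len(digest) * 8}"
        )
    return int.from_bytes(digest, "big") % order
```

The check costs one comparison and would have turned a silent 15-years-later key-leak into an immediate, obvious error.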
The very existence of 521-bit ECDSA is a footgun just waiting to go off.
To any programmer who is accustomed to thinking in binary but hasn't heard the full story about why it ended up being such an odd number, 521 is virtually indistinguishable at a glance from the nice round number that is 512. Heck, when I first read about it, I thought it was a typo!
The size is unexpected, but I believe this would have been an issue even if it really was 512-bit ECDSA rather than 521. Taking a random 512-bit number, which is what the PuTTY nonce function produced, and taking it modulo another 512-bit number, would also bias the output. Not as severely as having 9 bits that are always zero, but enough to be potentially exploitable anyway.
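The modulo bias is easy to verify exhaustively at toy sizes (made-up 16-bit parameters below, purely for illustration): reducing a uniform b-bit value by a b-bit modulus n hits the residues below 2^b − n exactly twice as often as the rest.

```python
# Count every residue of x % n over all 16-bit x.
b = 16
n = (1 << 15) + 12345          # a 16-bit "modulus" (45113)
counts = {}
for x in range(1 << b):        # every 16-bit value, uniformly
    r = x % n
    counts[r] = counts.get(r, 0) + 1

threshold = (1 << b) - n       # residues below this are double-counted
assert all(c == 2 for r, c in counts.items() if r < threshold)
assert all(c == 1 for r, c in counts.items() if r >= threshold)
```

At 512 bits the skew per residue is tiny, but across enough signatures it is still a detectable, exploitable bias.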
To avoid this issue, you either want your random value to be significantly larger than the modulus (which is what EdDSA does) or you want to generate random values of the right number of bits until one happens to be smaller than the modulus (which is what RFC 6979 does).
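Both strategies can be sketched in a few lines (the idea only; the real RFC 6979 construction derives its candidates deterministically from an HMAC-DRBG rather than a system RNG):

```python
import secrets

def k_by_rejection(order: int) -> int:
    # RFC 6979-style idea: draw exactly order.bit_length() bits and
    # retry until the candidate lands in [1, order - 1]. No modular
    # reduction happens, so no bias is introduced.
    while True:
        k = secrets.randbits(order.bit_length())
        if 0 < k < order:
            return k

def k_by_wide_reduction(order: int) -> int:
    # EdDSA-style idea: draw 64 bits more than the order needs, then
    # reduce; the residual bias is on the order of 2**-64, negligible.
    return secrets.randbits(order.bit_length() + 64) % order
```

For P-521 the rejection loop runs about twice on average (the order is just above 2^520), which is the price paid for an exactly uniform result.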
$ bc
bc 1.07.1
Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006, 2008, 2012-2017 Free
Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
2^521-1
68647976601306097149819007990813932172694353001433054093944634591855\
43183397656052122559640661454554977296311391480858037121987999716643\
812574028291115057151
> Systems programming intuition tells you that a 512-bit random number is, to within mind-boggling tolerances, as unguessable as a 521-bit random number.
Sure, but the other half of systems programming intuition tells you "the end user is going to truncate this value to 8 bits and still expect it to be random".
It's not the difference between an n-bit random number and an n-bit modulus. It's the difference between a 512-bit random number and a 521-bit random number. It's very simple, but wording it as number vs. modulus is needlessly confusing, just adding to the problem you are bemoaning.
The issue with cryptography is that you have to be precise; that means the communication needs to involve far more detail, even if it can initially be confusing.
This is one of the major reasons that crypto is hard, and if you try to get around the "hard" bit your "shortcut" will probably come back to bite you. When it comes to crypto and accuracy (and hence security), more communication, and more detailed communication, are probably the solution, not the problem.
I feel like there should be some intuition like a cable rated for 100kg is not suitable for holding 110kg, therefore 512 bits of entropy is not rated to be 521 bits of entropy?
Oh well, there’s this very popular library which generates 256-bit keys by setting the last 128 bits to a value derived from the first 128 bits. So I guess in agreement with your post: actually achieving full entropy is not obvious.
No, it’s actually far worse than that. This is like if you bought prestressed concrete rated for 100kg and you loaded it with 50kg. This is less than the limit, so it’s good, right? Nope: the way it works is that you have to give it exactly 100kg of load or else it’s weak to tension and your building falls over in the wind. The problem here is not that they needed 521 bits of entropy and 512 was too little, but that 521 bits of "entropy" of which 512 are legit and the top 9 are all zeroes breaks the algorithm completely and makes it not secure at all. In fact I think copying 9 bits from the other 512, while not great, would have probably made this basically not a problem. I am not a cryptographer though, so don’t quote me on that ;)
The article has a good writeup. Clear, actionable, concise.
If you have a bit of instinct for this, it feels obvious that 'reducing' a smaller number by a larger one is not going to obscure the smaller one in any meaningful way, and instead it will leave it completely unchanged.
I don't think this is so much what you make it out to be, but a poor understanding of basic discrete maths. (Also, I think you mean the 521-bit modulus leaves 9 zero bits; the modulus normally refers to the divisor, not the remainder.)
Here it is the difference between a bit that is part of a cryptographically-safe random number and just happens to be zero, but had equal chance of being one, and a bit that is zero every time because of the way the numbers are being generated.
I don't have a substantive comment to offer, but good on Simon Tatham for the clear and forthcoming write-up. No damage-control lawyerly BS, no 'ego', just the facts about the issue.
It's reassuring to see a solid disclosure after a security issue, and we too often see half-truths and deceptive downplaying, e.g. LastPass.
Yes, Simon is a brilliant person (hi Simon!) and would be the last person on earth to do any spin. He also doesn't owe anyone anything, PuTTY was a gift from him to the world when there was no good alternative on Windows, a gift that has had an incalculably large benefit to so many people that no one should forget.
I had the pleasure to meet him in person and the guy is just so grounded and nice to interact and help you with stuff in a non-judgmental way.
Many people I know, with less than 1% of his contributions to OSS, have inflated egos and are just full of themselves, so it is refreshing to have people such as Simon in the OSS community.
I think named vulnerabilities are useful when it's a "STOP THE WORLD" kind of vulnerability like Heartbleed and Shellshock. It's much easier to talk about Heartbleed than "CVE-2014-0160".
The problem, IMO, is when medium-severity vulnerabilities are given names, like Terrapin. I think it makes people think a vulnerability is much worse than it really is.
I wish this announcement included the backstory of how someone discovered this vulnerability.
Public keys are enough of a pain in the ass with PuTTY / KiTTY that I stick with password auth for my Windows SSH'ing needs.
KiTTY even lets you save the passwords so you don't have to type them in, a horrible security practice no doubt, but so convenient... Perhaps more secure than the PuTTYgen'd ECDSA P-521 keys? A tad bit ironic.
We found it by investigating the security of SSH as part of a larger research program focussing on SSH, which also resulted in our publication of the Terrapin vulnerability.
This particular bug basically fell into our hands while staring at the source code during our investigation of the security of SSH client signatures.
Ironically, DJB considers the P-521 curve to be the only NIST standard curve that uses a reasonable prime.
"To be fair I should mention that there's one standard NIST curve using a nice prime, namely 2^521 - 1; but the sheer size of this prime makes it much slower than NIST P-256."
This vulnerability has very little to do with P-521 per se. The issue is with ECDSA: any use of ECDSA with biased nonce generation, regardless of the elliptic curve it's implemented over, immediately causes secret key leakage.
(Rant: All these years later, we're all still doing penance for the fact that Schnorr signatures were patented and so everyone used ECDSA instead. It's an absolute garbage fire of a signature scheme and should be abandoned yesterday for many reasons, e.g., no real proof of security, terrible footguns like this.)
Assuming I'm reading it right, this is an absolutely classic vulnerability, something people who study cryptographic vulnerability research would instinctually check for, so what's taken so long is probably for anyone to bother evaluating the P-521 implementation in PuTTY.
Sometimes useful reminder: you may not need PuTTY today. On the one side, Windows Terminal does a lot of the classic VT* terminal emulation that old ConHost did not. On the other side, Windows ships "real" OpenSSH now as a feature that turns on automatically with Windows "Dev Mode". No built-in GUI for the SSH agent, but at this point if you are familiar with SSH then using a CLI SSH agent shouldn't be scary. If you are "upgrading" from PuTTY you just need to export your keys to a different format, but that's about the only big change.
PuTTY was a great tool for many years and a lot of people have good reasons to not want to let it go. As with most software it accretes habits and processes built on top of it that are hard to leave. But also useful to sometimes remind about the new options because you never know who wants to be in the Lucky 10K to learn that Windows Terminal now has deeper, "true" terminal emulation or that Windows has ssh "built-in".
I'm sorry that you need to work around the inability to run a simple Windows service because of some mistakenly bad corporate policy trying to micro-manage which Windows services are allowed to run. I don't think the long term solution should be "shadow IT install an older app just because it pretends to be a GUI rather than a Windows service", but I'm glad it is working for you in the short term.
If you need ammunition to encourage your corporate IT to allow you to run the proper ssh-agent service to do your job instead of increasing your attack surface by installing PuTTY/Pageant, you could collect a list of vulnerabilities such as the one posted here (look at the huge count of affected versions on just this one!). There should be plenty of vulnerability maintenance evidence on the Microsoft-shipped version of an open source tool with a lot of eyeballs because it is "the standard" for almost all platforms over the "single developer" tool that took at least a decade off from active development (and it shows).
Really helpful. I found it challenging to get a Windows system (no admin) into a state where I can use it productively, and having a functional ssh-agent was one of the remaining pain points.
There are a few different options in Windows that are all measurably superior to PuTTY:
Install WSL2 - you get the Linux SSH of your choice.
As mentioned above, Windows now ships with OpenSSH and windows terminal is good.
My favourite, but now probably obsolete, solution was to install MobaXterm, which shipped with an SSH client. It's still great and there is a usable "free" version of it, but WSL2 does everything for me now when I'm forced to use Windows.
I have some rhel5 systems where I have compiled PuTTY psftp and plink, because it's the easiest way to get a modern client that can do chacha20-poly1305 with ed25519.
Too few nerds are willing to admit this. I use git all-day-long, but need to check stackoverflow to use the command line for anything more complicated than switching branches...and I'm ok with that. I save my brain space for more useful things.
So this signing method requires a 521 bit random number, and this flaw caused the top 9 bits of that number to be zero instead, and somehow after 60 signatures this leaks the private key?
Anyone care to explain how exactly? How is it any different from the top 9 bits being zero by chance (which happens in 1 out of ~500 attempts anyway)?
For the attack all 60 signatures need a nonce that is special in this way. If for example only one out of the 60 is short, the attack fails in the lattice reduction step.
The reason is that in the attack, all 60 short nonces "collude" to make up a very special short vector in the lattice, which is much shorter than usual because it is short in all 60 dimensions, not just one out of 500 dimensions. The approximate shortest vector is then obtainable in polynomial time, and this happens to contain the secret key by construction.
As an analogy: Imagine you had a treasure map with 60 steps: "go left, go right, go up, go down, go down again" etc. If only one out of 60 instructions were correct, you wouldn't know where the treasure is. All of the instructions need to be correct to get there.
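The full lattice attack needs LLL, but the underlying algebra of "the nonce gives up the key" can be shown with toy numbers. If even one nonce is fully known, the ECDSA signing equation s = k⁻¹(h + r·d) mod q reveals the private key d directly; the lattice step is "just" the machinery for achieving the same when only 9 bits of each of many nonces are known. (Made-up small values below, no real curve.)

```python
# Toy illustration of the ECDSA signing equation over a prime order q.
q = 2**127 - 1            # toy prime group order (a Mersenne prime)
d = 0x123456789abc        # "private key"
h = 0xdeadbeef            # message digest
r = 0xcafef00d            # signature component (normally from k*G)
k = 0x600dc0ffee          # the nonce -- assumed fully known here

s = pow(k, -1, q) * (h + r * d) % q            # signing
d_recovered = (s * k - h) * pow(r, -1, q) % q  # attacker's algebra
assert d_recovered == d
```

With 60 signatures each leaking 9 known-zero top bits, the same relation becomes a system of 60 approximate equations in d, which is exactly the Hidden Number Problem instance the short lattice vector solves.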
Doesn't make sense to me either: even when fully random, after 30,000 signatures you would get around 60 signatures where the nonce starts with nine zero bits.
I suspect there must be something else at play here.
EDIT: the nonce is PRIVATE, so the scenario I described would not work because we wouldn't know for which of the 30k signatures the nonce starts with 9 zero bits. Makes sense now.
If I'm understanding correctly, the difference is between knowing which 60 you have and having to try C(30000, 60) = 30000!/(60! * 29940!) combinations and seeing if they worked, which is quite a few.
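For scale, `math.comb` puts a number on "quite a few" (a back-of-envelope figure, not part of the attack itself):

```python
import math

# Number of ways to pick 60 candidate signatures out of 30,000 when you
# can't tell which nonces were short.
combos = math.comb(30000, 60)
assert len(str(combos)) == 187  # a number with 187 digits
```

So the structural bias is the whole game: it tells the attacker that *every* signature qualifies, collapsing an astronomically large search into a single lattice computation.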
I did a bit of a deep dive into this, in case anyone is interested. I think reading the code is a great way to understand _why_ this vulnerability happened:
Your title says "PuTTY-Generated" but the OP article says "The problem is not with how the key was originally generated; it doesn't matter whether it came from PuTTYgen or somewhere else. What matters is whether it was ever used with PuTTY or Pageant".
The answer being, per that post: the author was worried about low-quality randomness on Windows and ran it through a SHA-512 hash function, which outputs fewer than 521 bits, so the remaining bits are left zero.
This exposed client keys, not server keys. The client keys are at risk only in a handful of specific scenarios - e.g., if used to connect to rogue or compromised servers, or used for signing outside SSH.
This is not exploitable by simply passively watching traffic, so even for client keys, if you're certain that they were used in a constrained way, you should be fine. The difficulty is knowing that for sure, so it's still prudent to rotate.
No, only NIST P-521 client keys used with PuTTY are affected. The server host key signature is computed by the server (most likely OpenSSH) which is unaffected.
> (The problem is not with how the key was originally generated; it doesn't matter whether it came from PuTTYgen or somewhere else. What matters is whether it was ever used with PuTTY or Pageant.)
Later: Here you go, from Sean Devlin's Cryptopals Set 8:
https://cryptopals.com/sets/8/challenges/62.txt
https://chilkatforum.com/questions/622/algorithm-for-generat...
https://mathworld.wolfram.com/Modulus.html
And no cutesy name for the vulnerability
http://blog.cr.yp.to/20140323-ecdsa.html
http://safecurves.cr.yp.to/rigid.html
The Windows ssh-agent service will manage your SSH keys through the Windows registry and your Windows login process.
Also, if you use WSL, you can access your SSH keys in WSL from the Windows ssh-agent via npiperelay.
For me, the stakes are very low. It's my windows "gaming" machine, and has access to a few low-value hosts.
Otherwise I'd invest the time to learn wtf is pageant ;D
PuTTY has a workaround, allowing PAGEANT.EXE to be used in place of the forbidden/inaccessible Microsoft agent:
https://tartarus.org/~simon/putty-snapshots/htmldoc/Chapter9...
So PuTTY remains quite relevant because of the mechanisms that Microsoft has chosen.
I encountered this, too, but the fix is quite simple.
That service is set to “manual” by default (or maybe “disabled”), and setting it to “automatic” and then starting it will get you running.
It is unlikely that this is a corporate lockdown measure.
The ssh command is absolutely fine, but I much prefer a list of saved presets to ~/.ssh/config file fuckery.
[1]: https://www.bitvise.com/ssh-client
https://ericrafaloff.com/your-putty-generated-nist-p-521-key...
Any k generation and subsequent signature generation are going to be impacted.
Sounds like your server keys are safe.