LeonM · 3 years ago
The fact that this key was apparently not stored in an HSM, and that GH employees had access to this private key (allowing them to accidentally push it) means that effectively all communication with GH since the founding of the company has to be considered compromised. This basically means that, depending on your level of paranoia, you will have to review all code that has ever interacted with Github repositories, and any code pushed or pulled from private repositories can no longer be considered private.

Github's customers trust GH/MS with their code, which, for most businesses, is a high value asset. It wouldn't surprise me if this all results in a massive lawsuit. Not just against GH as a company, but also against those involved (like the CISO). Also, how on earth was it never discovered during an audit that the SSH private key was stored in plain text and accessible? How has GH ever been able to become ISO certified last year [0], when they didn't even place their keys in a HSM?

Obviously, as a paying customer I am quite angry with GH right now. So I might be overreacting when I write this, but IMO the responsible persons (not talking about the poor dev that pushed the key, but the managers, CISO, auditors, etc.) should be fined, and/or lose their license.

[0] https://github.blog/2022-05-16-github-achieves-iso-iec-27001...

nwallin · 3 years ago
SSH uses ephemeral keys. It's not enough to have the private key and listen to the bytes on the wire, you have to actively MITM the connection. A github employee who has access to the private key and enough network admin privileges to MITM your connection already has access to the disk platters your data is saved on.

Regarding the secrecy of the data you host at github, you should operate under the assumption that a sizeable number of github employees will have access to it. You should assume that it's all sitting unencrypted on several different disk platters replicated at several different geographically separated locations. Because it is.

One of the things that you give up when you host your private data on the cloud is controlling who and under what circumstances people can view or modify your private data. If the content of the data is enough to sink you/your company without remedy you should not store it on the cloud.

snowwrestler · 3 years ago
Agreed; GitHub documentation refers to repo “visibility,” not “security,” and that is an intentional distinction.

When we signed on with GH as a paying customer over a decade ago, they were quite clear that private repos should not be considered secure storage for secrets. It’s not encrypted at rest, and GitHub staff have access to it. It takes only a few clicks to go from private to public.

ajross · 3 years ago
Exactly. Host keys are about authentication, not connection security. Presumably the upthread comment is trying to say that "ssh communication with github could have been subject to an undetectable MitM attack by an attacker with access to this key"[1], which isn't remotely the same thing as "all communication with GH since the founding of the company has to be considered compromised".

[1] Which is sort of tautological and silly, because that's true of all sites and all host keys. What the comment was trying to imply was that the choice of storage of this key invalidates any trust we might have in GitHub/Microsoft regarding key management, and that therefore we shouldn't trust them. Which is also tautological and silly. Either you trust them or you don't, that's not a technological argument.

adql · 3 years ago
You shouldn't commit unencrypted secrets to git anyway, public or private, on-site or in cloud.

There are plenty of tools to either keep them encrypted (we just use simple GPG, but our team is small) or to auto-generate them and never show them to the user in the first place (various key vaults that can be used from automation, like HashiCorp's Vault).
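
For instance, a bare-bones GPG flow looks something like this (file names and the recipient are placeholders):

    # encrypt to the team's key before committing; only the .gpg file goes into git
    gpg --encrypt --recipient dev-team@example.com --output secrets.env.gpg secrets.env

    # decrypt locally or in CI when the values are actually needed
    gpg --decrypt --output secrets.env secrets.env.gpg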

est31 · 3 years ago
I would also add that an attacker's ability to pretend to be the client to the server is also limited, if ssh key based client authentication is used. This means that even if the host key is leaked, an attacker will not be able to push in the name of the attacked client. The attacker will be able to pretend to be the server to the client, and thus be able to get the pushed code from the client (even if the client just added one patch, the attacker can pretend to be an empty repo on the server side and receive the entire repo).

If ssh token based auth is used, it's different of course, because then the server gets access to the token. Ideally Github would invalidate all their auth tokens as well.

The fun fact is that a token compromise (or any other attack) can still happen at any point in the future on devices that still have the outdated ssh key. That's a bit unfortunate, as no revocation mechanism exists for ssh keys... ideally clients would blacklist the leaked ssh key, given the huge importance of github.

CHY872 · 3 years ago
It's worth saying that GitHub also has GitHub AE, which has some requirements (e.g. 500+ heads) but is a lot better for paranoid administrators, offering stricter audit results (FedRAMP High is no joke), data residency, stricter auth requirements, etc etc. I'd _imagine_ that such an environment is deployed as an isolated managed GitHub Enterprise instance, and at the very least in that environment I'd expect all data to be secured encrypted at rest.
steve1977 · 3 years ago
> you should not store it on the cloud.

Well, at least not without encryption that is under your control.

Shank · 3 years ago
> How has GH ever been able to become ISO certified last year?

ISO/IEC 27001:2013 doesn’t say you have to store private keys in HSMs? It just requires you to have a standards compliant ISMS that implements all Annex A controls and all of the clauses. Annex A and the clauses don’t specifically mandate this.

If you can convince an auditor that you have controls in-place that meet the standard for protecting cryptographic media you basically meet the standard. The controls can be a wide variety of options and don’t specifically mandate technical implementation details for many, many things.

You shouldn’t rely on ISO/IEC 27001:2013 as attestation for technical implementation details here. Just because your auditor would red flag you doesn’t mean all auditors would. The standard is only as effective as the weakest, cheapest auditor, and there are perverse incentives that make auditors financially incentivized to certify companies due to recurring revenue.

LeonM · 3 years ago
> You shouldn’t rely on ISO/IEC 27001:2013 as attestation for technical implementation details here.

Thanks for the insight, good advice.

But also from the same GH article [0]:

> The ISO 27001 certification is the latest addition to GitHub’s compliance portfolio, preceded by SOC and ISAE reports, FedRAMP Tailored LiSaaS ATO, and the Cloud Security Alliance CAIQ.

Do you have any knowledge on whether one of these certifications (for example FedRAMP) puts any restrictions on handling key material?

[0] https://github.blog/2022-05-16-github-achieves-iso-iec-27001...

lxgr · 3 years ago
> [...] depending on your level of paranoia, you will have to review all code that has ever interacted with Github repositories [...]

Not to diminish the problems of having a large entity like Github handle a private key like that, but if that was your level of paranoia, you probably should have used commit signatures all along and not relied on Github to do that job for you.

dannyincolor · 3 years ago
As usual on HN, I find the pragmatic response about 3 pages down in the replies to an extremely hyperbolic top-level comment.

I also don't want to diminish the concerns around Github or similar orgs losing control of a private key, but the far more realistic concern for the vast majority of threat models often falls by the wayside in favor of what amounts to a scary story. Rather than the straightforward key removal and replacement that this should be, I (and surely many others) have spent all morning, together with leadership and many engineers, combatting this specific FUD that cropped up on HN. It's actually quite detrimental to quickly remediating the actual concerns introduced by this leak.

I understand that security inspires people to be as pedantic as possible - that's where some big exploits come from on occasion - but I really hope the average HN narrative changes toward "what is your actual, real-world threat model" vs. "here is a highly theoretical edge-case scenario, applicable to very few, that I'll state as a general fact so everyone will now wonder if they should spend months auditing their codebase and secrets". Put simply: this is why people just start ignoring security measures in the real world. Surely someone has already coined the term "security fatigue".

It's all just a bit unbalanced, and definitely becomes frustrating when those suggesting these "world is burning" scenarios didn't even take the available precautions that apparently would satisfy their threat model (i.e. commit sigs, as you suggested)

Ok, end rant :)

peterkelly · 3 years ago
Git provides the ability for authors to sign their commits with their own private key. To ensure the integrity of code in a repository, this method should be relied on rather than whatever hosting provider(s) have a copy of the repository.

Requiring all commits to be signed by trusted keys avoids the risks associated with someone tampering with a repository hosted on GitHub if they are able to get access to it, although it doesn't protect against code being leaked.
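
As a quick sketch, with a GPG key already configured, turning signing on and checking signatures looks roughly like this (the key ID is a placeholder):

    # sign every commit with your own key
    git config --global user.signingkey <YOUR_KEY_ID>
    git config --global commit.gpgsign true
    git commit -S -m "signed change"

    # verify signatures when reviewing history
    git log --show-signature
    git verify-commit HEAD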

See here for details: https://git-scm.com/book/en/v2/Git-Tools-Signing-Your-Work

travisd · 3 years ago
Parent comment is concerned with privacy, not authenticity. They're not worried that someone modified their code, they're worried that someone saw it.
jlokier · 3 years ago
About signed commits, but off topic for the article:

Many signed commits ("Verified" in green) on GitHub are signed with GitHub's own private key¹, not the committer's key. There's a good technical reason, but many committers don't realise their own signing key isn't the one used on their signed commits to the main branch.

That GitHub private key would be a fun one to have leaked!

It would invalidate most of the "Verified" flags on commits on most repo main branch histories.

(¹ GitHub uses GitHub's own commit-signing key when you use the GitHub GUI to merge or squash-merge commits, even if you already signed them, and the resulting commits show as "Verified" the same as commits signed by your own key. So you do have to sign with your own key, but it's not the key used on the main branch commits everyone sees when they visit and download. Many controlled workflows require approved PRs to be merged to main/master through GitHub's tools, and many users default to using the GUI for PR merges by choice.)

Veliladon · 3 years ago
> The fact that this key was apparently not stored in an HSM, and that GH employees had access to this private key (allowing them to accidentally push it) means that effectively all communication with GH since the founding of the company has to be considered compromised.

For a host key? Like I get that being able to impersonate Github isn't great as far as state level actors having the ability to do this but you do know the actual transport layer keys are ephemeral and aren't derived at all from the host key, right?

LeonM · 3 years ago
> state level actors having the ability to do this

Not just nation state actors, but basically anyone in a position to MITM.

Also, you don't have to be a nation state actor to extort a GH employee. Any bad guy can do a "give me this key or I'll hurt your kid". People are being extorted for a lot less.

There are billions of dollars of assets flowing through GH's infrastructure; for the sake of the safety (!= security) of Github's employees, nobody should ever have access to key material.

0xEFF · 3 years ago
Yes for a host key. It’s like accidentally publishing the tls key for https://accounts.google.com

The host key is the only thing ensuring you’re actually talking to GitHub.com when you push code.

To add to sibling comments, it should not have been possible to make this mistake. That it was possible is concerning.

runeks · 3 years ago
> [...] the actual transport layer keys are ephemeral and aren't derived at all from the host key, right?

Great! Then I can communicate confidentially with whomever is MITM'ing me.

/s

megous · 3 years ago
A bit of an overreaction, right?

The number of people not just blindly using TOFU with github over ssh must be quite low.

Who here went to https://docs.github.com/en/authentication/keeping-your-accou... and added the keys manually to known_hosts before using github for the first time?

nebulous1 · 3 years ago
I would have guessed that the significant majority would be using TOFU, but in any case the actual key was leaked so it wouldn't matter which method was used.
killerstorm · 3 years ago
> any code pushed or pulled from private repositories can no longer be considered private.

Do you realize that the code just sits on GitHub servers even if it's private?

If you have any degree of paranoia, why do you put your code into GitHub?!?!

Like, if you work on code which

BlueTemplar · 3 years ago
Ah, I see that the men in black got there just in time ! XD
sgt · 3 years ago
> How has GH ever been able to become ISO certified last year [0], when they didn't even place their keys in a HSM?

ISO 27001 certification does not require you to put keys into an HSM. The standard requires you to have controls in place, be aware of your risks and to maintain a risk register. But in no way does the standard require HSM's.

The standard would even be OK with storing this on a floppy drive if the risks surrounding that were identified and mitigated (or accepted).

j16sdiz · 3 years ago
I have never known a single person to put an ssh host key into an HSM.

In fact, this is not a supported option in openssh.

LeonM · 3 years ago
> I have never known a single person to put an ssh host key into an HSM.

You probably also never met a single person whose SSH interface sees millions of sessions a day, with valuable assets (code) being transported over those sessions.

> In fact, this is not a supported option in openssh.

This definitely is supported. Though the documentation for this is often HSM-vendor specific and heavily NDA'd, which is why you probably haven't found much information about it.

ammar2 · 3 years ago
For what it's worth, Github uses libssh (https://www.libssh.org/) for their ssh servers.

It looks like they currently use the `ssh_bind_options_set` function with `SSH_BIND_OPTIONS_HOSTKEY` to set the host keys which means they exist on disk at some point. HSM aside, I believe it would be possible to use the `ssh_bind_set_key` and deserialize them from a secret vault so they only exist in the memory of the ssh daemon.

Obviously they also just straight up have enough resources to fork the code and modify it to use an HSM.

Source: looking at their ssh server portion of `babeld` in ghidra right now as part of hunting for bug bounties.

mkj · 3 years ago
It would work with OpenSSH's HostKeyAgent option.
mlyle · 3 years ago
There is the HostKeyAgent configuration directive, which communicates over a unix domain socket to make signing requests.

https://framkant.org/2017/10/strong-authentication-openssh-h...
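
A minimal sshd_config sketch (socket path is hypothetical) might look like this; only the public halves stay on disk, while the agent behind the socket holds the private keys or fronts an HSM:

    # private host keys live behind an agent socket, not in files
    HostKeyAgent /run/ssh-hostkey-agent.sock
    # with HostKeyAgent set, the HostKey entries can point at the public key files
    HostKey /etc/ssh/ssh_host_ed25519_key.pub
    HostKey /etc/ssh/ssh_host_rsa_key.pub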

tashian · 3 years ago
It's easy to say "should have used an HSM" (or, in truth, many HSMs), but I can appreciate the technical challenges of actually doing that at their scale. It would not be a trivial project. There are a ton of operational concerns here, including figuring out how you would go about rotating the key on all those HSMs in an emergency.
lxgr · 3 years ago
These would also need to be very distributed and high-throughput HSMs: You'd need to talk to one for every single SSH login! This is in contrast to e.g. having a CA signing key in a HSM, but distributing keys signed with it more widely.

I suppose (Open?)SSH's PKI mode could support a model like that, but as others have noted here, this requires much more manual work on the user's side than comparing a TOFU key hash.

Maybe that model could be extended to allow TOFU for CAs, though? But I think PKI/CA mode is an OpenSSH extension to the SSH protocol as it is, and that would be a further extension to that extension...

JeremyNT · 3 years ago
There's a lot of daylight between "use a HSM" specifically and "use a system that prevents junior developers from accessing the key and checking it into public repos."

Storing the key in some kind of credential vault that can only be accessed from the hosts that need it at startup would usually be enough to prevent this particular kind of error (unless you're giving root on those boxes to people without enough sense to avoid checking private keys into git, in which case you've probably got worse problems).
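
A rough sketch of that pattern (paths and secret names are hypothetical, assuming something like HashiCorp Vault):

    # fetched by the host at service start, never baked into a repo or an image
    umask 077
    vault kv get -field=private_key secret/ssh/frontend-hostkey > /run/ssh/ssh_host_rsa_key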

nonethewiser · 3 years ago
> the responsible persons (not talking about the poor dev that pushed the key, but the managers, CISO, auditors, etc.) should be fined, and/or lose their license.

By no means do I want to see the dev get fined or blackballed from the industry.

But if there is any 1 person responsible, it’s the person who did it. The reason why the dev shouldn’t be fined/blackballed is because it’s not just 1 person’s fault. I mean, fining or booting his manager out of the industry? Really?

grumple · 3 years ago
There's a few reasons I wouldn't worry too much:

1) Nation state level actors can probably insert or compromise high level staff, or multiple high level staff, at any given company, and perform MITM attacks fairly easily. And some could compel turning over your code or secrets more directly anyway. Not worth worrying about this scenario: nobody working on anything truly sensitive should be using any externally hosted (or even externally networked) repositories.

2) It is much more difficult for other actors to do a MITM attack, and if they did, they'd probably have access to your code more directly.

3) Your code actually isn't worth much to anybody else. Imagine someone launching a complete clone of HN or any other site. Who cares? Nobody. What makes your code valuable is that you have it, and that you have a network and relationship with your customers. If somebody stole my company's codebases, I'd feel sorry for them, that they are going to waste any time wading through these useless piles of shit. The only potential problems are if secrets or major vulnerabilities are exposed and provide a path for exploit (like ability to access servers, services, exposing potential ransomware attacks).

Art9681 · 3 years ago
Information has different levels of value depending on what the user needs to do with it. It's kind of like how two individual pieces of "unclassified" info are... well, unclassified, but putting the two together as a cohesive whole that provides further context turns them into "classified" info. All it takes is a little bit of time for actors working with the funding and compute capacity of a major nation to scrape the entirety of Github, dump it in a data processing tool none of us know about, and make the correlations you and I cannot.

This leak opened a time window big enough for that to happen. We may or may not know if it did. I doubt this info would be offered to the public because it would sink the business.

amrb · 3 years ago
You're 100% right to hold critical infrastructure to higher standards. Putting SolarWinds aside, how many companies could grind to a halt via this 3rd party?
nebulous1 · 3 years ago
> The fact that this key was apparently not stored in an HSM, and that GH employees had access to this private key (allowing them to accidentally push it) means that effectively all communication with GH since the founding of the company has to be considered compromised.

I think this suggests we need more information from github. For instance, GH employees may not always have had live access to this key; this could have happened as part of an operation that only recently gave temporary access to an employee. Or it could have been stored in plaintext on multiple employees' home computers since creation.

When was the leaked key created anyway?

belter · 3 years ago
Looking from the outside, for many of these companies (GitHub, OpenAI, Cloudflare, Facebook, and so on) it seems that they torture their hires with ridiculous code challenges, spend a lot of time on elegant engineering blogs, write about how many PhDs work at their locations, and can't stop talking about how selectively they hire from the top 0.01% of developers... But then, internally, everything seems more or less held together with shoestrings and Rube Goldberg machines.
JasserInicide · 3 years ago
"Temporary" solutions and countless "TODO: Fix this" simply are endemic to development, even at the top companies. They just like to pretend they're better than everyone else.
peanut-walrus · 3 years ago
It is so incredibly rare for public-facing service keys to be stored on an HSM that I don't think anyone could reasonably have expected this to be the case?
j45 · 3 years ago
Makes self-hosting git look all the more preferable.

The cloud always trades some amount of security for the convenience of someone else's computer.

steponlego · 3 years ago
I don't think there will be any lawsuits. The user agreement precludes that. I don't even know how anybody could be angry about this - your code's on somebody else's computer and if you didn't know that that's a huge risk, you do now.
brightball · 3 years ago
I wonder if they found it by turning on their own secret detection system?
djbusby · 3 years ago
What license?
LeonM · 3 years ago
Auditors require a license/accreditation to do certain certifications.
yetanotherjosh · 3 years ago
Please before replacing your local fingerprint with the new one, double check it is the expected value. This is an opportune time for man-in-the-middle attackers to strike, knowing everyone has to replace their stored signatures, and that some will be lazy about it with a blind "ssh-keygen -R github.com" command.
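
A more careful sequence than a blind reset might look like this, comparing the SHA256 fingerprints against the ones published on docs.github.com before accepting anything:

    # drop the old entry
    ssh-keygen -R github.com

    # print the fingerprints of whatever github.com is presenting right now;
    # only accept the new key if these match the published values
    ssh-keyscan -t rsa,ecdsa,ed25519 github.com 2>/dev/null | ssh-keygen -lf -
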
p-e-w · 3 years ago
It never fails to amaze me how most incident mitigations seem completely oblivious to such security side effects.

"We have no reason to believe that the exposed key was abused, but out of an abundance of caution, we are going to expose 50 million users to a potential MITM attack unless they are extremely careful."

Not a single word in the post about whether this impact was even considered when making the decision to update the key. Just swap the key, and fuck the consequences. Same with the mass password resets after a compromise that some services have done in the past years. Each of those is any phishing operation's dream come true.

tomp · 3 years ago
Don't trust corporate PR. They're obviously lying when they say "out of an abundance of caution". The private key was exposed in a public GitHub repo, it could literally be anywhere.

So MITM for some of 50m users is strictly better than MITM for all of 50m users.

justeleblanc · 3 years ago
I'm always amazed by this kind of post. Did these 50 million users (surely none of them use git+https!) check the host key the first time they connected to github? Did you?
faeriechangling · 3 years ago
While their reaction is more likely to cause a security breach, consider the psychology.

If the key was breached and Github just didn't know it, then a breach happened, then only Github would be to blame.

If Github rotates its key, and somebody suffers a MITM attack, the blame is more diffuse. Why didn't they verify the key out of band?

BHSPitMonkey · 3 years ago
How is there an alternative here?
dheera · 3 years ago
How would one stage a MITM attack without knowing the private key corresponding to the old key?
mihaaly · 3 years ago
Is there a benefit (and practicality) for an adversarial intermediary in recording encrypted traffic and waiting for keys to be exposed at some point? Like now?
defanor · 3 years ago
Here are the expected fingerprints (since they don't publish those via SSHFP RRs): https://docs.github.com/en/authentication/keeping-your-accou...

    SHA256:uNiVztksCsDhcc0u9e8BujQXVUpKZIDTMczCvj3tD2s (RSA)
    SHA256:br9IjFspm1vxR3iA35FWE+4VTyz1hYVLIE2t1/CeyWQ (DSA - deprecated)
    SHA256:p2QAMXNIC1TJYWeIOttrVc98/R1BUFWu3/LiyKgUfQM (ECDSA)
    SHA256:+DiY3wvvV6TuJJhbpZisF/zLDA0zPMSvHdkr4UvCOqU (Ed25519)

cwillu · 3 years ago
Note the MITM here :)

We humans really aren't cut out for this, are we.

tgsovlerkhgsel · 3 years ago
They provide convenient commands to import the correct keys. It would probably be better to only include the block that contains both the -R and the update command, but at least they do provide them.
yosito · 3 years ago
> double check it is the expected value

Not all of us are familiar enough with the SSH protocol to understand how to "double check the expected value". Where can I determine what the expected value should be?

Gravyness · 3 years ago
Run "ssh -T git@github.com" command.

It should error like this:

    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
    Someone could be eavesdropping on you right now (man-in-the-middle attack)!
    It is also possible that a host key has just been changed.
    The fingerprint for the RSA key sent by the remote host is
    SHA256:uNiVztksCsDhcc0u9e8BujQXVUpKZIDTMczCvj3tD2s.
    Please contact your system administrator.
Note that the SHA256 fingerprint shown there matches exactly the one GitHub published. In case you don't remember, the very first time you connected to github you also had to accept the key. The warning above shows up because a different RSA key is already saved for that server: to the SSH client it looks as though someone set up a server claiming to be github but with a different key, which could mean someone is trying to trick you into connecting to the wrong server. It could also mean that github changed their RSA key, which is why they published this article.

yetanotherjosh · 3 years ago
A key part of avoiding MITM is to get the values from an authoritative origin, not comments on HN, so the link is here:

https://docs.github.com/en/authentication/keeping-your-accou...

Yes, this assumes the github-hosted docs and your SSL connection to them are not also compromised, but it's far better than not checking at all.

Dylan16807 · 3 years ago
Look for the part of the article that says "the following message"

Or the parts below it about updating and verifying in other ways.

brabel · 3 years ago
I've updated the key in known_hosts, then was able to connect successfully.

What do I have to do to ensure I connected to the right server?? I thought just making sure the correct RSA key was in known_hosts would be enough?

nirimda · 3 years ago
It depends on how you found out what the new key value is. By the sounds of your description, you're fine. But in principle there's more than one way people could proceed from here.

If you read the blog post on a verified domain and saw the new key and updated manually, or you deleted the known key and verified the key fingerprint when it warned you about an unknown key, you should be good to go. Here, you trust the people who issue TLS certificates and you trust github to be in control of their domain name, so you can be reasonably confident that the key they advertised on their website is the correct key. If your internet connection was compromised, you would have got an error message when you connected to https://github.blog (because they wouldn't have a certificate from a trusted certificate issuer) or when you connected to the git server (because they wouldn't have the key you just trusted).

If you saw the blog post and then removed the old key and told ssh to save the new key it's receiving without checking it matches the value on the webpage, you might have a problem. The connection to github's ssl could have been compromised, and if you just accepted whatever it told you, you have no trusted intermediate to verify that the key is trustworthy. All you know is that each time you connect to github's server hereafter, you're either connecting to a server with the same key (no error), or you're connecting to one that doesn't have the same key (error message). But whether you can trust that key? You don't know that. You just know it's the same key.

But even if you did the latter, all is not lost. You can look at the known_hosts file (on Linux and MacOS it's ~/.ssh/known_hosts) and check the fingerprint. If it's what they advertise, then you're good to go. If it's different, you should fix it and find people who can help you deal with a security incident.
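
For example, something like this prints the fingerprint of whatever entry is currently saved for github.com (it also works when known_hosts entries are hashed):

    ssh-keygen -lF github.com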

The reason people are raising a flag is that today, lots of people will be rotating their key. That means if you're looking to target someone, today is the day to do it. Even if 90% of people do it the proper way, by manually verifying the key, that still means there's going to be a lot of people who could be victimised today.

snorremd · 3 years ago
That is enough, given that you've fetched or compared the key from a trusted GitHub.com server.
rpigab · 3 years ago
Double-check with what source? The one mentioned on docs.github.com?

I assume it's safe because the SSL cert for docs.github.com is probably not compromised, so it's giving us the right key, and compromising docs.github.com would be extra effort and is unlikely to happen.

However, I wonder what kind of steps an MITM attack would have to perform. I assume one of the easiest would be compromising my local DNS server, since regular DNS does not offer a high level of security; then github.com resolves to the attacker's IP and the attack works. Do you have examples of such attacks that don't involve a virus already active on the end user's PC? Maybe if someone owns an IP previously owned by Github that is still somehow advertised as being Github by some DNS lagging behind?

8organicbits · 3 years ago
This is always a concern with SSH as it uses trust on first use. The first time you connect it shows you the fingerprint for out of band verification. That manual step is on you to perform, but most people skip it. Future visits check against the saved fingerprint.

The best practice is to verify the fingerprint out of band using a secure channel. In this case, that's HTTPS and docs.github.com. If (hypothetically) docs.github.com was also compromised, then you don't have a secure channel.

https://en.m.wikipedia.org/wiki/Man-in-the-middle_attack has some MITM examples.

devenvdev · 3 years ago
There should be a StackOverflow Streisand effect: at first, I peeked at the end of your comment to copy-paste the ssh-keygen "solution" string.
aidenn0 · 3 years ago
Or just use the posted command to fetch the fingerprint over ssh and automatically add it to your known hosts?
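
For reference, one automated route (assuming you trust the HTTPS connection to api.github.com) is the meta API, which publishes the current host keys:

    # append GitHub's published host keys to known_hosts
    curl -sL https://api.github.com/meta | jq -r '.ssh_keys[] | "github.com \(.)"' >> ~/.ssh/known_hosts
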
bityard · 3 years ago
SSH host certs would make this a non-issue, and I've often wondered why GitHub doesn't use them.

Deleted Comment

datadeft · 3 years ago
Certificate pinning check built in when?

We should have a blockchain for certificates btw. That would be such an amazing solution to this problem. You could advertise ahead of the time that you are changing certificates and we could verify that it was in fact you.

lrvick · 3 years ago
Github had one RSA ssh host key, the most widely supported key format.

It was trusted to clone code into the infrastructure of hundreds of thousands of organizations. People pin it everywhere.

With this key someone could infect the infrastructure of fintech companies and steal billions of dollars. I know this well because I run a security consulting company focusing mostly on that industry. Mainly this is possible because almost no companies check for git commit signing, and Github does not enforce it anyway, and I digress.

This key held enough power over value that some might have used violence to acquire it.

With that context of course they chose to place this key in a hardware security module controlled with an m-of-n quorum of engineers to ensure no single human can steal it, expose it, or leak it. Right? ... right?

Nope. They admit they just stuck it in a git repo in plain text, where any engineer could have copied it to a compromised workstation, or been bribed for it, for who knows how many years. Worse, it was not even some separate, employee-only intranet git repo, but their own regular public production infra, and someone had the power to accidentally make it public.

I have no words for this level of gross negligence.

Maybe, just maybe, centralizing all trust and security for most of the world's software development to a proprietary for-profit company with an abysmal security reputation was not the best plan.

I will just leave this here: https://sfconservancy.org/blog/2022/jun/30/give-up-github-la...

kelnos · 3 years ago
Wow, your assessment of the impact here (or even possible impact) is way way way overblown.

In reality, the vast majority of users don't pay attention to SSH host keys at all.

Even if an attacker got hold of this private host key, they'd have to be able to MitM the connections of their target.

Next, they have to decide what they want to do.

If they want to send malicious code to someone doing a 'git pull', they'd have to craft the payload specifically for the repo being pulled from. Not impossible, but difficult.

If they want to "steal" source code from someone doing 'git push' (perhaps to a private repo on GitHub), that's a bit easier, as they can just tell the client "I have no objects yet", and then the client will send the entire repo.

And, again, they'd have to have the ability to MitM some git user's connections. Regardless, there is no way that they could change any code on github.com; this key would not give them any access to GH's services that they don't already have.

So I think your anger here is a bit over the top and unnecessary.

I agree that it's pretty bad that some GH employee even had the ability to get hold of this private key (sure, probably should be in an HSM, but I'm not particularly surprised it's not) in order to accidentally leak it, but... shit happens. They admitted the mistake and rotated the key, even though it's likely that there was zero impact from all this.

lrvick · 3 years ago
End users do not pay attention but their clients pin the key after first use. Also everyone is using gitops these days and almost no one is using dns-over-tls.

Imagine you control the right router on a company wifi, or any home wifi a production engineer works from and suddenly you can cause them to clone the wrong git submodule, the wrong go package, or the wrong terraform config.

If you knew a CI/CD system blindly clones and deploys git repos to prod without signature checks, and that prod is a top 10 crypto exchange with 1b of liquidity in hot wallets, then suddenly a BGP attack to redirect DNS is a good investment. Myetherwallet got taken over for 15 minutes with a BGP hijack, so this is not hypothetical.

Should that be the case? Of course not. But the reality is I find this in almost all of the security audits I do for fintech companies. Blind trust in Github host keys is industry standard all the way to prod.

throwawaaarrgh · 3 years ago
> Even if an attacker got hold of this private host key, they'd have to be able to MitM the connections

This is not hard. If it were hard, we wouldn't need encrypted connections.

If I were a nation state, it would be trivial to position an attack at the edge of a network peering point for a given service. How do I know? Our own government did it for 10+ years.

Cyber criminals often have the same tactics and skills and can find significant monetary reasons to assume heightened risk in order to pull off a compromise.

Random black hats enjoy the challenge of compromising high value targets and can find numerous ways to creatively infiltrate networks to perform additional attacks.

Even without gaining access to a network, virtually anyone on the internet can simply push a bad BGP config and capture traffic for an arbitrary target. Weird routing routinely happens that nobody can definitively say isn't such an attack.

ses1984 · 3 years ago
The key has to be in memory on all of their front end servers. Do you think a quorum of engineers should get together every time a front end server boots or reboots?

Genuinely asking because I’ve struggled with this question.

lrvick · 3 years ago
Lots of cloud instances support remote attestation these days which gives you a reasonable path to autoscaling secure enclaves.

1. You compile a deterministic unikernel appliance-style linux kernel with a bare bones init system

2. You deploy it to a system that supports remote attestation like a nitro enclave.

3. It boots and generates a random ephemeral key

4. m-of-n engineers compile the image themselves, get the same hash, and verify the remote attestation proof confirming the system is running the bit-for-bit trusted image

5. m-of-n engineers encrypt and submit Shamir's secret shares of the really important private key that needs protecting

6. key is reconstituted in memory of enclave and can start taking requests

7. Traffic goes up and autoscaling is triggered

8. New system boots with an identical account, role, and boot image to the first manually provisioned enclave

9. First enclave (with hot key) remotely attests the new enclave and obtains its ephemeral key (with help of an internet connected coordinator)

10. First enclave encrypts hot key to new autoscaled enclave

11. rinse/repeat

bob1029 · 3 years ago
I don't understand how initializing cryptographic keys from an HSM at boot time is an untenable proposition. The quorum would be for accessing the key by human means. You can have a separate, approved path for pre-authorized machines to access cryptographic primitives across an isolated network.
kccqzy · 3 years ago
The key doesn't have to be in memory on all of their front end servers. Any respectable company that cares about security wouldn't put their TLS private key on all of their front end servers anyway. You expose a special crypto oracle that your front end servers talk to; the oracle can be a specially hardened process on a dedicated server or, better yet, an HSM. The point is, the private key is never in memory on any server that handles untrusted data.
CGamesPlay · 3 years ago
> With that context of course they chose to place this key in a hardware security module controlled with an m-of-n quorum of engineers to ensure no single human can steal it, expose it, or leak it. Right? ... right?

This is unfortunately not how SSH works. It needs to be unlocked for every incoming connection.

You raise valid hypotheticals about the security of the service... but fixing it involves removing SSH host key verification from Github; better OpSec would not fully resolve this issue.

lrvick · 3 years ago
I am well aware how ssh works. I have written ssh servers and design secure enclave key management solutions for a living.

Even if they wanted the quickest, dirtiest, laziest option with no policy management, they could stick the key in a PKCS#11-supporting enclave, which every cloud provider offers these days. OpenSSH natively supports them today.

At a minimum they could have placed their whole ssh connection termination engine in an immutable, read-only, and remotely attestable system like a Nitro Enclave or other TEE. You do not need to invent anything new for this.

There are just no excuses here for a company with their size and resources, because I do this stuff all the time as just one guy.

p-e-w · 3 years ago
Hardware security modules can perform key operations without allowing anyone to access the key data. Key material being (accidentally or deliberately) leaked has been a solved problem for a long, long time.
marcosdumay · 3 years ago
> It needs to be unlocked for every incoming connection.

Yep. Well, certificates exist exactly to bridge the GP's requirement with your reality.

mjg59 · 3 years ago
Which HSM are you looking at that would be able to handle the required number of transactions per second?
lrvick · 3 years ago
The same ones that terminate TLS for millions. Most are garbage but at least they keep the key offline. Also you can scale horizontally or only use the HSM as a CA to sign short lived host keys.

You could also use things like Nitro enclaves which have all the same specs as a regular EC2 instance.

Tons of options. They clearly chose none of them.

ZiiS · 3 years ago
The HSM only needs to sign new host keys: transactions per decade at their current rate.
ed25519FUUU · 3 years ago
Thoughts with the sysadmins and devops people out there on this wonderful Friday afternoon.

These kinds of changes suuuuuuck. Messing with the known_hosts file is not always easy to do. It might require an image rebuild, if you have access at all.

uejfiweun · 3 years ago
Is it negligence or just incompetence? I get the sense that security is such a tough problem that all of us, even CISOs and red teamers, are incompetent.
lrvick · 3 years ago
If hospital workers spread disease because they could not be bothered to do the obvious things we -know- prevent this like basic sanitation... then yeah, I would call it negligence.

Do not put long lived cryptographic key material in the memory of an internet connected system. Ever. It is a really easy to understand rule.

paxys · 3 years ago
Conveniently missing from the announcement:

- When exactly was the key leaked? Yesterday? A month ago?

- How long was it visible for? And no, "briefly" doesn't cut it.

- Did any internet traffic reach the page while it was exposed? We know you log everything, so it is a yes or no answer.

If any of these answers were pretty, I imagine they would have explicitly included them in the post.

renonce · 3 years ago
> How long was it visible for? And no, "briefly" doesn't cut it.

I don't know how long exactly, but in theory you can subscribe to a stream of ALL events happening at GitHub by fetching from this endpoint: https://api.github.com/events

With these events you know what new repositories are created and what changes are pushed, so you can fetch every new change. Once something gets pushed to a public repository, it's very likely that some spider will have fetched it within a few minutes.
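
Roughly, polling it looks like this (jq here just to show event types and repos; a real scraper would page through results and respect rate limits):

    curl -s https://api.github.com/events | jq -r '.[] | "\(.type) \(.repo.name)"'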

KirillPanov · 3 years ago
> you can subscribe to a stream of ALL events happening at GitHub by fetching from this endpoint: https://api.github.com/events

Wow I am shocked that they allow "firehose" access not only for free, but without even an API key.

Given enough disk and bandwidth, does this mean you could keep your own copy of all of github? I'd love to be able to grep the whole thing.

Maxious · 3 years ago
Especially for something as easily identifiable as an SSH private key; you get emails from a variety of security vendors to the address associated with the commit, offering their services.
rschoultz · 3 years ago
Indeed, and about their statement

> We have no reason to believe that the exposed key was abused and took this action out of an abundance of caution.

This is not verifiable, right? As the authentication method has no public, mandatory revocation source, and given that the key, if it has leaked, will likely be acquired by an authoritarian government, they can selectively MITM organizations and users that are not aware of this blog post.

CGamesPlay · 3 years ago
In fairness, it's better that they rotate the key immediately before even looking at the logs. But we can demand better answers from them from this point.
paxys · 3 years ago
Did they rotate it immediately? The only reference to a specific time is:

> This week, we discovered...

So they found out about it sometime between Monday and Thursday, and rotated it Thursday evening.

3np · 3 years ago
The discrepancy between "briefly" and "this week" stood out to me. It reads like multiple hours or even days would classify as "briefly" here, whereas in this context it would otherwise usually mean seconds or maaaaybe single-digit minutes, and even that would be stretching it.
rumsbums · 3 years ago
Also sorely missing: an apology. Instead we're given crap like "out of an abundance of caution, we replaced our RSA SSH host key".
hoffs · 3 years ago
> - When exactly was the key leaked? Yesterday? A month ago?

"This week, we discovered that GitHub.com’s RSA SSH private key was briefly exposed in a public GitHub repository"

tyingq · 3 years ago
That's when they discovered it had been leaked, not necessarily when the leak happened.
robbat2 · 3 years ago
To better protect yourself from a potential MITM, mark the key as revoked. Hopefully distributions & OpenSSH upstream can start shipping this by default.

(sorry, the comments are mangling this, clean version at https://gist.github.com/robbat2/b456f09b7799f4dafe24115095b8...)

    # You might need to insert this in a slightly different place
    cat >>/etc/ssh/ssh_config <<EOF
    Host *
        RevokedHostKeys /etc/ssh/ssh_revoked_hosts
    EOF

    cat >>/etc/ssh/ssh_revoked_hosts <<EOF
    # https://github.blog/2023-03-23-we-updated-our-rsa-ssh-host-k...
    ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==
    EOF

ollien · 3 years ago
Just in case anyone is paranoid that this comment has the right key, you can generate a fingerprint with

    $ ssh-keygen -lf github.old.pub
    2048 SHA256:nThbg6kXUpJWGl7E1IGOCspRomTxdCARLviKw6E5SY8 no comment (RSA)
and you'll notice that fingerprint is on this archived page

https://web.archive.org/web/20230320230907/https://docs.gith...

(please check my work on your own machines and don't take my attestation on faith!)

e12e · 3 years ago
Thank you - TIL about ssh key revocation (I was aware of them, but haven't really used them).

I expanded on your gist:

https://gist.github.com/e12e/0c1868479c0b8d0a52914d44be66d76...

LeonM · 3 years ago
You can do verbatim formatting on HN by indenting lines with 4 spaces. See https://news.ycombinator.com/formatdoc

Thanks for the gist though, seems helpful!

bravetraveler · 3 years ago
Anyone finding the same thing I am?

RevokedHostKeys doesn't accept ~ for your home directory... while things like ControlPath will.

I'd rather confine this to my account, but I either have to use a relative path that doesn't always work... or a fully qualified path that includes my username (and may change)

fjni · 3 years ago
> This week, we discovered that GitHub.com’s RSA SSH private key was briefly exposed in a public GitHub repository

> ... out of an abundance of caution, we replaced our RSA SSH host key used to secure Git operations for GitHub.com

Yeah, that's not an "abundance of caution." That's the bare minimum response at that point. What's the "not cautious approach?" Make the repo private and go on your merry way?

neilv · 3 years ago
I had just composed a comment with the exact same two quotes before I saw yours.

I suppose "abundance of caution" might apply, if they determined that the only ways it could've leaked were from requests that were logged, and they've removed all the ways and checked all the logs.

But if I had to guess, even a brief exposure can be picked up by bots (and perhaps they can already see log entries for this). Even if no one at all picked it up, there'd still be the question of whether traces of it are still left behind on various infrastructure, in storage and caches (even ML training?) not intended for key safety.

fjni · 3 years ago
You are right of course. There’s a scenario where they have exhausted all possible ways in which it could have leaked, determined that it wasn’t and decided despite that conclusion to rotate the key. That would be doing so purely out of abundance of caution.
nessex · 3 years ago
It's not mentioned in the blog post or keys page, but the _old_ value[1] you'll find in known_hosts is:

  github.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==
You can search for this in your codebases, hosts etc. to see if there are any areas that need updating. The new value is linked from the blog post, you can find it here: https://docs.github.com/en/authentication/keeping-your-accou...
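
A crude way to hunt for stale pins (using a distinctive prefix of the old key rather than the whole line):

    # look for the old pinned key in configs, images, known_hosts files, etc.
    grep -rn "AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa" .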

[1] https://github.blog/changelog/2022-01-18-githubs-ssh-host-ke...

Timothee · 3 years ago
Good callout!

I looked at my `~/.ssh/known_hosts` file and that key is associated with a few IP addresses in addition to github.com. Those lines stayed after I ran `ssh-keygen -R github.com`.

I imagine that I also need to remove those other lines manually, but isn't that something that GitHub should have mentioned? I'm not sure in which circumstances these got added either…

bravetraveler · 3 years ago
Same here, found the two following IPs with the same hostkey:

    - 192.30.253.112
    - 140.82.114.3
WHOIS shows github ownership, just not sure when/how/why I got these

bxparks · 3 years ago
My ~/.ssh/known_hosts used to look like that, but now it looks like the host part is encrypted? It looks like:

  |1|{base64-encoded-string?}|{another-base64-encoded} ssh-rsa AAA{long-string}
This is on Ubuntu 22.04 (Linux Mint).

banana_giraffe · 3 years ago
This is because HashKnownHosts has been turned on: the first value is a salt, and the second value is a hash of the salt and the host name.
mjg59 · 3 years ago
In an ideal world this could be avoided by using SSH certificates - an assertion by a trusted party that the key presented by an SSH server is legitimate. The signer of those certificates could be in an HSM, ensuring that the private keys can never be leaked, and the certificates could be sufficiently short-lived that there'd be no need to deal with revocation.

Unfortunately there's no way to make this easy. To associate a CA key with a specific domain you need to add something like:

@cert-authority github.com (key)

to known_hosts, and this (as far as I know) can't currently be managed in the same way that keys are generally added to known_hosts on first use when you ssh to something. So instead we're left with every github.com SSH frontend using the same private keys, and those can't be in an HSM because it simply wouldn't scale to the required load. Which means the private key exists in a form where this kind of thing can occur.
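
For reference, issuing a short-lived host certificate from such a CA is a single ssh-keygen invocation (file names here are hypothetical), and clients only need the CA line in known_hosts:

    # sign the frontend's host key with the CA, valid for one day
    ssh-keygen -s ca_key -I github-frontend -h -n github.com -V +1d ssh_host_rsa_key.pub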

londons_explore · 3 years ago
> every github.com SSH frontend using the same private keys

HTTPS has the same problem. Compromise any frontend, and you can MITM traffic to any other front-end till the expiry of the certificate (usually 90 days).

I would really like to see some kind of almost realtime delegation - so that for example, every second a new sub certificate could be generated and signed by a HSM, valid for only 2 seconds.

8organicbits · 3 years ago
Lots of bad things happen if your front-end is compromised, that's different and it's a high bar. They can persist access with a backdoor. They can exfil historical data, password hashes. They can corrupt and modify data.

With HTTPS certs you usually have at most 90 days of impact when a key is leaked (less if you revoke and software is checking CRL). GitHub used the same RSA key for over a decade, they may have continued using this key for quite some time more had they not noticed the leak this week.

mjg59 · 3 years ago
Clock skew unfortunately makes that scale a little impractical, but yeah it would be great to have mechanisms to delegate endpoint certificate issuance to much shorter timescales
mjg59 · 3 years ago
Wrote my thoughts on this up in some more detail at https://mjg59.dreamwidth.org/65874.html