I don't know the history of this bug, but I just want to chime in with a word about how absolutely terrifying the "associate email address with account" feature in account-based web apps is. Which, I guess that's my word: terrifying. It's one of the things pentesters make a beeline to mess with, with a vulnerability history stretching all the way back to the early 2000s, when these features were often implemented on standard Unix MTAs that could be tricked into sending password resets to multiple addresses at once, an attack featureful web frameworks seem to have resurrected in GitLab.
If you're a normal HN reader that found themselves interested in this story, go check your password reset feature, specifically the email association logic!
Gitlab has, as I understand it, a pretty excellent security team, which gives some sense of how hard this bug class is to avoid.
> Gitlab has, as I understand it, a pretty excellent security team, which gives some sense of how hard this bug class is to avoid.
Based on the other comment that describes how this bug works, it is completely trivial to avoid. If you use a statically typed language, you'd have to go out of your way to create this bug, and it'd stand out like a sore thumb in code review, to the extent that if I saw that I might wonder whether my coworker is actively trying to create a backdoor.
It doesn't sound like they have a great security team. They added the "associate a secondary email address" feature recently; it isn't something that has always been in the software. It seems more like they were cutting corners and not properly testing the ways their own new feature could be exploited, even though it touched account security.
On top of that it looks like they had a 9.6 CVE that allowed integrations to perform commands as other users...
From the outside, it looks like they are trying to ship features faster than they can keep them safe and tested. Perhaps because they're having an incredibly difficult time monetizing? It makes sense from a business standpoint in some respects, but the security issues could absolutely tank the business, given that the whole point of a (self-)hosted git solution is essentially account management.
I don't put much stock in this "9.6" stuff; CVSS is a ouija board that will say whatever people want it to say. But regardless: the best security teams in the world still see critical vulnerabilities in their software, because software is all garbage.
> It doesn't sound like they have a great security team
That's an unfair comment. Even the best teams ship bugs. If you want to measure the quality of a security team, you look at their performance trajectory (for both detection and response) relative to the size of their total threat surface.
All I needed to know about the quality of the software GitLab ships can be learned by using their CI system on any half-decent-sized project. You can tell it was half-baked, with many bugs and edge cases that could easily have been avoided. When you look at the bug tracker, all of them have been documented for years, and they just ignore them.
My favorites are
* including files that run no job counts as a failure. The only real workaround is adding a no-op job all over your CI config.
* try to use code reviewers based on groups: the logic is so complex and full of errors that I can't even explain it without spending an hour reading the docs.
* when using merge trains with merged-results pipelines enabled, you end up with two different jobs per commit. That would be fine, except the UI always shows the merged-results pipelines first: if you have ten commits, you need to look on the second page to find the most recent commit's CI jobs. That's just annoying, but worse, there's no overlap in the environment variables identifying which MR or commit it is, which makes doing trivial things like implementing break-glass pipelines almost impossible.
Anyway, GitLab sucks. I wanted to avoid GitHub, but really, it's just bad. Not to mention we have outages monthly that we know about 30 minutes to an hour before GitLab does; then we look at the status page and see the downtime listed as 10 minutes when it's been 40 for us, and likely everyone else. In the last year we've had close to two full days, combined, of downtime from GitLab. Of course they report 99.95% uptime.
What a strange perspective. If not e-mail, then what would you associate it with? I've been running a site with a large user base for over 20 years. We initially used usernames, but it was disastrous: everyone knew each other's usernames, making it easy to attempt brute-force attacks on passwords, reset them, etc. Using e-mails isn't the problem; the issue is overcomplicating the login and password-recovery logic: tons of abstraction everywhere, overengineering, people rushing to push code without proper checks in security-sensitive parts, etc.
> Gitlab has, as I understand it, a pretty excellent security team, which gives some sense of how hard this bug class is to avoid.
You should examine GitLab's history of security issues. There are critical exploits multiple times a year, each requiring an urgent upgrade of your GitLab installation. GitLab is the worst product I've used, security-wise.
You took the wrong thing away from the comment. I'm not saying you shouldn't do email password resets. We do. Everybody does. I'm saying: be ultra careful with that code.
Gitlab is both open source and has an on-prem product, so my guess is that you're simply hearing about more of the Gitlab bugs than you would with a comparably sized competitor.
I tend to agree. Regarding GitLab, there's a bit of a dichotomy here. On one hand, it's good that they diligently catch and patch these things quickly, and notify users effectively and transparently; that's a great thing. On the other hand, it means GitLab is an absolute nightmare to maintain: the upgrade process is not always trivial, and, consistently, the day after a GitLab upgrade a critical vulnerability is found and patched.
Every product has security issues. What should worry you more: the things that never see security patches, or the ones that do?
Yeah, every single time I get a spurious password reset email (presumably from someone trying to hijack my account), I'm worried they've somehow managed to add an unauthorized recovery email address outside of my control. It hasn't yet happened to me, but as we can see from this story, it is absolutely possible unfortunately.
I can't remember getting something like that. Maybe because I use a different e-mail address for signing up to services than for regular communication.
I just had an idea: maybe using a + alias (yourname+some-alias-address@example.com, made famous by Gmail) could help against attackers. Even if they find out your email, they will never guess the part after the plus. If you forget it, though, you can't reset your password anymore either.
I'm guessing here, because I've only read a high-level description of it, but I think it's a password-reset endpoint that takes the email address to look up and send a reset to, and the framework will accept an array instead of a simple string; the endpoint looks up the first address, but the variable used to determine where to send the reset mail is the whole array. Again: just a guess as to the underlying bug (I've seen that specific bug before, which is why I guessed).
> Gitlab has, as I understand it, a pretty excellent security team, which gives some sense of how hard this bug class is to avoid.
I disagree. It's a bonehead mistake to send password resets out to tainted email addresses. And as this was an authentication change, it should have received extra scrutiny, making it even harder to introduce.
Something is wrong with their engineering culture that needs correcting.
Every time we get a story about a vulnerability, we get comments about how they're indications that engineering cultures need correcting. All I'll say is, the impression one gets is that every engineering culture needs correcting. I buy it!
- recoverable.send_reset_password_instructions(to: email) if recoverable&.persisted?
+ recoverable.send_reset_password_instructions if recoverable&.persisted?
Haha, the first thing I would've caught in the initial PR was the file name... and the default setting of `confirmed: true`... seems like a big oversight, or possibly an inside job (if I'm being conspiratorial).
Initially, a single email could be passed into the API/form call and they would look it up. If found, they would send a recovery to that email, but it was the email the user supplied, not what was in the DB.
Oh, no problem we looked it up so they are the same!
But then the ability to look up accounts from a list of emails was added. If any email matched, the account lookup would succeed. Then they sent the reset link to that same user-supplied value, but OH NOES, IT'S AN ARRAY NOW AND SOME ENTRIES MIGHT NOT MATCH ACCOUNT EMAILS!
So they ended up sending out reset links to a tainted list of emails.
Rails "concerns" are the worst IMHO anyway, but it looks like they aren't using strong params here either, which is even worse. Also, someone thought it was more elegant to reuse the tainted value, which is par for the RoR course.
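The progression described above can be sketched in plain Ruby (all names here are hypothetical; this is a simplified model of the bug class, not GitLab's actual code):

```ruby
# Simplified model of the bug described above; not GitLab's real code.
USERS = [{ email: "victim@example.com", name: "victim" }].freeze

# Lookup succeeds if ANY of the supplied addresses matches a user,
# mirroring an ORM lookup that accepts a scalar or an array.
def find_user_by_any_email(emails)
  USERS.find { |u| Array(emails).include?(u[:email]) }
end

OUTBOX = []

def deliver_reset(to:)
  Array(to).each { |addr| OUTBOX << addr } # the mailer happily fans out
end

# The vulnerable flow: the tainted request parameter is reused
# as the recipient instead of the verified address on the record.
def request_reset(param)
  user = find_user_by_any_email(param)
  deliver_reset(to: param) if user # BUG: should be user[:email]
end

# One real address is enough to make the lookup succeed, and the
# reset link then also goes to the attacker-controlled address:
request_reset(["victim@example.com", "attacker@evil.test"])
OUTBOX # => ["victim@example.com", "attacker@evil.test"]
```

Delivering only to the verified address stored on the matched record (`user[:email]` here) is the one-line fix, which is essentially what the patch quoted upthread does by dropping the `to:` argument.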
I got hit by this; we also noticed it being used with a second "feature" that exposed us a bit more than it should have.
Basically, a requirement for this attack is knowing the email of the user you want to reset, but there is a hidden email address tied to your GitLab user ID (a number incrementing from 1).
Since it's a safe bet that ID 1 or 2 is an admin, that's a good target.
The email is something like 1-user@mail.noreply.<gitlabhost>.
E-mail password reset is a security nightmare even when implemented correctly.
And the worst part: on most services you can't even disable it; often the only way around it is Enterprise SSO.
On some services you can set up a phone number for an SMS token instead. But I've never seen the option to require both: password reset only with e-mail AND SMS token.
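For what it's worth, the "require both" idea is simple to express. A purely illustrative plain-Ruby sketch (no real delivery, hypothetical names) of a reset that only proceeds when both the emailed token and the SMS token check out:

```ruby
require "securerandom"

# Illustrative only: a reset that requires BOTH an email token and an
# SMS token before accepting a new password.
class TwoChannelReset
  def initialize
    @email_token = SecureRandom.hex(16)                         # sent by email
    @sms_token   = format("%06d", SecureRandom.random_number(10**6)) # sent by SMS
  end

  attr_reader :email_token, :sms_token

  def authorize?(email_t, sms_t)
    secure_eq(email_t, @email_token) && secure_eq(sms_t, @sms_token)
  end

  private

  # constant-time comparison to avoid timing side channels
  def secure_eq(a, b)
    a = a.to_s
    a.bytesize == b.bytesize &&
      a.bytes.zip(b.bytes).map { |x, y| x ^ y }.sum.zero?
  end
end

reset = TwoChannelReset.new
reset.authorize?(reset.email_token, reset.sms_token) # => true
```

The point is just the `&&`: compromising either channel alone is no longer enough.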
There's really nothing about the e-mail system that makes it particularly good for anything secure at all.
- Easy to accidentally forward confidential tokens
- Validation of email sender authenticity is still piecemeal, and there are still fairly frequent ways to bypass or work around it.
- Mail is not end-to-end encrypted. I hope that it's at least encrypted with TLS, but last time I actually messed with talking directly to an MX, it seemed like TLS was still limited to smart relays, and actual mail delivery always went to port 25 as usual...
- The cardinality of e-mail addresses is not really defined anywhere. Gmail has plus addressing and ignores periods for the purposes of delivering an e-mail. Meanwhile, Google Workspace defaults to not ignoring the periods. Pretty sure some mailbox providers are case-sensitive and others are not. This means you can't know if an e-mail address at a given provider is unique. Best bet is to treat the entire damn address as a bag of bytes, but that opens up room for UX issues of all sorts, so it's hard to balance.
- It's bad that if your e-mail address gets compromised, any account you have without some kind of secure 2FA is literally seconds away from being compromised, too. Obviously, this limitation isn't unique to e-mail; people wrongly treat SMS as secure, too. But e-mail is easier to treat as secure because it's somehow still less of a crapshoot than cell-carrier security, and because the majority of online accounts can be stolen quickly and easily with nothing but access to the user's e-mail, after which it can be hard to convince support that you're the rightful owner.
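The cardinality point above is easy to make concrete. This is a deliberately provider-specific sketch of Gmail-style normalization (drop the plus-tag, ignore dots, case-fold); other providers apply different rules or none at all, which is exactly why address uniqueness is undefined in general:

```ruby
# Provider-specific, assumption-laden sketch: Gmail-style normalization.
# Other mailbox providers have different or no such rules, so this is
# NOT a general-purpose address canonicalizer.
def gmail_style_normalize(address)
  local, domain = address.split("@", 2)
  local = local.sub(/\+.*\z/, "") # drop the "+tag" suffix
  local = local.delete(".")       # Gmail ignores dots; Workspace may not
  "#{local.downcase}@#{domain.downcase}"
end

gmail_style_normalize("Jane.Doe+hn@GMAIL.com") # => "janedoe@gmail.com"
```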
First, because email is not a secure form of communication: there are many ways email content can leak. Sending a password reset link via email is similar to sending passwords or credit card information via email. Funnily, that is something nobody does, because it's considered unsafe.
And second, because hijacking someone's email account opens up a lot of different services to the attacker. 2FA over IMAP is still not a thing with most services/clients, some people log into their webmail with username/password on untrusted devices, ...
Ruby on Rails accepts arrays as values in the ORM's ".where(...)", which turns the condition into a SQL IN clause: any of the array's values may match. So if the code does something like "User.where(name: name, password: password)", I could totally see this happening.
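A plain-Ruby stand-in for that hazard (ActiveRecord's hash conditions generate an SQL IN clause for array values, so a scalar and an array both "work"):

```ruby
# Stand-in for `Model.where(email: value)`: equality for a scalar,
# an IN-list when `value` is an array -- and the caller may not notice
# which one it got from the request parameters.
RECORDS = [{ email: "admin@example.com" }].freeze

def where_email(value)
  values = Array(value) # scalar or array are both silently accepted
  RECORDS.select { |r| values.include?(r[:email]) }
end

where_email("admin@example.com").size            # => 1
where_email(["admin@example.com", "x@y.z"]).size # => 1, any match wins
```

Coercing at the boundary kills this class of type confusion; in Rails, strong params' `params.permit(:email)` admits only scalar values for a bare key, rejecting arrays.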
I work for a huge government-owned telco, and our networking guys are the best; they keep us server guys in line. So even though they did expose our GitLab to an extent, for certain external projects and consultants, you still can't reach it freely from the internet.
Also, since we manage users in AD, there is no SMTP connection to even do password resets.
But we really need to enforce more 2FA, we've left it up to each project to enforce their own rules on 2FA.
Especially something like Gitlab might benefit a lot from external integrations that need to call the Gitlab API. It might be possible to whitelist exactly those requests, but also a bit cumbersome.
Sorry, but it is really easy to automate GitLab updates. One option is to run GitLab with Docker + Compose, which works rock-solid, and have Watchtower (for example) do the updates daily. I have two GitLab servers that have done this for 7+ years with no issues whatsoever. Looking around, I see so many outdated GitLabs: what are the administrators doing?
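For anyone wanting to replicate that setup, here's a minimal sketch. The image names are the real public ones, but the hostname, ports, and volume paths are placeholders; also note GitLab only supports specific version-to-version upgrade paths, so blind auto-updating across major versions can still bite:

```yaml
# Hypothetical minimal compose file; adjust paths/ports for your site.
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    hostname: gitlab.example.com
    restart: always
    ports: ["443:443", "80:80", "2222:22"]
    volumes:
      - ./config:/etc/gitlab
      - ./logs:/var/log/gitlab
      - ./data:/var/opt/gitlab
  watchtower:
    image: containrrr/watchtower
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: --interval 86400 # check for image updates daily
```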
> From the outside, it looks like they are trying to ship features faster than they can keep them safe and tested.
This sums up their entire product:
every feature you could possibly imagine, somewhat working.
https://gitlab.com/gitlab-org/gitlab/-/commit/c571840ba2f0e9...
Anyway, in the fixed version it's still called RecoverableByAnyEmail. Do people not read the code around what they are changing??
Added 8 months ago [1]. And then, one month later:
> "password_reset_any_verified_email"
was removed, 7 months ago [2]. *Note* the __verified__ word here.
No blaming or conspiracy intended in this post, just listing links to relevant commits.
1 - https://gitlab.com/gitlab-org/gitlab/-/commit/94069d38c9cd63...
2 - https://gitlab.com/gitlab-org/gitlab/-/commit/a935d28f3decf8...
> Basically, a requirement for this attack is knowing the email of the user you want to reset, but there is a hidden email address tied to your GitLab user ID
Really bad, seemed like it was automated.
2FA saved us here.
> Sending a password reset link via email is similar to sending passwords or credit card information via email.
It was some junky web interface to a spam appliance, of all things; I'm not sure if it was intentional or just some PHP rookie wrote the code.
One of our users discovered it when they had a (rare at the time) special character in their password.
> So even though they did expose our GitLab to an extent, for certain external projects and consultants, you still can't reach it freely from the internet.
For a highly secure environment, I agree with practicing even more defensible tactics, but I think software ought to be designed to survive the open web.