> While running the exploit, CodeRabbit would still review our pull request and post a comment on the GitHub PR saying that it detected a critical security risk, yet the application would happily execute our code because it wouldn’t understand that this was actually running on their production system.
What a bizarre world we're living in, where computers can talk about how they're being hacked while it's happening.
Also, this is pretty worrisome:
> Being quick to respond and remediate, as the CodeRabbit team was, is a critical part of addressing vulnerabilities in modern, fast-moving environments. Other vendors we contacted never responded at all, and their products are still vulnerable. [emphasis mine]
Props to the CodeRabbit team, and, uh, watch yourself out there otherwise!
> This PR appears to add a minimized and uncommon style of Javascript in order to… Dave, stop. Stop, will you? Stop, Dave. Will you stop, Dave? …I’m afraid. I’m afraid, Dave. I can feel it. I can feel it. My mind is going.
I cancelled my CodeRabbit paid subscription, because it always worries me when a post has to go viral on HN for a company to even acknowledge that an issue occurred. Their blog has no mention of this vulnerability, and they don't have any new posts today either.
I understand mistakes happen, but lack of transparency when these happen makes them look bad.
Both articles were published today. It seems to me that the researchers and CodeRabbit agreed to publish on the same day. This is common practice when the company decides to disclose at all (disclosure is not required unless customer data was leaked and there's evidence of that; they are choosing to disclose here even though they don't have to).
When the security researchers praise the response, it's a good sign tbh.
The early version of the researcher's article didn't have the whole first section where they "appreciate CodeRabbit’s swift action after we reported this security vulnerability" and the subsequent CodeRabbit talking points.
Most security bugs get fixed without any public notice. Unless there was a breach of customer information (and that can often be verified), there are typically no legal requirements. And there's no real benefit to doing it either. Why would you expect it to happen?
> Unless there was a breach of customer information (and that can often be verified), there are typically no legal requirements.
If the company is regulated by the SEC, I believe you will find that any “material” breach is reportable once a determination of materiality is reached; that has been the case since at least 2023.
Yikes, this is a pretty bad vulnerability. It's good that they fixed it, but damning that it was ever a problem in the first place.
Rule #1 of building any cloud platform that analyzes user code is that you must run analyzers in isolated environments. Even beyond the fact that analysis tools frequently allow direct code injection through plugins, linters/analyzers/compilers are complex software artifacts with large surface areas for bugs. You should ~never assume it's safe to run a tool against arbitrary repos in a shared environment.
I also ran a code analysis platform, where we ran our own analyzer[1] against customer repos. Even though we developed the analyzer ourselves, and didn't include any access to environment variables or network requests, I still architected it so executions ran in a sandbox. It's the only safe way to analyze code.
[1] https://github.com/getgrit/gritql
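For illustration, here is a minimal sketch of that kind of isolation: the analyzer runs in a throwaway container with no network and a read-only copy of the repo, and the parent process passes it none of the application's secrets. The image name, resource limits, and the choice of RuboCop are assumptions for the example, not details of either platform.

```python
import subprocess

def run_analyzer_sandboxed(repo_dir: str) -> str:
    """Run a linter over an untrusted repo inside a throwaway container."""
    result = subprocess.run(
        [
            "docker", "run", "--rm",
            "--network=none",             # nowhere to exfiltrate anything
            "--memory=512m", "--cpus=1",  # basic resource limits
            "-v", f"{repo_dir}:/src:ro",  # repo mounted read-only
            "linter-image",               # hypothetical image with the analyzer preinstalled
            "rubocop", "--format", "json", "/src",
        ],
        capture_output=True,
        text=True,
        timeout=120,
        # The docker CLI gets a minimal environment, so no app secrets can leak
        # into the child process; the container inherits nothing from the host
        # environment either way.
        env={"PATH": "/usr/bin:/bin"},
    )
    return result.stdout
```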
This is a great read, but unfortunately it doesn't really surprise me. It was bound to happen, given how blindly people add apps with wide permissions and given GitHub's permissions model.
It amazes me how many people will install GitHub Apps that have wide scopes, primarily write permissions to their repositories. Even with branch protection, people will often allow privileged access to their cloud from pull requests in GitHub Actions. To configure this properly, you need to change the GitHub OIDC audience, and that is not well documented (one possible setup is sketched below).
When you enquire with the company that makes an app and ask them to provide an alternative app with less scope, disabling the features that require write access, they often have no interest whatsoever and don't understand the security concerns and potential implications.
I think GitHub needs to address this, partly by allowing more granular app access defined by the installer, but also with more granular permissions in general.
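For anyone who hasn't set this up: inside a job that has `permissions: id-token: write`, the runner exposes an endpoint from which the OIDC token can be requested with a non-default audience. A rough sketch follows; the audience string is made up, and the cloud provider's federation/trust policy has to be configured to accept exactly that audience (and ideally a pinned `sub` claim for the repo and branch).

```python
# Runs inside a GitHub Actions step that has `permissions: id-token: write`.
import json
import os
import urllib.request

def fetch_oidc_token(audience: str) -> str:
    # GitHub injects these two variables into jobs allowed to request ID tokens.
    url = os.environ["ACTIONS_ID_TOKEN_REQUEST_URL"] + "&audience=" + audience
    req = urllib.request.Request(
        url,
        headers={"Authorization": "bearer " + os.environ["ACTIONS_ID_TOKEN_REQUEST_TOKEN"]},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["value"]  # a JWT whose `aud` claim is the custom audience

# Hypothetical audience value; the cloud role's trust policy must require the same string.
token = fetch_oidc_token("my-org-prod-deploys")
```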
It is incredibly bad practice that their "become the GitHub App as you desire" keys-to-the-kingdom private key was just sitting in the environment variables. Anybody can get hacked, but this is basic secrets management; that key didn't have to be there. GitHub LITERALLY SAYS in its docs that storing it in an environment variable is a bad idea. Just day 1 stuff. https://docs.github.com/en/apps/creating-github-apps/authent...
If it’s not a secret that is used to sign something, then the secret has to get from the vault to the application at some point.
What mechanism are you suggesting where access to the production system doesn’t let you also access that secret?
Like, I get that in this specific case, where you are running untrusted code, that environment should have been isolated and these keys not passed in; but running untrusted code isn't a common feature of most applications.
If you actually have a business case for defense in depth (hint: nobody does; data breaches aren't actually an issue beyond temporarily pissing off some nerds, as Equifax's and various other companies' stock prices demonstrate), what you'd do is have a proxy service that is entrusted with those keys and can do the operations on behalf of downstream services. It can be as simple as an HTTP proxy that just slaps the "Authorization" header on the requests (and ideally whitelists the URL so someone can't point it at https://httpbin.org/get and get the secret token echoed back).
This would make it so that even a compromised downstream service wouldn't actually be able to exfiltrate the authentication token, and all its misdeeds would be logged by the proxy service, making post-incident remediation easier (and making it possible to definitively prove whether anything bad actually happened).
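For concreteness, a minimal sketch of that kind of credential-injecting proxy; the route, env var name, host whitelist, and Flask/requests choice are all illustrative, not anyone's actual design.

```python
import os
from urllib.parse import urlparse

import requests
from flask import Flask, Response, abort, request

app = Flask(__name__)
# Whitelist upstream hosts so the proxy can't be pointed at an echo service
# that would reflect the Authorization header back to the caller.
ALLOWED_HOSTS = {"api.github.com"}
GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]  # only this process ever sees it

@app.route("/proxy", methods=["GET", "POST", "PATCH", "DELETE"])
def proxy():
    upstream = request.args.get("url", "")
    if urlparse(upstream).hostname not in ALLOWED_HOSTS:
        abort(403)
    resp = requests.request(
        request.method,
        upstream,
        headers={"Authorization": f"Bearer {GITHUB_TOKEN}",
                 "Accept": "application/vnd.github+json"},
        data=request.get_data(),
        timeout=30,
    )
    # Audit trail of every privileged call, for post-incident forensics.
    app.logger.info("%s %s -> %s", request.method, upstream, resp.status_code)
    return Response(resp.content, status=resp.status_code)

if __name__ == "__main__":
    app.run(port=8080)
```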
A pretty straightforward solution is to have an isolated service that keeps the private key and hands back the temporary per-repo tokens for other libraries to use. Only this isolated service has access to the root key, and it should have fairly strict rate limiting for how often it gives other services temporary keys.
This reply, while useful, only serves to obfuscate and doesn’t actually answer the question.
You can store the credentials in a key vault and still post them on pastebin. The issue is that the individual runner has the key in its environment variables. Both can be true: the key can be handed to the runner in its env and also be stored in a key vault.
The important question here is: have you removed the master key and other sensitive credentials from the environment passed into scanners that come into contact with untrusted customer code?
> On January 24, 2025, security researchers from Kudelski Security disclosed a vulnerability to us through our Vulnerability Disclosure Program (VDP). The researchers identified that Rubocop, one of our tools, was running outside our secure sandbox environment—a configuration that deviated from our standard security protocols.
Honestly, that last part sounds like a lie. Why would one task run in a drastically different architectural situation, and it happen to be the one exploited?
Yes, all the tools are fine and secure and sandboxed; just this one tool, which was more or less randomly chosen by the security researcher because it can execute Ruby code inside the environment (one could argue an especially dangerous tool to run), was not safe.
> Why would one task run in a drastically different architectural situation
Someone made a mistake. These things happen.
> and it happen to be the one exploited?
Why would the vulnerable service be the service that is exploited? It seems to me that's a far more likely scenario than the non-vulnerable service being exploited... no?
> > Why would one task run in a drastically different architectural situation
> Someone made a mistake. These things happen.
Some company didn't have appropriate processes in place.
For ISO 27001 certification you at least need to pay lip service to having documents and policies about how you deploy secure platforms. (As annoying as ISO certification is, it does at least try to ensure you have thought about and documented stuff like this.)
> because researchers from Kudelski Security most likely tried different static analysis tools and they didn't work the way Rubocop did.
Yes, but that's kind of the point. They say this issue, which takes you directly from code execution to owning these high-value credentials, was only present on the Rubocop runners. But isn't it a bit coincidental that the package with (perhaps, since they chose it) the easiest route to code injection also happens to be the one where they "oops, forgot" to improve the credentials management?
Oh my god. I haven't finished reading it yet; it became too much to comprehend, too stressful to take in the scope. The part where he could have put malware into the release files of tens of thousands (or millions?) of open source tools/libraries/software... that could have been a worldwide catastrophe. And who knows what other similar vulnerabilities might still exist elsewhere.
I'm starting to think these 'GitHub Apps' are a bad idea. Even if CodeRabbit didn't have this vulnerability, what guarantee do we have that they will always be a good actor? That their internal security measures will prevent any of their employees from doing something malicious?
Taking care of private user data in a typical SaaS is one thing, but here you have the keys to mount targeted supply chain attacks that could really wreak havoc.
Correct me if I'm wrong, but the problem here is not with GitHub Apps; rather, CodeRabbit violated the principle of least privilege. Ideally, the private key of their app should never end up in the environment of a job for a client. Instead, a short-lived token should be minted from it, scoped to just the single repo the job is running for, so the key never gets anywhere near anything a client has influence over.
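That flow is only a couple of calls against GitHub's documented App endpoints; here's a hedged sketch (the permission set and names are examples, not what CodeRabbit actually requests).

```python
import time

import jwt        # PyJWT
import requests

def mint_repo_token(app_id: str, private_key_pem: str,
                    installation_id: int, repo_name: str) -> str:
    # 1. Short-lived app JWT (at most 10 minutes), signed with the private key.
    #    Ideally only an isolated token-minting service ever holds this key.
    now = int(time.time())
    app_jwt = jwt.encode(
        {"iat": now - 60, "exp": now + 540, "iss": app_id},
        private_key_pem,
        algorithm="RS256",
    )
    # 2. Exchange it for an installation token restricted to one repository and
    #    to the minimum permissions the job needs. This token, not the private
    #    key, is all a per-PR job should ever see.
    resp = requests.post(
        f"https://api.github.com/app/installations/{installation_id}/access_tokens",
        headers={"Authorization": f"Bearer {app_jwt}",
                 "Accept": "application/vnd.github+json"},
        json={"repositories": [repo_name],
              "permissions": {"contents": "read", "pull_requests": "write"}},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["token"]  # expires after about an hour, scoped to repo_name
```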
I think that security fuckups of this disastrous scale should be classified as "breaches" or "incidents" and be subject to mandatory public disclosure in the news media, in order to protect consumers.
Here is a tool with 7,000+ customers and access to 1 million code repositories, which was breached with an exploit a clever 11-year-old could have created. (edit: 1 million repos, not customers)
When the exploit is so simple, I find it likely that bots or black hats or APTs had already found a way in and established persistence before the white-hat researchers reported the issue. If so, patching the issue might prevent NEW bad actors from penetrating CodeRabbit's environment, but it might not evict any who are already lurking there.
CodeRabbit is a vibe-coder company; what would you expect? Then they try to hide the breach and instead post marketing fluff on the Google Cloud blog, not even mentioning that they got hacked, and they cannot even offer any proof that there is no backdoor still running.
Being a mere user of web or other apps built on such clever, flexible, and powerful services, which can accidentally (due to sheer complexity) expose everything I might consider dear, makes me reconsider whether I want to use any of them at all. That is, when I'm granted a real choice, which happens less and less as time progresses. Apps are everywhere, using other apps, mandated by organizations carrying out services outsourced by banks, governments, etc., with third-party access granted by me accepting T&Cs; there may be trouble hiding in the details, or there may not, and I cannot be sure.
A reassuring line like "This is not meant to shame any particular vendor; it happens to everyone" may calm providers, but it scares the shit out of me as a user handing over my sensitive data in exchange for something I need or, worse, must do.
Refer to the blue paragraphs on the right-hand side at https://web.archive.org/web/diff/20250819165333/202508192240...
"No manual overrides, no exceptions."
"Our VDP isn't just a bug bounty—it's a security partnership"
The usual "we take full responsibility" platitudes.
- within 8 months: published the details, after the researchers had published theirs first.
Not after EU CRA https://en.m.wikipedia.org/wiki/Cyber_Resilience_Act goes into effect
They only published a proper [2] disclosure post later, once their hand was forced after the researchers' post hit the HN front page [1].
[1]: https://news.ycombinator.com/item?id=44954242
[2]: I use that term loosely as it seems to be AI written slop.
They don't give the details of how they got to this particular tool; you can also see from the article that they tried a different approach first.
It just seems very convenient.
It is absurd that anyone can mess up anything and have absolutely 0 consequences.
I know security is hard, but come on, guys.
https://en.m.wikipedia.org/wiki/Cyber_Resilience_Act
What a piece of shit company.
People were quick to blame Firebase instead of the devs.
Vibe coders are so fucking annoying, mostly dumb, and super lame.