morgante commented on How we exploited CodeRabbit: From simple PR to RCE and write access on 1M repos   research.kudelskisecurity... · Posted by u/spiridow
smarx007 · 6 days ago
How was the sandbox implemented? Just a one-off Docker container execution or something more substantial?
morgante · 5 days ago
We built on firecracker VMMs but today I'd just use a hosted provider like morph.so or e2b.dev.
morgante commented on How we exploited CodeRabbit: From simple PR to RCE and write access on 1M repos   research.kudelskisecurity... · Posted by u/spiridow
KingOfCoders · 6 days ago
Did I misread the article, or did they take the tool config from the PR not the repo?
morgante · 6 days ago
The exploit is there either way.
morgante commented on How we exploited CodeRabbit: From simple PR to RCE and write access on 1M repos   research.kudelskisecurity... · Posted by u/spiridow
codedokode · 6 days ago
One of the problems is that code analyzers, bundlers, compilers (like Rust compiler) allow running arbitrary code without any warning.

Imagine the following case: an attacker pretending to represent a company sends you a repository as a test task before an interview. You run something like "npm install" or run the Rust compiler, and your computer is now controlled by the attacker.

Or imagine how one coworker's machine gets hacked, the malicious code is written into a repository, and the whole of G, F, or A is now owned by foreign hackers. All thanks to npm and the Rust compiler.

Maybe those tools should explicitly confirm before executing every external command (caching the list of allowed commands so they don't ask again). And maybe Linux should provide an easy-to-use, safe sandbox for developers. Currently I have to build sandboxes from scratch myself.
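The confirm-and-cache idea above can be sketched in a few lines of Python. This is a hypothetical wrapper, not part of npm or any real tool; the cache file location and prompt wording are assumptions:

```python
import json
import shlex
import subprocess
from pathlib import Path

DEFAULT_CACHE = Path.home() / ".allowed_commands.json"  # hypothetical location

def run_external(cmd, cache=DEFAULT_CACHE, ask=input):
    """Run cmd only after explicit user approval; approvals are cached so
    the same command is not confirmed twice."""
    cache = Path(cache)
    allowed = set(json.loads(cache.read_text())) if cache.exists() else set()
    key = shlex.join(cmd)
    if key not in allowed:
        if ask(f"Build wants to run {key!r} -- allow? [y/N] ").strip().lower() != "y":
            return None  # refused: the external command never executes
        allowed.add(key)
        cache.write_text(json.dumps(sorted(allowed)))
    return subprocess.run(cmd, check=False)
```

A build tool would route every plugin/script execution through a gate like this instead of calling `subprocess` directly.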

Also, in many cases you don't need the ability to run external code; for example, to install a JS package, all you need to do is download files.

Also, this is an indication of why it is a bad idea to use environment variables for secrets and configuration. Whoever wrote "The Twelve-Factor App" doesn't know that there are command-line switches and configuration files for this.
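The alternative the comment alludes to, passing a secret via a file path on the command line rather than through the environment, might look like this (a minimal sketch; the flag name and function are hypothetical):

```python
import argparse
from pathlib import Path

def load_secret(argv=None) -> str:
    """Read an API key from a file named on the command line, not from the
    environment -- the key itself never appears in `env` output or in
    /proc/<pid>/environ, only the path to it does."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--api-key-file", required=True,
                        help="path to a file containing the key")
    args = parser.parse_args(argv)
    return Path(args.api_key_file).read_text().strip()
```

A child process spawned by a build tool inherits the parent's environment by default, which is exactly why file- or flag-based secrets leak less readily than `os.environ`.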

morgante · 6 days ago
You should treat running a code analyzer/builder/linter against a codebase as being no safer than running that codebase itself.
morgante commented on How we exploited CodeRabbit: From simple PR to RCE and write access on 1M repos   research.kudelskisecurity... · Posted by u/spiridow
doesnt_know · 6 days ago
If it’s not a secret that is used to sign something, then the secret has to get from the vault to the application at some point.

What mechanism are you suggesting where access to the production system doesn’t let you also access that secret?

Like I get in this specific case where you are running some untrusted code, that environment should have been isolated and these keys not passed in, but running untrusted code isn’t usually a common feature of most applications.

morgante · 6 days ago
A pretty straightforward solution is to have an isolated service that keeps the private key and hands back the temporary per-repo tokens for other libraries to use. Only this isolated service has access to the root key, and it should have fairly strict rate limiting for how often it gives other services temporary keys.
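The broker design described above can be sketched as follows. This is a hedged illustration, not CodeRabbit's or anyone's actual service; the class, HMAC-based token format, and rate-limit numbers are all assumptions:

```python
import hashlib
import hmac
import time

class TokenBroker:
    """Hypothetical isolated service: it alone holds the root key, and it
    hands out short-lived, per-repo tokens to other services, with a
    per-caller rate limit."""

    def __init__(self, root_key: bytes, ttl: int = 300,
                 max_per_minute: int = 5, clock=time.time):
        self._root_key = root_key          # never leaves this service
        self._ttl = ttl
        self._max = max_per_minute
        self._clock = clock
        self._requests: dict[str, list[float]] = {}

    def issue(self, caller: str, repo: str) -> dict:
        now = self._clock()
        recent = [t for t in self._requests.get(caller, []) if now - t < 60]
        if len(recent) >= self._max:
            raise PermissionError(f"rate limit exceeded for {caller}")
        self._requests[caller] = recent + [now]
        expires = int(now) + self._ttl
        sig = hmac.new(self._root_key, f"{repo}:{expires}".encode(),
                       hashlib.sha256).hexdigest()
        # the token is scoped to one repo and expires; the root key stays here
        return {"repo": repo, "expires": expires, "token": sig}

    def verify(self, repo: str, expires: int, token: str) -> bool:
        if self._clock() > expires:
            return False
        good = hmac.new(self._root_key, f"{repo}:{expires}".encode(),
                        hashlib.sha256).hexdigest()
        return hmac.compare_digest(good, token)
```

Even a full compromise of a worker that holds such a token only yields a few short-lived, single-repo credentials, not the root key.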
morgante commented on How we exploited CodeRabbit: From simple PR to RCE and write access on 1M repos   research.kudelskisecurity... · Posted by u/spiridow
morgante · 6 days ago
Yikes, this is a pretty bad vulnerability. It's good that they fixed it, but damning that it was ever a problem in the first place.

Rule #1 of building any cloud platform that analyzes user code is that you must run analyzers in isolated environments. Even beyond analysis tools frequently allowing direct code injection through plugins, linters/analyzers/compilers are complex software artifacts with large surface areas for bugs. You should ~never assume it's safe to run a tool against arbitrary repos in a shared environment.

I also ran a code analysis platform, where we ran our own analyzer[1] against customer repos. Even though we developed the analyzer ourselves, and didn't include any access to environment variables or network requests, I still architected it so executions ran in a sandbox. It's the only safe way to analyze code.
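One small piece of that hygiene can be shown in plain Python: never let the analyzer subprocess inherit the host environment. This is only the env-scrubbing layer, a sketch and not the platform's actual code; real isolation, as the comment says, means a VM or container boundary (e.g. Firecracker) around the whole execution:

```python
import subprocess

def run_analyzer(cmd, workdir):
    """Run an analysis tool with a scrubbed environment: the child sees
    only an explicit minimal environ, so host secrets in env vars (API
    keys, tokens) are not inherited. Not a sandbox by itself."""
    clean_env = {"PATH": "/usr/bin:/bin", "HOME": "/tmp"}  # nothing inherited
    return subprocess.run(
        cmd,
        cwd=workdir,
        env=clean_env,          # replaces, rather than extends, os.environ
        capture_output=True,
        text=True,
        timeout=60,             # a hung tool can't pin the worker forever
    )
```

Defense in depth: even inside a VM sandbox, there is no reason for the tool process to see the worker's environment variables at all.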

[1] https://github.com/getgrit/gritql

morgante commented on A startup doesn't need to be a unicorn   mattgiustwilliamson.subst... · Posted by u/MattSWilliamson
ilrwbwrkhv · 5 months ago
Also, the other thing that I realised after working with a bunch of VCs is that they are all incredibly dumb. Few VCs are founders themselves, and you will have better luck with those who are, but the majority of VCs simply have no idea about product and technology; they are just pattern matching. What that means is that they will cargo-cult everything, and if your startup doesn't fit the mold they will not respond to you favorably. The sad part is that the actual 10x, 100x returns that their VC firm needs come from exactly those types of investments, but they simply cannot see them.
morgante · 5 months ago
This is way too broad of a statement.

The smartest person and the dumbest person I've met professionally are both investors.

morgante commented on How to gain code execution on hundreds of millions of people and popular apps   kibty.town/blog/todesktop... · Posted by u/xyzeva
hakaneskici · 6 months ago
With privileged access, the attackers can tamper with the evidence for repudiation, so although I'd say "nothing in the logs" is acceptable, not everyone may. These two attack vectors are part of the STRIDE threat modeling approach.
morgante · 6 months ago
They don't elaborate on the logging details, but certainly most good systems don't allow log tampering, even for admins.
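One common way to make tampering at least detectable, regardless of admin access, is a hash-chained append-only log; this is a generic sketch, not a claim about the system discussed in the thread:

```python
import hashlib
import json

class HashChainLog:
    """Append-only log where each entry's hash commits to the previous
    entry's hash, so editing any earlier record breaks verification for
    everything after it."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._prev = self.GENESIS

    def append(self, record: dict) -> None:
        body = json.dumps(record, sort_keys=True)
        digest = hashlib.sha256((self._prev + body).encode()).hexdigest()
        self.entries.append({"record": record, "hash": digest})
        self._prev = digest

    def verify(self) -> bool:
        prev = self.GENESIS
        for entry in self.entries:
            body = json.dumps(entry["record"], sort_keys=True)
            if hashlib.sha256((prev + body).encode()).hexdigest() != entry["hash"]:
                return False  # chain broken: some record was altered
            prev = entry["hash"]
        return True
```

In practice the chain head is also shipped to a separate system the admins of the first one can't write to, which is what makes repudiation hard rather than merely inconvenient.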
morgante commented on We are the builders   wethebuilders.org/... · Posted by u/ChrisArchitect
filmgirlcw · 6 months ago
This is true based on the conversations I’ve had with my USDS friends too, but I’m under no illusion that DOGE will actually empower people to do the right things.

Like, as someone who is generally fairly process averse, I’ve come to the conclusion that there is a huge middle ground between too much process that hampers getting things done and no process that leads to decisions that either break things, or worse, set disastrous acts in motion because basic checks or conversations with people who have more context didn’t happen.

I think if there had been a good-faith attempt from the DOGE folks to audit and understand certain systems and processes, instead of dismantling and freezing programs, firing people, gleefully announcing how much money was “saved” (often with incorrect amounts), and reflexively ripping on how terrible everything is, you'd probably have gotten some cooperation from the people who have had to deal with bullshit bureaucracy. But that isn't what happened.

What's happened is akin to throwing the baby out with the bathwater, with all real security issues completely ignored, under the assumption that 19-year-old crypto bros have the work experience, social skills, or common sense to foresee what is happening.

Governments are inefficient. That’s as much a feature as it is a bug. But with USDS in particular, you had people who left high paying jobs to work for the government because they wanted to make things better for democracy and the country. That is decidedly not the goal of DOGE employees, who want to out McKinsey McKinsey when it comes to just slashing and burning.

morgante · 6 months ago
Unfortunately nuance is dead. I too wish Musk had tried to empower USDS instead of immediately alienating many of the people best positioned to improve things.


u/morgante
Karma: 854 · Cake day: July 27, 2022