Imagine the following case: an attacker posing as a company recruiter sends you a repository as a test task before an interview. You run something like "npm install" or invoke the Rust compiler, and now your computer is controlled by the attacker.
Or imagine one coworker's machine getting hacked, malicious code being committed to a repository, and the whole of G, F, or A now being owned by foreign hackers. All thanks to npm and the Rust compiler.
Maybe those tools should explicitly ask for confirmation before executing every external command (caching the list of allowed commands so they don't ask again). And maybe Linux should provide an easy-to-use, safe sandbox for developers. Currently I have to build sandboxes from scratch myself.
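For what it's worth, you can get part of the way there today with bubblewrap, though it is anything but easy to use. A rough sketch, not a complete recipe: flags and paths will likely need adjusting per distro, the project directory is the only writable mount, and the network is left on so the registry stays reachable.

    # sketch: the untrusted install sees read-only system dirs,
    # a writable project dir, and nothing else
    bwrap \
      --ro-bind /usr /usr \
      --ro-bind /etc /etc \
      --symlink usr/bin /bin \
      --symlink usr/lib /lib \
      --symlink usr/lib64 /lib64 \
      --proc /proc \
      --dev /dev \
      --tmpfs /tmp \
      --bind "$PWD" "$PWD" \
      --chdir "$PWD" \
      --setenv HOME "$PWD" \
      --unshare-all \
      --share-net \
      --die-with-parent \
      npm install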
Also, in many cases you don't need the ability to run external code at all. For example, to install a JS package, all you really need to do is download files.
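npm can already do this, as it happens: the "--ignore-scripts" flag skips the lifecycle hooks (preinstall, postinstall, and so on), which is where the arbitrary code execution lives.

    # skip lifecycle scripts for a single install
    npm install --ignore-scripts

    # or make it the default
    npm config set ignore-scripts true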
Also, this is an indication of why it is a bad idea to use environment variables for secrets and configuration. Whoever wrote the "Twelve-Factor App" apparently forgot that command-line switches and configuration files exist for this.
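The underlying problem is that the environment is inherited by every child process, so anything a build script spawns can read it. A contrived demo (API_KEY is a made-up name):

    export API_KEY=supersecret                    # hypothetical secret
    # any child process npm spawns sees it:
    node -e 'console.log(process.env.API_KEY)'    # prints "supersecret"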
What mechanism are you suggesting where access to the production system doesn’t let you also access that secret?
Like, I get it in this specific case where you are running some untrusted code: that environment should have been isolated and the keys not passed in. But running untrusted code isn't usually a common feature of most applications.
Rule #1 of building any cloud platform that analyzes user code is that you must run the analyzers in isolated environments. Even beyond analysis tools frequently allowing direct code injection through plugins, linters, analyzers, and compilers are complex software artifacts with large surface areas for bugs. You should ~never assume it's safe to run a tool against arbitrary repos in a shared environment.
I also ran a code analysis platform, where we ran our own analyzer[1] against customer repos. Even though we developed the analyzer ourselves, and it had no access to environment variables and made no network requests, I still architected it so executions ran in a sandbox. It's the only safe way to analyze code.
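For the curious, the shape of that kind of sandbox is roughly what you'd get from something like the following. This is a sketch, not our actual setup; "analyzer-image" is a placeholder: read-only rootfs, no network, dropped capabilities, resource limits, and the repo mounted read-only.

    docker run --rm \
      --network none \
      --read-only \
      --cap-drop ALL \
      --pids-limit 256 \
      --memory 512m \
      -v "$PWD/repo":/src:ro \
      analyzer-image /src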
The smartest person and the dumbest person I've met professionally are both investors.
https://en.wikipedia.org/wiki/Battle_of_Stalingrad
https://en.wikipedia.org/wiki/Battle_of_Midway
https://en.wikipedia.org/wiki/Battle_of_Pharsalus
Like, as someone who is generally fairly process-averse, I've come to the conclusion that there is a huge middle ground between too much process, which hampers getting things done, and no process, which leads to decisions that either break things or, worse, set disastrous acts in motion because basic checks or conversations with people who have more context didn't happen.
I think if there had been a good-faith attempt from the DOGE folks to audit and understand certain systems and processes, instead of gleefully dismantling and freezing programs, firing people, announcing how much money was "saved" (often with incorrect amounts), and reflexively ripping on how terrible everything is, you'd probably have gotten some cooperation from the people who have had to deal with bullshit bureaucracy. But that isn't what happened.
What's happened is akin to throwing the baby out with the bathwater, with all the real security issues completely ignored, on the pretense that 19-year-old crypto bros have the work experience, social skills, or common sense to foresee what is happening.
Governments are inefficient. That's as much a feature as it is a bug. But with USDS in particular, you had people who left high-paying jobs to work for the government because they wanted to make things better for democracy and the country. That is decidedly not the goal of DOGE employees, who want to out-McKinsey McKinsey when it comes to just slashing and burning.