About a month ago I had a rather annoying task to perform, and I found an NPM package that handled it. I threw “brew install NPM” or whatever onto the terminal and watched a veritable deluge of dependencies download and install. Then I typed in ‘npm ’ and my hand hovered on the keyboard after the space as I suddenly thought long and hard about where I was on the risk/benefit curve and then I backspaced and typed “brew uninstall npm” instead, and eventually strung together an oldschool unix utilities pipeline with some awk thrown in. Probably the best decision of my life, in retrospect.
This is why you want containerisation or, even better, full virtualisation. Running programs built on node, python or any other ecosystem that makes installing tons of dependencies easy (and thus frustratingly common) on your main system where you keep any unrelated data is a surefire way to get compromised by the supply chain eventually. I don't even have the interpreters for python and js on my base system anymore - just so I don't accidentally run something in the host terminal that shouldn't run there.
Here I go again: Plan9 had per-process namespaces in 1995. The namespace for any process could be manipulated to see (or not see) any parts of the machine that you wanted or needed.
I really wish people had paid more attention to that operating system.
That can only go so far. Assuming there is no container/VM escape, most software is built to be used. You can protect yourself from malicious dependencies in the build step. But at some point you are going to do a production build that needs to run on a production system, with access to production data. If you do not trust your supply chain, you need to fix that.
If you'll excuse me, I have a list of 1000 artifacts I need to audit before importing into our dependency store.
Containers don't help much when you deploy malware into your systems. Containers are not, and never will be, security tools on Linux; they lack many of the primitives needed to pull off that type of functionality.
It's funny because techies love to tell people that common sense is the best antivirus, don't click suspicious links, etc. only to download and execute a laundry list of unvetted dependencies with a keystroke.
The lesson, surely, is 'don't use web tech, aimed at solving browser incompatibility issues, for local scripting'.
When you're running NPM tooling you're running libraries primarily built for those problems, hence the torrent of otherwise unnecessary complexity (polyfills and the like), all running on a JS engine that doesn't have a browser attached to it.
In addition to concerns about npm, I'm now hesitant to use the GitHub CLI, which stores a highly privileged OAuth token in plain text in the HOME directory. Once an attacker accesses it, they can do almost anything on my behalf; in my case, they turned many of my private repos public.
Apparently, the GitHub CLI only stores its OAuth token in the HOME directory if you don't have a keyring. They also say it may not work on headless systems. See https://github.com/cli/cli/discussions/7109.
For example, on my macOS machines the token is safely stored in the OS keyring (yes, I double-checked the file where it would otherwise have been stored as plain text).
That's true, but the same may already be true of your browser's cookie file. I believe Chrome on macOS and Windows (unsure about Linux) now uses OS features to prevent it being read by other executables, but Firefox doesn't (yet).
But protecting specific directories is just whack-a-mole. The real fix is to properly sandbox code - an access whitelist rather than endlessly updating a patchy blacklist
One could easily allow or restrict visibility of almost anything to any program. There were/are some definite usability concerns with how it is done today (the OS was not designed to be friendly, but to try new things) and those could easily be solved. The core of this existed in the Plan9 kernel and the Plan9 kernel is small enough to be understood by one person.
I’m kinda angry that other operating systems don’t do this today. How much malware would be stopped in its tracks and made impotent if every program launched was inherently and natively walled off from everything else by default?
> But protecting specific directories is just whack-a-mole. The real fix is to properly sandbox code - an access whitelist rather than blacklist
I believe Wayland (don't quote me on this because I know exactly zero technical details) as opposed to x is a big step in this direction. Correct me if I am wrong but I believe this effort alone has been ongoing for a decade. A proper sandbox will take longer and risks being coopted by corporate drones trying to take away our right to use our computers as we see fit.
All our tokens should be in a protected keychain, and there are no proper cross-platform solutions for this. gcloud, the AWS SDKs, gh and other tools all just store them in dotfiles.
And the worst thing is, afaik there is no way to do it correctly on macOS, for example. I'd like to be corrected though.
What is a proper solution for this? I don't imagine gpg can help if you encrypt it but decrypt it when you log in to GNOME, right? However, it would be too much of a hassle to have to authenticate each time you need a token. I imagine macOS people have access to the secure enclave using Touch ID, but even that is not available on all devices.
I feel like we are barking up the wrong tree here. The plain text token thing can't be fixed. We have to protect our computers from malware to begin with. Maybe Microsoft was right to use secure admin workstations (SAWs) for privileged access, but then again it is too much of a hassle.
For what it’s worth, the recommended way of getting credentials for AWS would be either:
1. Piggyback off your existing auth infra (e.g. Active Directory or whatever you already have going on for user auth)
2. Failing that, use Identity Center to create user auth in AWS itself
Either way means that your machine gets temporary credentials only
Alternatively, we could write an AWS CLI helper to store the stuff into the keychain (maybe someone has)
This doesn't sound like a technical problem to me. Even my throw-away bash scripts call to `secret-tool lookup`, since that is actually easier than implementing your own configuration.
Also this is a complete non-issue on Unix(-like) systems, because everything is designed around passing small strings between programs. Getting a secret from another program is the same amount of code as reading it from a text file, since everything is a file.
What? The macOS Keychain is designed exactly for this. Every application that wants to access a given keychain entry triggers a prompt from the OS, and you must enter your password to grant access.
I'm also a victim of this. That's the last time I try to install Backstage.
Have you wiped your laptop/infected machine? If not I would recommend it; part of it created a ~/.dev-env directory which turned my laptop into a GitHub runner, allowing for remote code execution.
I have a read-only filesystem OS (Bluefin Linux) and I don't know quite how much this has saved me, because so much of the attack happens in the home directory.
> "This creates a dangerous scenario. If GitHub mass-deletes the malware's repositories or npm bulk-revokes compromised tokens, thousands of infected systems could simultaneously destroy user data."
Pop quiz, hot shot! A terrorist is holding user data hostage, got enough malware strapped to his chest to blow a data center in half. Now what do you do?
The hostage naively walked past all the police and into the data centre, and you’re shooting them in the leg. They’ll probably survive, but they knowingly or incompetently made their choice. Sucks to be them.
Does anyone know why NPM seems to be the only attractive target? Python and Java are very popular, but I haven't heard anything in those ecosystems for a while. Is it because of something inherently "weak" about NPM, or simply because, like Windows or JavaScript, everyone uses it?
Compared to the Java ecosystem, I think there are a couple of issues in the NPM ecosystem that make the situation a lot worse:
1) The availability of the package post-install hook that can run any command after simply resolving and downloading a package[1].
That, combined with:
2) The culture of using version ranges for dependency resolution[2] means that any compromised package can spread with ridiculous speed (and then use the post-install hook to compromise other packages). Version ranges exist in the Java ecosystem too, but in my experience they're not the norm: you get new dependencies when you actively bump the ones you directly use, because everything depends on specific versions.
I'm no NPM expert, but those are the worst offenders from a technical perspective, in my opinion.
[1]: I'm sure it can be disabled, and it might even be now by default - I don't know.
[2]: Yes, I know you can use a lock file, but it's definitely not the norm to actively consider each upgraded version when refreshing the lockfile.
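To make the combination concrete, here's a hypothetical package.json (names made up) showing both mechanisms: a caret range that silently accepts any new minor/patch release, and a postinstall script that npm runs automatically after download:

```json
{
  "name": "innocent-package",
  "version": "1.2.3",
  "scripts": {
    "postinstall": "node ./collect.js"
  },
  "dependencies": {
    "left-pad-ish": "^4.0.0"
  }
}
```

Anyone depending on `innocent-package` with a range like `^1.2.0` would pick up a compromised 1.2.4 on their next unpinned install, and the postinstall hook would run before they ever import the code.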
Also, badly named commands: `npm install` updates your packages to the latest version allowed by package.json and updates the lock file; `npm ci` is what people usually want: install the versions according to the lock file.
IMO, `ci` should be `install`, `install` should be `update`.
Plus the install command is reused to add dependencies, that should be a separate command.
> The culture with using version ranges for dependency resolution
Yep, auto-updating dependencies are the main culprit in why malware can spread so fast. I strongly recommend using `save-exact` in npm and only updating your dependencies when you actually need to.
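For reference, `save-exact` can be set once in your `.npmrc` so that `npm install <pkg>` records an exact version instead of a caret range:

```ini
; ~/.npmrc
save-exact=true
```

New dependencies then land in package.json as e.g. `"1.2.3"` rather than `"^1.2.3"`, so nothing moves until you bump it deliberately.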
* NPM has a culture of "many small dependencies", so there's a very long tail of small projects that are mostly below the radar that wouldn't stand out initially if they get a patch update. People don't look critically into updated versions because there's so many of them.
* Developers have developed a culture of staying up-to-date as much as possible, so any patch release is applied as soon as possible, often automated. This is mainly sold as a security feature, so that a vulnerability gets patched and released before disclosure is done. But it was (is?) also a thing where if you wait too long to update, updating takes more time and effort because things keep breaking.
One factor is that node's philosophy is to have a very limited standard library and rely on community software for a ton of stuff.
That means that not only does the average project have a ton of dependencies, but any given dependency will in turn have a ton of dependencies as well. There are multiplicative effects in play.
This is the main reason. Python's ecosystem also has silly trends, package churn, and plenty of untrained developers. It’s the lack of a proper standard library. As bad a language as it may be, Java shows how to get this right.
Larger attack surface (JS has been the #1 language on GitHub for years now) and more amateur developers (who are more likely to blindly install dependencies, not harden against dev attack vectors, etc).
Also: a culture of constant churn in libraries which in combination with the potential for security bugs to be fixed in any new release leads to a common practice of ingesting a continual stream of mystery meat. That makes filtering out malware very hard. Too much noise to see the signal. None of the above cultural factors is present in the other ecosystems.
Unfortunately, blindly installing dependencies at compile-time is something that many projects will do by default nowadays. It's not just "more amateur developers" who are at risk here.
I've even seen "setup scripts" for projects that will use root (with your permission) to install software. Such scripts are less common now with containers, but unfortunately containers aren't everything.
Basically any dependency can (used to?) run any script with the developer's permissions on install. JVM and Python package managers don't do this.
Of course, in all ecosystems, once you actually run the code it can do whatever it wants with the permissions of the executing program, but this is another hurdle.
Python absolutely can run scripts on installation. Before pyproject.toml, arbitrary scripts were the only way to install a package. It's the reason PyPI.org doesn't show a dependency graph: dependencies are declared in the Turing-complete setup.py.
As far as I understand, NPM packages are not self-contained like e.g. Python wheels and can (and often need to) run scripts on install.
So just installing a package can get you compromised. If the compromised box contains credentials to update your own packages in NPM, then it's an easy vector for a worm to propagate.
Maybe some technical reasons, but it's more the mindset of the JS "community": if you don't have the latest version of a package 30 seconds after it's pushed, you're hopelessly behind.
In other "communities" you upgrade dependencies when you have time to evaluate the impact.
I feel the upgrade cycle with Python is slower. I upgrade dependencies when something is broken or there is a known issue. That means any active vulnerabilities propagate slower, and slower propagation means lower risk. And as there are fewer upstream packages, the impact of a compromised maintainer is more limited.
The credential harvesting aspect is what concerns me most for the average developer. If you've ever run `npm install` on an affected package, your environment variables, .npmrc tokens, and potentially other cached credentials may have been exfiltrated.
The action item for anyone potentially affected: rotate your npm tokens, GitHub PATs, and any API keys that were in environment variables. And if you're like most developers and reused any of those passwords elsewhere... rotate those too.
This is why periodic credential rotation matters - not just after a breach notification, but proactively. It reduces the window where any stolen credential is useful.
The article has some indicators of compromise, the main one locally would be .truffler-cache/ in the home directory. It’s more obvious for package maintainers with exposed credentials, who will have a wormed version of their own packages deployed.
From what I’ve read so far (and this definitely could change), it doesn’t install persistent malware, it relies on a postinstall script. So new tokens wouldn’t be automatically exfiltrated, but if you npm install any of an increasing number of packages then it will happen to you again.
> if you're like most developers and reused any of those passwords elsewhere
Is this true? God I hope not, if developers don't even follow basic security practices then all hope is lost.
I'd assume this is stating the obvious, but storing credentials in environment variables or files is a big no-no. Use a security key or at the very least an encrypted file, and never reuse any credential for anything.
> Is this true? God I hope not, if developers don't even follow basic security practices then all hope is lost.
"Basic security practices" is an ever expanding set of hoops to jump through, that if properly followed, stop all work in its tracks. Few are following them diligently, or at all, if given any choice.
Places that care about this - like actually care, because of contractual or regulatory reasons - don't even let you use the same machine for different projects or customers. I know someone who often has to carry 3+ laptops on them because of this.
Point being, there's a cost to all these "basic security practices", cost that security practitioners pretend doesn't exist, but in fact it does exist, and it's quite substantial. Until the security world acknowledges this fact openly, they'll always be surprised by how people "stubbornly" don't follow "basic practices".
I think so. I know too many developers who cannot be bothered to have a password manager beyond the Chrome/Firefox default one. And even then, it's usually the same standard 2-3 passwords they reuse everywhere.
To me, the worming aspect, and taking developers' data hostage against infrastructure takedown, is most concerning.
Previously, you had isolated places to clean up after a compromise and you were good to go again. This attack takes a semi-distributed approach and attacks the ecosystem as a whole, and I am afraid this approach will get more sophisticated in the future. It reminds me a little of malicious transactions written into a distributed ledger.
Even with periodic rotation of credentials, the attacker gets enough time to do sufficient damage. IMO, the best solution would be to not handle any sort of credentials at the application layer at all! If it must, the application should only handle very short-lived tokens. Let there be a sidecar (for example) that does the actual credential injection.
Also a good reminder that you should be storing secrets in some kind of locker, not in plain text via environment variables or config files. Impossible to get everyone on board but if you can you should as much as possible.
I hate that high profile services still default to plain text for credential storage.
If I just need to `fly secrets set KEY=hunter2` one time for production I can copy it from a paper pad even but if it's a key I need to use every time I run a program that I'm developing on, it's likely going to end up at least being in my program's shell environment (and thus readable from its /proc/pid/environ). So if I `npm install compromised-package` – even from some other terminal – can't it just `grep -a KEY= /proc/*/environ`?
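Yes; on Linux, any process running as your user can read any of your other processes' environments. A quick sketch (the variable name is made up) of exactly what the quoted grep would find:

```python
import os
import subprocess
import sys

# Start a stand-in "dev server" with a secret in its environment.
child = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(30)"],
    env={**os.environ, "DEMO_SECRET": "hunter2"},
)
try:
    # What a malicious postinstall script could do from another terminal:
    # /proc/<pid>/environ holds the NUL-separated initial environment.
    with open(f"/proc/{child.pid}/environ", "rb") as f:
        entries = f.read().split(b"\0")
    leaked = [e.decode() for e in entries if e.startswith(b"DEMO_SECRET=")]
    print(leaked)
finally:
    child.kill()
```

So yes: env vars are only as private as your user account, and a compromised `npm install` in any terminal can sweep them all.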
Or are you saying the programs we hack on should use some kind of locker api to fetch secrets and do away with env vars?
I would put blame on contemporary GitHub for a few things but this is not one of them. We need better community practices and tools. We can't expect to rely on Microsoft to content-filter.
I love! how GitHub, a corporation now owned by Microsoft, is directly tied to Go as the host of the vast majority of packages/dependencies.
Imagine the number of things that can go wrong when they try to regulate or introduce restrictions for build workflows for the purpose of making some extra money... lol
The original Java platform is a good example to think about.
That's the collective choice of the authors of those packages. A go module path is literally just the canonical URL where you can download the module.
The golang modules core to the language are hosted at golang.org
Module authors have always been free to have their own prefix rather than github.com, even if they host their module on Github. If they say their module is example.com/foo and then set their webserver to respond to https://example.com/foo?go-get=1 with <meta name="go-import" content="example.com/foo mod https://github.com/the_real_repository/foo"> then they will leave no hint that it's really hosted at github, and they could host it somewhere else in future (including at https://example.com directly if they want)
Another feature is that go uses a default proxy, https://proxy.golang.org/, if you don't set one yourself. This means that Google, who control that proxy, can choose to make a request for a package like github.com/foo/bar go to some place else, if for whatever reason Microsoft won't honour it any more.
Golang builds pulling a github.com/foo/bar/baz module don't rely on any GitHub "build workflow", so unless you mean they're going to start restricting or charging for git clones for public repos (before you mention Docker Hub, yes I know), nothing's gonna change. And even if they're crazy enough to do that, Go module downloads default to a proxy (proxy.golang.org by default, can be configured and/or self-hosted) and only fall back to vcs if the module's not available, so a module only needs to be downloaded once from GitHub anyway. Oh and once a module is cached in the proxy, the proxy will keep serving it even if the repo/tag is removed from GitHub.
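Which also means a team can already put a proxy it controls first in line via the GOPROXY environment variable (the internal URL here is hypothetical):

```
GOPROXY=https://athens.example.internal,https://proxy.golang.org,direct
```

Go tries each source in order (with commas, it only falls through when a proxy answers 404/410), so GitHub via `direct` is only hit if neither proxy has the module.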
"The original Java platform" had no package management though, that came with Maven and later Gradle, that have similar vectors for supply chain attacks (that is, nobody reviews anything before it's made available on package repositories).
And (to put on my Go defender hat), the Go ecosystem doesn't like having many dependencies, in part because of supply chain attack vectors and the fact that Node's ecosystem went a bit overboard with libraries.
Pushing the data to Github was a blessing in disguise. A friend wouldn't have noticed he got caught if it didn't create a repo on his account.
It would have been worse if it silently sent the data to some random server.
Wouldn’t have been that hard to write a rule that matches the repositories being created by this malware. It literally does the same thing to every victim.
So I'm surprised to never see something akin to "our AI systems flagged a possible attack" in these posts. Or that GitHub, owned by AI-pusher Microsoft, doesn't already use their AI to find this kind of attack before it becomes a problem.
Where is this miracle AI for cybersecurity when you need it?
The security product marketers ruined “a possible attack” as a brag 25 years ago. Every time a firewall blocks something, it’s a possible attack being blocked, and imagine how often that happens.
Sonatype Lifecycle has some magic to prevent these types of attacks. They claim it is AI-based. Not sure how it all works, as it is proprietary, but it is one of the things we use at work. Sonatype IQ Server powers it.
Once you run the JavaScript of the npm library you just installed, if it's Node, what's to stop it accessing environment variables and any file it wants, and sending data to any domain it wants?
Regardless, it’s worth using `--ignore-scripts=true` because that’s the common vector these supply chain attacks target. Consider that when automating the attack, adding it to the application code is more difficult than injecting it into life-cycle scripts, which have well-known config lines.
Yes, it can break deps, some will not install. Puppeteer is a good example because it installs binaries. But it also shows an error with the cmd needed to complete the installation.
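The flag can also be made the default so you don't have to remember it per install:

```ini
; ~/.npmrc
ignore-scripts=true
```

Note this disables all life-cycle scripts, including your own project's (e.g. `prepare` on install), which is part of why it breaks some workflows.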
Why is it allowed by default?
> it’s npm’s belief that the utility of having installation scripts is greater than the risk of worms.
Which can't be the right way.
It’s a fair angle you’re taking here, but I would only expect to see it on hardened servers.
> afaik there is no way to do it correctly on macOS

https://developer.apple.com/documentation/security/keychain-...
And similar services exist on Linux desktops. There are libraries that will automatically pick the right backend.
Not to take away from your more general point, but we need Flatpak for CLI tools.
> Pop quiz, hot shot! A terrorist is holding user data hostage, got enough malware strapped to his chest to blow a data center in half. Now what do you do?
Shoot the hostage.
One package for lists, one for sorting, and down the rabbit hole you go.
What we really need is a system to restrict packages in what they can do (for example, many packages don't need network access).
In Python you can use `pip install <package> --only-binary :all:` to only install wheels and fail otherwise.
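That policy can also be made permanent in pip's config file (the path varies by platform; this is the typical Linux location):

```ini
# ~/.config/pip/pip.conf
[install]
only-binary = :all:
```

With this in place, any `pip install` that would need to execute a setup.py simply fails instead.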
Last time I did anything with Java, it felt like the use of multiple package repositories, including private ones, was a lot more popular.
Although higher branching factor for JavaScript and potential target count are probably very important factors as well.
not chat bots.
How does one know one is affected?
What's the point of rotating tokens if I'm not sure whether I've been affected? The new tokens will just be exfiltrated as well.
First step would be to identify infection, then clean up and then rotate tokens.
From what I’ve read so far (and this definitely could change), it doesn’t install persistent malware, it relies on a postinstall script. So new tokens wouldn’t be automatically exfiltrated, but if you npm install any of an increasing number of packages then it will happen to you again.
Is this true? God I hope not, if developers don't even follow basic security practices then all hope is lost.
I'd assume this is stating the obvious, but storing credentials in environment variables or files is a big no-no. Use a security key or at the very least an encrypted file, and never reuse any credential for anything.
"Basic security practices" is an ever expanding set of hoops to jump through, that if properly followed, stop all work in its tracks. Few are following them diligently, or at all, if given any choice.
Places that care about this - like actually care, because of contractual or regulatory reasons - don't even let you use the same machine for different projects or customers. I know someone who often has to carry 3+ laptops on them because of this.
Point being, there's a cost to all these "basic security practices", cost that security practitioners pretend doesn't exist, but in fact it does exist, and it's quite substantial. Until security world acknowledges this fact openly, they'll always be surprised by how people "stubbornly" don't follow "basic practices".
Previously, you had isolated places to clean up after a compromise and you were good to go again. This attack exploits the semi-distributed nature of the ecosystem and attacks it as a whole, and I am afraid this approach will get more sophisticated in the future. It reminds me a little of malicious transactions written into a distributed ledger.
I hate that high profile services still default to plain text for credential storage.
If I just need to `fly secrets set KEY=hunter2` one time for production, I can even copy it from a paper pad. But if it's a key I need every time I run a program I'm developing, it's likely to end up at least in my program's shell environment (and thus readable from its /proc/pid/environ). So if I `npm install compromised-package` – even from some other terminal – can't it just `grep -a KEY= /proc/*/environ`?
Or are you saying the programs we hack on should use some kind of locker api to fetch secrets and do away with env vars?
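To make the threat concrete: yes, any process running as your uid can do exactly that. Here is a minimal sketch (in Go, assuming a Linux host; the key name `KEY` is a placeholder) of scraping env vars out of /proc, where each environ file is a blob of NUL-separated KEY=VALUE records:

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// scrapeEnviron parses a NUL-separated KEY=VALUE blob, as found in
// /proc/<pid>/environ, and returns every value stored under key.
func scrapeEnviron(blob []byte, key string) []string {
	var found []string
	for _, rec := range bytes.Split(blob, []byte{0}) {
		s := string(rec)
		if strings.HasPrefix(s, key+"=") {
			found = append(found, strings.TrimPrefix(s, key+"="))
		}
	}
	return found
}

func main() {
	// Walk every process whose environ we can read. Permission errors
	// (other users' processes) are silently skipped, which is exactly
	// the point: processes owned by *your* uid are fair game.
	matches, _ := filepath.Glob("/proc/[0-9]*/environ")
	for _, path := range matches {
		blob, err := os.ReadFile(path)
		if err != nil {
			continue
		}
		for _, v := range scrapeEnviron(blob, "KEY") {
			fmt.Printf("%s: KEY=%s\n", path, v)
		}
	}
}
```

Note that /proc/pid/environ shows the environment as it was at exec time, so a secrets-locker API that fetches credentials at runtime (rather than exporting them into the shell) does narrow this particular hole.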
GitHub has a massive malware problem as it is and it doesn’t get enough attention.
Imagine the number of things that can go wrong when they try to regulate or introduce restrictions for build workflows for the purpose of making some extra money... lol
The original Java platform is a good example to think about.
The golang modules core to the language are hosted at golang.org
Module authors have always been free to use their own prefix rather than github.com, even if they host their module on GitHub. If they declare their module as example.com/foo and set their web server to respond to https://example.com/foo?go-get=1 with <meta name="go-import" content="example.com/foo git https://github.com/the_real_repository/foo">, then the import path leaves no hint that the code is really hosted on GitHub, and they can move it somewhere else in the future (including serving it from https://example.com directly if they want).
https://go.dev/ref/mod#vcs
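The go-import responder described above can be only a few lines. A minimal sketch with placeholder names (example.com/foo and the GitHub URL are stand-ins; the vcs field is `git` for a plain git-hosted repository):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// goImportMeta builds the HTML page the Go toolchain looks for when it
// resolves a vanity import path with ?go-get=1.
func goImportMeta(prefix, vcs, repo string) string {
	return fmt.Sprintf(
		`<html><head><meta name="go-import" content="%s %s %s"></head></html>`,
		prefix, vcs, repo)
}

// vanityHandler answers go-get probes for the (placeholder) module
// example.com/foo, pointing the toolchain at the real git host.
func vanityHandler(w http.ResponseWriter, r *http.Request) {
	if r.URL.Query().Get("go-get") != "1" {
		http.NotFound(w, r)
		return
	}
	fmt.Fprint(w, goImportMeta(
		"example.com/foo", "git", "https://github.com/the_real_repository/foo"))
}

func main() {
	// Demo against a throwaway test server; in production you would
	// register vanityHandler at /foo and call http.ListenAndServe.
	srv := httptest.NewServer(http.HandlerFunc(vanityHandler))
	defer srv.Close()
	resp, err := http.Get(srv.URL + "/foo?go-get=1")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```

Swapping the repository URL in one place later moves the module without breaking any importer, which is the portability point being made.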
Another feature is that go uses a default proxy, https://proxy.golang.org/, if you don't set one yourself. This means that Google, who controls that proxy, can choose to serve a package like github.com/foo/bar from somewhere else if, for whatever reason, Microsoft no longer honours it.
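For reference, the default value of that setting and how to override it (the internal proxy URL below is a placeholder):

```shell
# Default: try Google's proxy first, then fall back to fetching
# directly from the VCS host.
go env -w GOPROXY='https://proxy.golang.org,direct'

# Point at your own proxy instead (placeholder URL):
go env -w GOPROXY='https://goproxy.internal.example.com,direct'
```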
And (to put on my Go defender hat), the Go ecosystem doesn't like having many dependencies, in part because of supply chain attack vectors and the fact that Node's ecosystem went a bit overboard with libraries.
So I'm surprised never to see something akin to "our AI systems flagged a possible attack" in those posts. Or that GitHub - owned by Microsoft, of AI-pusher fame - doesn't already use its AI to find these kinds of attacks before they become a problem.
Where is this miracle AI for cybersecurity when you need it?
Edit: see the curl posts about them being bombarded with "AI" generated security reports that mean nothing and waste their time.
https://blog.uxtly.com/getting-rid-of-npm-scripts
https://nodejs.org/api/permissions.html
Regardless, it’s worth using `--ignore-scripts=true`, because lifecycle scripts are the common vector these supply chain attacks target. Consider that when automating an attack, injecting the payload into life-cycle scripts - which have well-known config lines - is easier than adding it to the application code.
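The flag can also be made permanent per-project or per-user via .npmrc, so you don't have to remember it on every install:

```ini
# .npmrc — skip all lifecycle scripts (preinstall/install/postinstall)
# for every `npm install` run from this directory or by this user.
ignore-scripts=true
```

The trade-off is that packages which genuinely need an install step (e.g. native addon builds) must then be handled explicitly.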
- If it's safe to "ignore scripts", why does this option exist in the first place?
- Otherwise, what kind of cascading breakage in dependencies do you risk by suppressing part of their installation process?
Why is it allowed by default?
> it’s npm’s belief that the utility of having installation scripts is greater than the risk of worms.
NPM co-founder Laurie Voss
https://blog.npmjs.org/post/141702881055/package-install-scr...
I'm curious though: how do you avoid being stuck on the _vulnerable_ versions, delaying updates?