More info:
- https://github.com/chalk/chalk/issues/656
- https://github.com/debug-js/debug/issues/1005#issuecomment-3...
Affected packages (at least the ones I know of):
- ansi-styles@6.2.2
- debug@4.4.2 (appears to have been yanked as of 8 Sep 18:09 CEST)
- chalk@5.6.1
- supports-color@10.2.1
- strip-ansi@7.1.1
- ansi-regex@6.2.1
- wrap-ansi@9.0.1
- color-convert@3.1.1
- color-name@2.0.1
- is-arrayish@0.3.3
- slice-ansi@7.1.1
- color@5.0.1
- color-string@2.1.1
- simple-swizzle@0.2.3
- supports-hyperlinks@4.1.1
- has-ansi@6.0.1
- chalk-template@1.1.1
- backslash@0.2.1
It looks and feels a bit like a targeted attack.
Will try to keep this comment updated as long as I can before the edit expires.
---
Chalk has been published over. The others remain compromised (8 Sep 17:50 CEST).
NPM has yet to get back to me. My NPM account is entirely unreachable; forgot password system does not work. I have no recourse right now but to wait.
Email came from support at npmjs dot help.
Looked legitimate at first glance. Not making excuses, just had a long week and a panicky morning and was just trying to knock something off my list of to-dos. Made the mistake of clicking the link instead of going directly to the site like I normally would (since I was mobile).
Just NPM is affected. Updates to be posted to the `/debug-js` link above.
Again, I'm so sorry.
https://socket.dev/blog/npm-author-qix-compromised-in-major-...
While it sucks that this happened, the good thing is that the ecosystem mobilized quickly. I think these sorts of incidents really show why package scanning is essential for securing open source package repositories.
Just want to agree with everyone who is thanking you for owning up (and so quickly). Got phished once while drunk in college (a long time ago), could have been anyone. NPM being slowish to get back to you is a bit surprising, though. Seems like that would only make attacks more lucrative.
Hey, no problem, man. You do a lot for the community, and it's not all your fault. We learn from our mistakes. I was thinking of having a public fake profile to avoid this type of attack, but I'm not sure how it would work with git's tracking capabilities. Probably keep the real one only between you & NPM and have some fake ones open to the public. Not sure, just a rough idea.
Thanks for taking the responsibility and working on fixing this ASAP. God bless you.
Most people who get phished aren’t using password managers, or they would notice that the autofill doesn’t work because the domain is wrong.
Additionally, TOTP 2FA (numeric codes) is phishable; stop using it when U2F/WebAuthn/passkeys are available.
I have never been phished because I follow best practices. Most people don’t.
Wow, that's actually kinda genius, not gonna lie. Honestly, I would love to see some 2FA or some other way to prevent pwning. Maybe sign-in with Google, with all of its flaws, still makes sense given that it effectively acts as 2FA.
But google comes with its own privacy nightmares.
Tbh, it's not your fault per se; everybody can fall for phishing emails. The issue, IMO, lies with npmjs which publishes to everyone all at the same time. A delayed publish that allows parties like Aikido and co to scan for suspicious package uploads first (e.g. big changes in patch releases, obfuscated code, code that intercepts HTTP calls, etc), and a direct flagging system at NPM and / or Github would already be an improvement.
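A consumer-side version of that delay can be approximated today against the public registry's metadata. A rough Node sketch (the seven-day threshold is an arbitrary assumption, not an established tool):

    // Refuse to upgrade to versions that haven't "cooked" long enough (sketch).
    // Uses the registry's public "time" map; assumes Node 18+ for global fetch.
    const REGISTRY = 'https://registry.npmjs.org';

    async function publishedAgeDays(name, version) {
      const res = await fetch(`${REGISTRY}/${name}`);
      const meta = await res.json();
      const published = meta.time?.[version];
      if (!published) throw new Error(`${name}@${version} not found`);
      return (Date.now() - Date.parse(published)) / 86_400_000;
    }

    // Example: treat anything younger than 7 days as not yet trustworthy.
    publishedAgeDays('debug', '4.4.2').then((age) => {
      if (age < 7) console.log('too fresh, hold the upgrade');
    });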
Thank you, I appreciate it! I did so as well and even called their support line to have them escalate it. Hopefully they'll treat this as an urgent thing; I'd imagine I'm far from the only one getting these.
It's been almost two hours without a single email back from npm. I am sitting here struggling to figure out what to do to fix any of this. The packages that have Sindre as a co-publisher have been published over but even he isn't able to yank the malicious versions AFAIU.
If there's any ideas on what I should be doing, I'm all ears.
EDIT: I've heard back, they said they're aware and are on it, but no further details.
Please take care and see this as things that happen and not your own personal failure.
Hey, this is an exemplary response, transparent and fast, in what must be a very stressful situation!
I figure you aren't about to get fooled by phishing anytime soon, but based on some of your remarks and remarks of others, a PSA:
TRUSTING YOUR OWN SENSES to "check" that a domain is right, or an email is right, or the wording has some urgency or whatever is BOUND TO FAIL often enough.
I don't understand how most of the anti-phishing advice focuses on that, it's useless to borderline counter-productive.
What really helps against phishing:
1. NEVER EVER login from an email link. EVER. There are enough legit and phishing emails asking you to do this that it's basically impossible to tell one from the other. The only way to win is to not try.
2. U2F/Webauthn key as second factor is phishing-proof. TOTP is not.
That is all there is. Any other method, any other "indicator" helps but is error-prone, which means someone somewhere will get phished eventually. Particularly if stressed, tired, or in a hurry. It just happened to be you this time.
Good luck and well done again on the response!
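To put one concrete detail on point 2: WebAuthn resists this because the credential is origin-bound; the browser will not produce an assertion for a lookalike domain. A browser-side sketch, with illustrative values:

    // The browser checks the relying-party ID against the page's origin,
    // so npmjs.help can never exercise a credential scoped to npmjs.com.
    async function secondFactor(serverChallenge) {
      return navigator.credentials.get({
        publicKey: {
          challenge: serverChallenge, // random bytes issued by the real server
          rpId: 'npmjs.com',          // illustrative relying-party ID
          userVerification: 'preferred',
        },
      });
    }

TOTP, by contrast, is just a number the user can be talked into typing anywhere, which is exactly what happened here.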
Login using one-off email links (instead of username + password) is increasingly common, which means it's sometimes the only option.
Or you know, get a password manager like the rest of us. If your password manager doesn't show the usual autofill because the domain differs from what it should be, take a step back and validate everything before moving on.
Have the TOTP in the same/another password manager (after considering the tradeoffs) and that can also not be entered unless the domain is right :)
Thank you for the swift and candid response, this has to suck. :/
Folks from multi-billion dollar companies with multimillion dollar packages should learn a few things from this response.
> The author appears to have deleted most of the compromised package before losing access to his account. At the time of writing, the package simple-swizzle is still compromised.
Is this quote from TFA incorrect, since npm hasn’t yanked anything yet?
npm does appear to have yanked a few, slowly, but I still don't have any insight as to what they're doing exactly.
The fact that NPM's entire ecosystem relies on this not happening regularly is very scary.
I’m extremely security conscious and that phishing email could have easily gotten me. All it takes is one slip up. Tired, stressed, distracted. Boom, compromised.
Great of you to own up to it.
Screenshot here: https://imgur.com/a/q8s235k
I hate that kind of email when sent out legitimately. Google does this crap all the time pretty much conditioning their customers to click those links. And if you're really lucky it's from some subdomain they never bothered advertising as legit.
Atlassian and MS are terrible for making email notifications that are really hard to distinguish from phishing emails. Using hard to identify undocumented random domains in long redirect chains, obfuscating links etc etc.
I’ve started ignoring these types of emails and wait to do any sort of credentials reset until I get an alert when I log in (or try to) for just this reason.
The claim was that it had been more than 12 months since I last updated them. Npm has done outreach about security changes/enhancements in the past, so this didn't really strike me as unusual.
I am not a very sophisticated npm user on macOS, but I installed a bunch of packages for Claude Code development. How do we check if a computer has a problem?
Do we just run:
npm list -g #for global installs
npm list #for local installs
And check if any packages appear that are on the above list?
Thanks!
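If it helps, here's a rough Node sketch of that check against a project's package-lock.json. The version list is copied from the top comment, so treat it as a snapshot rather than an authoritative feed; it also assumes lockfileVersion 2+:

    // check-lock.js: flag known-bad versions pinned in package-lock.json (sketch).
    const fs = require('node:fs');

    const bad = new Set([
      'ansi-styles@6.2.2', 'debug@4.4.2', 'chalk@5.6.1', 'supports-color@10.2.1',
      'strip-ansi@7.1.1', 'ansi-regex@6.2.1', 'wrap-ansi@9.0.1', 'color-convert@3.1.1',
      'color-name@2.0.1', 'is-arrayish@0.3.3', 'slice-ansi@7.1.1', 'color@5.0.1',
      'color-string@2.1.1', 'simple-swizzle@0.2.3', 'supports-hyperlinks@4.1.1',
      'has-ansi@6.0.1', 'chalk-template@1.1.1', 'backslash@0.2.1',
    ]);

    const lock = JSON.parse(fs.readFileSync('package-lock.json', 'utf8'));
    for (const [path, meta] of Object.entries(lock.packages ?? {})) {
      const name = path.split('node_modules/').pop(); // last segment is the package name
      if (meta.version && bad.has(`${name}@${meta.version}`)) {
        console.log(`compromised: ${name}@${meta.version} (${path})`);
      }
    }

Run it from each project root; no output means none of the listed versions are pinned there, which says nothing about other machines or globally installed tools.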
Does anyone know how this attack works? Is it a CSRF against npmjs.com?
It wasn't a single-click attack, sorry for the confusion. I logged into their fake site with a TOTP code.
You log in with your credentials, and the attacker logs in to the real site.
You get an SMS with a one-time code from the real site and input it into the fake site.
The attacker takes the code and finishes the login to the real site.
Hey, new dev here. Sorry if this is common knowledge and I am asking a stupid question. How does you getting phished affect these NPM packages? Aren't these handled by NPM or the developers of them?
The guy is actually the maintainer of those packages. So whoever got his credentials became able to perform releases on those packages. NPM itself does not build any package, it's just a place where people can publish stuff
OP is the developer & maintainer of the affected packages, so the attacker was able to use their phished credentials to upload compromised versions to NPM.
Thanks for leaving a transparent response with what happened, how you responded, and what you're doing next, and for concisely taking accountability. Great work!
I'm sorry that you're having to go through this. Good luck sorting out your account access.
I actually got hit by something that sounds very similar back in July. I was saved by my DNS settings where "npNjs dot com" wound up on a blocklist. I might be paranoid, but it felt targeted and was of a higher level of believability than I'd seen before.
I also more recently received another email asking for an academic interview about "understanding why popular packages wouldn't have been published in a while" that felt like elicitation or an attempt to get publishing access.
Sadly both of the original emails are now deleted so I don't have the exact details anymore, but stay safe out there everyone.
thanks for your efforts!
maybe you should work with feross to make a website/API that simply gives you a true/false answer to "can I safely update my dependencies right now", providing an out-of-band way to mark the current (or all) versions of compromised packages.
mistakes happen. owning them doesn't always happen, so well done.
phishing is too easy. so easy that I don't think the completely unchecked growth of ecosystems like NPM can continue. metastasis is not healthy. there are too many maintainers writing too many packages that too many others rely on.
Ignore anything coming from npm you didn't expect. Don't click links, go to the website directly and address it there. That's what I should have done, and didn't because I was in a rush.
Don't do security things when you're not fully awake, too. Lesson learned.
The email was a "2FA update" email telling me it's been 12 months since I updated 2FA. That should have been a red flag but I've seen similarly dumb things coming from well-intentioned sites before. Since npm has historically been in contact about new security enhancements, this didn't smell particularly unbelievable to my nose.
The email went to the npm-specific inbox, which is another way I can verify them. That address can be queried publicly, but I don't generally count on spammers finding that one; they usually scrape git addresses etc.
The domain name was `npmjs dot help` which obviously should have caught my eye, and would have if I was a bit more awake.
The actual in-email link matched what I'd expect on npm's actual site, too.
I'm still trying to work out exactly how they got access. They didn't technically get a real 2FA code from the actual site, I don't believe. EDIT: Yeah they did, nevermind. It was a TOTP proxy attack, or whatever you'd call it.
Will post a post-mortem when everything is said and done.
Can't really tell you what not to do, but if you're not already using a password manager (so you can easily avoid phishing scams), I really recommend looking into it.
In the case of this attack, if you had a password manager and ended up on a domain that looks like the real one, but isn't, you'd notice something is amiss when your password manager cannot find any existing passwords for the current website, and then you'd take a really close look at the domain to confirm before moving forward.
man. anyone and everyone can get phished in a targeted attack. good luck on the cleanup and thanks for being forward about it.
want to stress this can happen to everyone. no one has perfect opsec or tradecraft as a 1-man show. it's simply not possible. only luck gets one through, and that often enough runs out.
Not your fault. Thanks for posting and being proactive about fixing the problem. It could happen to anyone.
And because it could happen to anyone that we should be doing a better job using AI models for defense. If ordinary people reading a link target URL can see it as suspicious, a model probably can too. We should be plumbing all our emails through privacy-preserving models to detect things like this. The old family of vulnerability scanners isn't working.
One of the most insidious parts of this malware's payload, which isn't getting enough attention, is how it chooses the replacement wallet address. It doesn't just pick one at random from its list.
It actually calculates the Levenshtein distance between the legitimate address and every address in its own list. It then selects the attacker's address that is visually most similar to the original one.
This is a brilliant piece of social engineering baked right into the code. It's designed to specifically defeat the common security habit of only checking the first and last few characters of an address before confirming a transaction.
We did a full deobfuscation of the payload and analyzed this specific function. Wrote up the details here for anyone interested: https://jdstaerk.substack.com/p/we-just-found-malicious-code...
Stay safe!
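To make the selection step concrete, here is a minimal sketch of the logic described above. The function and variable names are mine, not the deobfuscated originals:

    // Pick the attacker address that looks most like the victim's (illustrative sketch).
    function levenshtein(a, b) {
      const d = Array.from({ length: a.length + 1 }, (_, i) => [i]);
      for (let j = 1; j <= b.length; j++) d[0][j] = j;
      for (let i = 1; i <= a.length; i++) {
        for (let j = 1; j <= b.length; j++) {
          d[i][j] = Math.min(
            d[i - 1][j] + 1,                                  // deletion
            d[i][j - 1] + 1,                                  // insertion
            d[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
          );
        }
      }
      return d[a.length][b.length];
    }

    function closestAttackerAddress(victimAddress, attackerAddresses) {
      return attackerAddresses.reduce((best, candidate) =>
        levenshtein(victimAddress, candidate) < levenshtein(victimAddress, best)
          ? candidate
          : best
      );
    }

The closer the swapped-in address is to the original, the more likely a "check the first and last four characters" habit is to wave it through.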
I'm a little confused on one of the excerpts from your article.
> Our package-lock.json specified the stable version 1.3.2 or newer, so it installed the latest version 1.3.3
As far as I've always understood, the lockfile always specifies one single, locked version for each dependency, and even provides the URL to the tarball of that version. You can define "x version or newer" in the package.json file, but if it updates to a new patch version it's updating the lockfile with it. The npm docs suggest this is the case as well: https://arc.net/l/quote/cdigautx
And with that, packages usually shouldn't be getting updated in your CI pipeline.
Am I mistaken on how npm(/yarn/pnpm) lockfiles work?
Not the parent, but the default `npm install` / `yarn install` builds will ignore the lock file unless everything can be satisfied; if you want the lock file to be respected, you must use `npm ci` / `yarn install --frozen-lockfile`.
In my experience, it's common for CI pipelines to be misconfigured in this way, and for Node developers to misunderstand what the lock file is for.
As others have noted, npm install can/will change your lockfile as it installs, and one caveat for the clean-install command they provide is that it is SLOW, since it deletes the entire node_modules directory. Lots of people have complained but they have done nothing: https://github.com/npm/cli/issues/564
The npm team eventually seemed to settle on requiring someone to bring an RFC for this improvement, and the RFC someone did create has, I think, sat neglected in a corner ever since.
You’re right and the excerpt you quoted was poorly worded and confusing. A lockfile is designed to do exactly what you said.
The package.json pinned the range to ^1.3.2. If a newer version exists online that still satisfies the range in package.json (like 1.3.3 for ^1.3.2), npm install will often fetch that newer version and update your package-lock.json file automatically.
That’s how I understand it / that’s my current knowledge. Maybe there is someone here who can confirm/deny that. That would be great!
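For reference, the range semantics themselves are easy to check with the `semver` package (the same library npm uses for resolution); whether a given install re-resolves at all is the separate lockfile question above:

    // npm install semver
    const semver = require('semver');

    console.log(semver.satisfies('1.3.2', '^1.3.2')); // true:  what the lockfile had pinned
    console.log(semver.satisfies('1.3.3', '^1.3.2')); // true:  eligible on a fresh resolve
    console.log(semver.satisfies('2.0.0', '^1.3.2')); // false: major bump is excluded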
We should be displaying hashes in a color scheme determined by the hash (foreground/background colors for each character determined by a hash of the hash, salted by that character's index, adjusted to ensure sufficient contrast).
That way it's much harder to make one hash look like another.
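As a rough illustration of the idea in ANSI-256 terminal colors, with contrast handled crudely by pairing each background with black or white text (the mixing function is an arbitrary stand-in for "a hash of the hash, salted by the index"):

    // Render a hash with per-character colors derived from the hash itself (sketch).
    function colorizeHash(hash) {
      let out = '';
      for (let i = 0; i < hash.length; i++) {
        let h = i;                           // salt by character index
        for (const c of hash) h = (h * 31 + c.charCodeAt(0)) >>> 0;
        const bg = 16 + (h % 216);           // pick from the 6x6x6 ANSI color cube
        const cube = bg - 16;
        const r = Math.floor(cube / 36), g = Math.floor(cube / 6) % 6, b = cube % 6;
        const fg = r + g + b > 7 ? 16 : 231; // black on light, white on dark
        out += `\x1b[48;5;${bg}m\x1b[38;5;${fg}m${hash[i]}`;
      }
      return out + '\x1b[0m';
    }

    console.log(colorizeHash('a94a8fe5ccb19ba61c4c'));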
As someone with red/green vision deficiency: if you do this, please don’t forget people like me are unable to distinguish many shades of colours, which would be very disadvantageous here!
Not sure why you're being downvoted; OpenSSH implemented randomart, which gives you a little ascii "picture" of your key to make it easier for humans to validate. I have no idea if your scheme for producing keyart would work, but it sounds like it would make a color "barcode".
This is smart, but not really unusual.
> This is a brilliant piece of social engineering baked right into the code. It's designed to specifically defeat the common security habit ...
I don't agree that the exuberance over the brilliance of this attack is warranted if you give this a moment's thought. The web has been fighting lookalike attacks for decades. This is just a more dynamic version of the same.
To be honest, this whole post has the ring of AI writing, not careful analysis.
No it doesn't?
It has been what, hours? since the discovery? Are you expecting them to spend time analysing it instead of announcing it?
Also, nearly everyone has AI editing content these days. It doesn’t mean it wasn’t written by a human.
I'd have to think about whether using Levenshtein actually makes hexadecimal strings look more similar. Levenshtein might be useful for correcting typos, but not so much when comparing hashes (specifically the start or end sections of them). Kinda odd.
Again, this is not the failure of a single person. This is a failure of the software industry. Supply chain attacks have gigantic impacts. Yet these are all solved problems. Somebody has to just implement the standard security measures that prevents these compromises. We're software developers... we're the ones to implement them.
Every software packaging platform on the planet should already require code signing, artifact signing, user account attacker access detection heuristics, 2FA, etc. If they don't, it's not because they can't, it's because nobody has forced them to.
These attacks will not stop. With AI (and continuous proof that they work) they will now get worse. Mandate software building codes now.
For a package with thousands of downloads a week, does the publishing pace need to be so fast? New version could be uploaded to NPM, then perhaps a notification email to the maintainer saying it will go live on XX date and click here to cancel?
A standard release process for Linux distro packages is 1) submitting a new revision, 2) having it approved by a repository maintainer, 3) it cooks a while in unstable, 4) then in testing, and finally 5) is released as stable. So there's an approval process, a testing phase, and finally a release. And since it's impossible for people to upload a brand new package into a package repository without this process, typosquatting never happens.
Sadly, programming language package managers have normalized the idea that everyone who uses the package manager should be exposed to every random package and release from random strangers with no moderation. This would be unthinkable for a Linux distribution. (You can of course add 3rd-party Linux package repositories, unstable release branches, etc, which should enforce the same type of rules, but they don't have to)
Linux distros are still vulnerable to supply chain attacks though. It's very rare but it has happened. So regardless of the release process, you need all the other mitigations to secure the supply chain. And once they're set up it's all pretty automatic and easy (I use them all day at work).
A lot of these security measures have trade offs, particularly when we start looking at heuristics or attestation-like controls.
These can exclude a lot of common systems and software, including automations. If your heuristic is quite naive like "is using Linux" or "is using Firefox" or "has an IP not in the US" you run into huge issues. These sound stupid, because they are, but they're actually pretty common across a lot of software.
Similar thing with 2FA. Sms isn't very secure, email primes you to phishing, TOTP is good... but it needs to be open standard otherwise we're just doing the "exclude users" thing again. TOTP is still phishable, though. Only hardware attestation isn't, but that's a huge red flag and I don't think NPM could do that.
I have a hard time arguing that 2FA isn't a massive win in almost every circumstance. Having a "confirm that you have uploaded a new package" thing as the default seems good! Someone like npm mandating that a human being presses a button with a recaptcha for any package downloaded by more than X times per week just feels almost mandatory at this point.
The attacks are still possible, but they're not going to be nearly as easy here.
I don't disagree, but this sentence is doing a lot of heavy lifting. See also "draw the rest of the owl".
We are engineers. Much like an artist could draw the rest of the owl, it's not an unreasonable ask of a field that each day seems to grow more accustomed to learned helplessness.
Part of the owl can be how consumers upgrade: don't grab the latest patches immediately, but keep things up to date, using secondary sources of information about which versions are good to upgrade to and when. That allows time for vulns like this one to be discovered before you upgrade. The assumption is that people can detect vulns before the mass of people install, which I think is true. Then you just need exceptions for critical security fixes.
Interesting. According to https://www.wiz.io/blog/s1ngularity-supply-chain-attack the initial entry point was a "flawed GitHub Actions workflow that allowed code injection through unsanitized pull request titles" — which was detected and mitigated on August 29.
That was more than ten days ago, and yet major packages were compromised yesterday. How?
> Somebody has to just implement the standard security measures that prevents these compromises.
It's not that simple. You can implement the most stringent security measures, and ultimately a human error will compromise the system. A secure system doesn't exist because humans are the weakest link.
So while we can probably improve some of the processes within npm, phishing attacks like the ones used in this case will always be a vulnerability.
You're right that AI tools will make these attacks more common. That phishing email was indistinguishable from the real thing. But AI tools can also be used to scan and detect such sophisticated attacks. We can't expect to fight bad actors with superhuman tools at their disposal without using superhuman tools ourselves. Fighting fire with fire is the only reasonable strategy.
You’re right, this will only get a lot worse.
People focus on attacking windows because there are more windows users. What if I told you the world now has a lot more people involved in programming with JavaScript and Python?
NPM deserves some blame here, IMO. Countless third party intel feeds and security startups can apparently detect this malicious activity, yet NPM, the single source of truth for these packages, with access to literally every data event and security signal, can't seem to stop falling victim to this type of attack? It's practically willful ignorance at this point.
Why in the world would they NEED to stop? It apparently doesn't harm their "business".
NPM is owned by GitHub and therefore Microsoft, which is too busy putting Copilot into apps that have 0 reason to have any form of generative AI in them.
But Github does loads of things with security, including reporting compromised NPM packages. I didn't know NPM is owned by Microsoft these days though, now that I think about it, Microsoft of all parties should be right on top of this supply chain attack vector - they've been burned hard by security issues for decades, especially in the mid to late 90's, early 2000s as hundreds of millions of devices were connected to the internet, but their OS wasn't ready for it yet.
Dozens of businesses have been built to try fixing the npm security problem. There's clearly money in it, even if MS were to charge an access fee for security features.
An identical, highly obfuscated (and thus suspicious-looking) payload was inserted into 22+ packages from the same author (many dormant for a while) and published simultaneously.
What kind of crazy AI could possibly have noticed that on the NPM side?
This is frustrating as someone that has built/published apps and extensions to other software providers for years and must wait days or weeks for a release to be approved while it's scanned and analyzed.
For all the security wares that MS and GitHub sell, NPM has seen practically no investment over the years (e.g. just go review the NPM security page... oh, wait, where?).
I blame the prevalence of package managers in the first place. Never liked 'em, just for this reason. Things were fine before they became mainstream. Another annoying reason is package files that are set to grab the latest version, randomly breaking your environment. This isn't just npm of course; I hate them all equally.
> Things were fine before they became mainstream
As in, things were fine before we had commonplace tooling to fetch third party software?
> package files that are set to grab the latest version
The three primary Node.js package managers all create a lockfile by default.
You can run the following to check if you have the malware in your dependency tree:
`rg -u --max-columns=80 _0x112fa8`
Requires ripgrep:
`brew install ripgrep`
https://github.com/chalk/chalk/issues/656#issuecomment-32668...
I would agree if this were one of those `curl | sh` scenarios, but don't we consider things like `brew` to be sufficiently low-risk, akin to `apt`, `dnf`, and the like?
ripgrep is quite well known. It’s not some obscure tool. Brew is a well-established package manager.
(I get that the same can be said for npm and the packages in question, but I don’t really see how the context of the thread matters in this case).
A Windows/PowerShell equivalent:
`Get-ChildItem -Recurse | Select-String -Pattern '_0x112fa8' | ForEach-Object { $_.Line.Substring(0, [Math]::Min(80, $_.Line.Length)) }`
Breakdown of the command:
- Get-ChildItem -Recurse: retrieves all files in the current directory and its subdirectories.
- Select-String -Pattern '_0x112fa8': searches for the specified pattern in the files.
- ForEach-Object { ... }: processes each match found.
- Substring(0, [Math]::Min(80, $_.Line.Length)): limits the output to a maximum of 80 characters per line.
Hopefully this should work for Windows devs out there. If not, reply and I'll try to modify it.
If it produces no output, does that mean that there's no code that could act in the future?
I first acted out of nerves and deleted the whole node_modules and package-lock.json in a couple of freshly opened Astro projects; now I'm wondering whether I should still consider my web browsing potentially at risk.
You can also clear the package manager caches:
npm cache clean --force
pnpm cache delete
The malware introduced here is a crypto address swapper. It's possible that even after deleting node_modules that some malicious code could persist in a browser cache.
If you have crypto wallets on the potentially compromised machine, or intend to transfer crypto via some web client, proceed with caution.
https://gist.github.com/edgarpavlovsky/695b896445c19b6f66f14...
I've come to the conclusion that avoiding the npm registry is a great benefit. The alternative is to import packages directly from the (git) repository. Apart from being a major vector for supply-chain attacks like this one, it is also true that there is little or no coupling between the source of a project and its published code. The 'npm publish' step pushes local contents into the registry, meaning that a malefactor can easily make changes to the code before publishing.
As a C developer, having been told for a decade that minimising dependencies and vendoring stuff straight from release is obsolete and regressive, and now seeing people have the novel realisation that it's not, is so so surreal.
Although I'll still be told that using single-header libraries and avoiding the C standard library are regressive and obsolete, so gotta wait 10 more years I guess.
NPM dev gets hacked, packages compromised, it's detected within couple of hours.
XZ got hacked; it reached development versions of major distributions undetected, right inside _sshd_, and it only got detected because someone luckily noticed and investigated slow ssh connections.
Still some C devs will think it's a great time to come out and boast about their practices and tooling. :shrug:
This isn't part of the current discussion, but what is the appeal of single-header libraries?
Most times they actually are a normal .c/.h combo, but the implementation was moved into the "header" file and is simply only exposed by defining some macro. When it actually is a single file that can be included multiple times, there is still code in it, so it is only a header file in name.
What is the big deal in actually using the convention as intended and naming the file containing the code *.c? If it is intended to only be included, that can still be done.
> avoiding the C standard library are regressive and obsolete
I don't understand this either, since one half of libc is syscall wrappers and the other half is primitives which the compiler will use to replace your hand-rolled versions anyway. But this is not harming anyone, and picking a good "core" library will probably make your code more consistent and readable.
Yeah lol I’m making a C package manager for exactly this. No transitive dependencies, no binaries served. Just pulling source code, building, and being smart about avoiding rebuilds.
npm's recent provenance feature fixes this, and it's pretty easy to set up. It will seriously help prevent things like this from ever happening again, and I'm really glad that big packages are starting to use it.
> When a package in the npm registry has established provenance, it does not guarantee the package has no malicious code. Instead, npm provenance provides a verifiable link to the package's source code and build instructions, which developers can then audit and determine whether to trust it or not
You can do some weird verify thing on your GitHub builds now when they publish to npm, but I've noticed you can still publish from elsewhere even with it pegged to a build?
But maybe I'm misunderstanding the feature
Do you do this in your CI as well? E.g. if you have a server somewhere that most would run `npm install` on builds, you just `git clone` into your node_modules or what?
> The alternative is to import packages directly from the (git) repository.
That sounds great in theory. In practice, NPM is very, very buggy, and some of those bugs impact pulling deps from git repos. See my issue here: https://github.com/npm/cli/issues/8440
Here's the history behind that:
Projects with build steps were silently broken as late as 2020: https://github.com/npm/cli/issues/1865
Somehow no one thought to test this until 2020, and the entire NPM user base either didn't use the feature, or couldn't be arsed to raise the issue until 2020.
The problem gets kinda sorta fixed in late 2020: https://github.com/npm/pacote/issues/53
I say kinda sorta fixed, because somehow they only fixed (part of) the problem when installing a package from git non-globally -- `npm install -g whatever` is still completely broken. Again, somehow no one thought to test this, I guess. The issue I opened, which I mentioned at the very beginning of this comment, addresses this bug.
Now, I say "part of the problem" was fixed because the npm docs blatantly lie to you about how prepack scripts work, which requires a workaround (which, again, only helps when not installing globally -- that's still completely broken); from https://docs.npmjs.com/cli/v8/using-npm/scripts:
prepack
- Runs BEFORE a tarball is packed (on "npm pack", "npm publish", and when installing a git dependencies).
Yeah, no. That's a lie. The prepack script (which would normally be used for triggering a build, e.g. TypeScript compilation) does not run for dependencies pulled directly from git.
Speaking of TypeScript, the TypeScript compiler developers ran into this very problem, and have adopted this workaround, which is to invoke a script from the npm prepare script, which in turn does some janky checks to guess if the execution is occurring from a source tree fetched from git, and if so, then it explicitly invokes the prepack script, which then kicks off the compiler and such. This is the workaround they use today:
https://github.com/cspotcode/workaround-broken-npm-prepack-b...
... and while I'm mentioning bugs, even that has a nasty bug: https://github.com/cspotcode/workaround-broken-npm-prepack-b...
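The general shape of that workaround is roughly the following. This is a hypothetical sketch, not the actual script from that repo, and its detection heuristic is an assumption; note the real one apparently swallows the build's exit code, per the caveat below:

    // prepare.js: npm does run "prepare" for git dependencies, so use it to
    // detect a from-source install and manually trigger the missing prepack step.
    const { existsSync } = require('node:fs');
    const { execSync } = require('node:child_process');

    // Crude heuristic: a published tarball ships build output; a git checkout doesn't.
    if (!existsSync('./dist')) {
      // execSync throws on non-zero exit, so a failed build aborts the install
      // here instead of silently installing a broken package.
      execSync('npm run prepack', { stdio: 'inherit' });
    }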
Yes, if the workaround calls `npm run prepack` and the prepack script fails for some reason (e.g. a compiler error), the exit code is not propagated, so `npm install` will silently install the respective git dependency in a broken state.
How no one looks at this and comes to the conclusion that NPM is in need of better stewardship, or ought to be entirely supplanted by a competing package manager, I dunno.
[1] https://www.debian.org/doc/manuals/securing-debian-manual/de...
After all these incidents, I still can't understand why package registries don't require cryptographic signatures on every package. It introduces a bit more friction (developers downloading CI artifacts and manually signing and uploading them), but it prevents most security incidents. Of course, this can fail if it's automated by some CI/CD system, as those are apparently easily compromised.
In all fairness—npm belongs to GitHub, which belongs to Microsoft. Amateur-hour is both not a valid excuse anymore, and also a boring explanation. GitHub is going to great lengths to enable SLSA attestations for secure tool chains; there must be systemic issues in the JS ecosystem that make an implementation of proper attestations infeasible right now, everything else wouldn't really make sense.
So if we're discussing anything here, why not what this reason is, instead of everyone praising their favourite package registry?
It sure hasn’t been forbidden in any enterprise I’ve been in! And they, in my experience, have it even worse because they never bother to update dependencies. Every install has lots of npm warnings.
Mmm. But how does the package registry know which signing keys to trust from you? You can't just log in and upload a signing key because that means that anyone who stole your 2FA will log in and upload their own signing key, and then sign their payload with that.
I guess having some cool down period after some strange profile activity (e.g. you've suddenly logged in from China instead of Germany) before you're allowed to add another signing key would help, but other than that?
Supporting Passkeys would improve things; not allowing releases for a grace period after adding new signing keys and sending notifications about this to all known means of contact would improve them some more. Ultimately, there will always be ways; this is as much a people problem as it is a technical one.
I suppose you'd register your keys when signing up and to change them, you'd have some recovery passphrase, kind of like how 2FA recovery codes work. If somebody can phish _that_, congratulations.
That still requires stealing your 2FA again. In this attack they compromised a one-time authenticator code, they'd have to do it a second time in a row, and the user would be looking at a legitimate "new signing key added" email alongside it.
> developers downloading CI artifacts and manually signing and uploading them
Hell no. CI needs to be a clean environment, without any human hands in the loop.
Publishing to public registries should require a chain of signatures. CI should refuse to build artifacts from unsigned commits, and CI should attach an additional signature attesting that it built the final artifact based on the original signed commit. Public registries should confirm both the signature on the commit and the signature on the artifact before publishing. Developers without mature CI can optionally use the same signature for both the source commit and the artifact (i.e. to attest to artifacts they built on their laptop). Changes to signatures should require at least 24 hours to apply and longer (72 hours) for highly popular foundation packages.
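As a sketch of the registry-side check that chain implies, using Ed25519 detached signatures via node:crypto; the file names and key distribution are hypothetical assumptions:

    // Verify both links of the chain before accepting a publish (sketch).
    const crypto = require('node:crypto');
    const fs = require('node:fs');

    function verifyDetached(dataPath, sigPath, pubKeyPath) {
      const pub = crypto.createPublicKey(fs.readFileSync(pubKeyPath));
      // Ed25519 takes a null digest algorithm in node:crypto.
      return crypto.verify(null, fs.readFileSync(dataPath), pub, fs.readFileSync(sigPath));
    }

    const commitOk   = verifyDetached('commit.bundle', 'commit.sig', 'developer.pub'); // signed commit
    const artifactOk = verifyDetached('package.tgz', 'package.sig', 'ci.pub');         // CI attestation

    if (!(commitOk && artifactOk)) {
      throw new Error('refusing to publish: signature chain did not verify');
    }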
I'm a fan of post-facto confirmation. Allow CI/CD to do the upload automatically, and then have a web flow that confirms the release. Release doesn't go out unless the button is pressed.
It removes _most_ of the release friction while still adding the "human has acknowledged the release" bit.
Yeah I know "everyone can be pwned" etc. but at this point if you are not using a password manager and still entering passwords on random websites whose domains don't match the official one then you have no business doing anything of value on the internet.
This is true, but I've also run into legitimate password fields on different domains. Multiple times. The absolute worst offender is mobile app vs browser.
Why does the mobile app use a completely different domain? Who designed this thing?
Yeah, a password manager/autofill would have set off some alarms and likely prevented this, because the browser autofill would have detected a mismatch for the domain npmjs.help.
I get the sentiment behind 'just use a password manager', but I don’t think victim-blaming should be the first reflex. Anyone can be targeted, and anyone can fail, even people who do 'everything right'.
Password managers themselves have had vulnerabilities, browser autofill can fail, and phishing can bypass even well-trained users if the attack is convincing enough.
Good hygiene (password managers, MFA, domain awareness) certainly reduces risk, but it doesn’t eliminate it. Framing security only as a matter of 'individual responsibility' ignores that attackers adapt, and that humans are not perfect computers. A healthier approach would be: encourage best practices, but also design systems that are resilient when users inevitably make mistakes.
Thinking you're above getting pwned is often step one :)
It's not easy to be 100% vigilant 100% of the time against attacks deliberately crafted to be fallen for. All it takes is a single well-crafted attack that strikes when you're tired, and you're done.
Numbers game. Plenty of people got the email and deleted it. Only takes one person distracted and thinking "oh yeah my 2FA is pretty old" for them to get pwned.