The nx supply chain attack via npm was the bullet many companies did not dodge. I mean, all you needed was to have the VS Code nx plugin installed, which always checked npm for the latest published nx version. And if you had a local GitHub session (e.g. logged into your company’s account via the GH CLI), or some important creds in a .env file… those were exfiltrated.
This happened even if you had pinned dependencies and were on top of security updates.
For this reason, I avoid anything to do with npm except the TypeScript compiler, and I'm looking forward to its rewrite in Go, at which point I can remove even that.
As a comparison, Go uses minimum version selection, and it takes great pains to never execute anything you download, even during the compilation stage.
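For illustration, a go.mod records minimum versions, and Go's minimal version selection never silently resolves past what the build actually requires (the module path and version below are hypothetical):

```
module example.com/app

go 1.22

// MVS resolves each dependency to the *minimum* version satisfying all
// requirements; `go build` never executes code from downloaded modules.
require github.com/some/dep v1.4.0
```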
npm packages will often contain different source than the GitHub repo. How does anyone even trust the system?
Before we all conclude that supply chain attacks only happen on NPM, last time I used VS Code I discovered that it auto-installed, with no apparent opt-out, Python typing stubs for any package (e.g., Django in my case) from whatever third-party, unofficial PyPI accounts it saw fit. (Yes, this is why it was the last time I used VS Code.)
The obscurity of languages other than JavaScript will only work as a security measure for so long.
It's already solved by pnpm, which refuses to execute any postinstall scripts except those you whitelist manually. In most projects I don't enable any and everything works fine; in the worst case I had to enable two scripts (out of two dozen or so) that download prebuilt native components, and even those aren't really necessary, since the same problem can be solved through other means (as proven by typescript-go, swc, and other projects led by competent maintainers).
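For reference, that allowlist is a few lines of package.json (pnpm v10+ blocks install scripts by default; the package names below are just examples of deps that ship native components):

```json
{
  "pnpm": {
    "onlyBuiltDependencies": ["esbuild", "sharp"]
  }
}
```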
None of it will help you when you're executing the binaries you built, regardless of which language they were written in.
I am now using Node's built-in type stripping to run TypeScript natively. It’s great and so fast. Even so, I still include the TypeScript compiler in my projects so that I can run tsc with --noEmit purely for type checking.
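Concretely, that workflow is just two scripts (a sketch; `src/main.ts` is a placeholder, and on Node 22.6–22.x the flag is required, while newer Node versions strip types without it):

```json
{
  "scripts": {
    "start": "node --experimental-strip-types src/main.ts",
    "typecheck": "tsc --noEmit"
  }
}
```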
You are lying to yourself. In this attack, nothing was executed by npm; it "just" replaced some global functions. A Go package can't do that, but you can definitely execute malware at runtime anyway. It can also expose new imports that will be imported by mistake when using an IDE.
I have seen so many takes lamenting how this kind of supply chain attack is such a difficult problem to fix.
No, it really isn't. It's an ecosystem and cultural problem: npm encourages huge dependency trees that make it impractical to review dependency updates, so developers just don't.
Honestly, the same is true in a lot of other areas of computing.
Whenever you download an open-source program and you don't have to compile it first, you're at risk of running code that is not necessarily what's in the publicly-available source code.
This can even apply to source code itself when distributed through two different channels, as we saw in the xz backdoor attempt. (The release tarball contained different code to the repository.)
Yeah, editor extensions are both auto-updated and installed in high-risk dev environments. Quite a juicy target, and I am surprised we haven’t seen large-scale purchases by bad actors similar to browser extensions yet. That said, I remember reading that the VS Code team puts a lot of effort into catching malware. But do all editors with auto-updates, such as Sublime, have such checks?
I checked has-ansi. What's the reason this library would exist and be popular? Most of the work is done by the library it imports, ansi-regex, and then it just returns ansi-regex.test(string). Yet it has 5% of the weekly downloads of ansi-regex, and ansi-regex itself has fewer than 10 lines of code.
I don't know anything about the npm ecosystem; what's the benefit of importing these libraries compared to including the code in the project directly?
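To make the comparison concrete, here's a simplified sketch of what the pair of packages amounts to (this regex only covers SGR color/style sequences; the real ansi-regex matches the full range of ANSI escapes):

```javascript
// What ansi-regex essentially exports: a pattern matching ANSI escapes
// (simplified here to SGR sequences like "\x1B[31m").
const ansiPattern = /\x1B\[[0-9;]*m/;

// What has-ansi essentially exports: a one-line wrapper around it.
function hasAnsi(string) {
  return ansiPattern.test(string);
}

console.log(hasAnsi('\x1B[31mred text\x1B[0m')); // true
console.log(hasAnsi('plain text'));              // false
```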
The VS Code ecosystem has too much complexity for my tastes. I do keep a copy around with a few code formatting plugins installed but I feel more comfortable with Emacs (or Vim for my friends who are on that side of the fence).
I am a consumer of apps using npm, not a developer, and I simply don’t like the auto updates and seeing a zillion things updated. I use uv and Python a lot, and I get a similar uneasy feeling there also, but (perhaps incorrectly) I feel more in control.
All docs are local too, like we used to do with man pages and paper reference books? Or do you use another system for them: a second computer, a tablet, a phone?
Seriously, this is one of my key survival mechanisms. By the time I became system administrator for a small services company, I had learned to let other people beta test things. We ran Microsoft Office 2000 for 12 years, and saved soooo many upgrade headaches. We had a decade without the need to retrain.
That, and like others have said... never clicking links in emails.
This is how I feel about my Honda, and to some extent, Kubernetes. In the former case I kept a 2006 model in good order for so long that I skipped at least two (automobile) generations' worth of car-to-phone teething problems, and after years of hearing people complain about their woes I've found the experience of connecting my iPhone to my '23 car pretty hassle-free.
In the latter, I am finally moving a bunch of workloads out of EC2 after years of nudging from my higher-ups and, while it's still far from a simple matter I feel like the managed solutions in EKS and GKE have matured and greatly lessen the pain of migrating to K8S. I can only imagine what I would have gotten bogged down with had I promptly acted on my bosses' suggestion to do this six or seven years ago. (I also feel very lucky that the people I work for let me move on these things in my own due time.)
In the meantime you went years without being able to connect your iPhone, so you completely didn't have that feature!
There are pros and cons everywhere, but I'm more prone to changing often and fixing things than waiting for a feature to be stable and doing without it in the meantime.
Of course, when I can afford it, e.g. not in changing my car every two years :')
At $PAST_DAYJOB we adopted Docker "only" around 2016, and importantly, we used it almost identically to how we used to deploy "plain" uWSGI or Apache apps: a bunch of VMs, run some Ansible roles, pull the code (now an image), restart, done.
The time to move to k8s is when you have a k8s-sized problem. [Looks at Github: 760 releases, 3866 contributors.] Yeah, not now.
Sorry, the "npm ecosystem" command has been deprecated. You can instead use npm environment (or npm under-your-keyboard because we helpfully decided it should autocorrect and be an alias)
Is there some sort of easy operational way to do this? There are well known tech companies that do this internally but afaik this isn't a feature of OSS registries like verdaccio
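Not exactly turnkey, but a self-hosted Verdaccio proxy gets you part of the way: you control the uplink and can publish only vetted versions into an internal scope. A minimal config.yaml sketch (the scope name is made up, and the actual hold-and-vet step would still be your own process on top):

```yaml
uplinks:
  npmjs:
    url: https://registry.npmjs.org/

packages:
  '@mycompany/*':
    # Internal scope: never proxied to the public registry.
    access: $authenticated
    publish: $authenticated
  '**':
    # Everything else is read-through from npmjs; publishing requires auth.
    access: $all
    publish: $authenticated
    proxy: npmjs
```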
I ran Office XP on my desktop and 2000 on my laptop until I got to college and _needed_ to upgrade so I could do work with others. Block it with the firewall and you're good. Now I mostly use WordPad, plus a recent (but rarely updated) version of OpenOffice on the rare occasions I actually need an office suite or spreadsheet.
If you're worried about vulnerabilities in older software these days, Windows has built-in security features that can help with that, from the sandbox to controlled folder access (intended for ransomware protection, I believe; I use it to prevent my media server from modifying tags).
I find it insane that someone would get access to a package like this, then just push a shitty crypto stealer.
You're a criminal with a one-in-a-million opportunity. Wouldn't you invest an extra week pushing a more fleshed-out exploit?
You can exfiltrate API keys, add your SSH public key to the server, then exfiltrate the server's IP address so you can snoop in there manually; if you're on a dev's machine, maybe grab the browser's profiles, the session tokens for common shopping websites? My personal desktop has all my cards saved on Amazon. My work laptop, depending on the period of my life, you could have had access to stuff you wouldn't believe either.
You don't even need to do anything with those; there are forums to sell that stuff.
Surely there's an explanation, or is it that all the good cybercriminals have stable high paying jobs in tech, and this is what's left for us?
> You're a criminal with a one-in-a-million opportunity. Wouldn't you invest an extra week pushing a more fleshed-out exploit?
Because the way this was pulled off, it was going to be found out right away. It wasn't a subtle insertion, it was a complete account takeover. The attacker had only hours before discovery, so the logical thing to do is a hit and run. They asked what is the most money that can be extracted in just a few hours in an automated fashion (no time to investigate targets manually one at a time) and crypto is the obvious answer.
Unless the back doors were so good they weren't going to be discovered even though half the world would be dissecting the attack code, there was no point in even trying.
"found out right away"... by people with time to review security bulletins. There's loads of places I could see this slipping through the cracks for months.
Is that so? From the email it looks like they MITM'd the 2FA setup process, so they would have had qix's 2FA secret. They didn't have to immediately take over qix's account and lock him out; they should have had all the time they needed to come up with a more sophisticated payload.
> They asked what is the most money that can be extracted in just a few hours in an automated fashion (no time to investigate targets manually one at a time) and crypto is the obvious answer.
A decade ago my root/123456 SSH password got pwned in 3-4 days. (I was going to switch to certificates!)
Hetzner alerted me saying that I filled my entire 1TB/mo download quota.
Apparently, the attacker (automation?) took over and used it to scrape alibaba, or did something with their cloud on port 443. It took a few hours to eat up every last byte. It felt like this was part of a huge operation. They also left a non-functional crypto miner in there that I simply couldn't remove.
So while they could cryptolock, they just used it for something insidious and left it alone.
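For anyone following along, the two standard OpenSSH sshd_config lines that prevent exactly this:

```
# /etc/ssh/sshd_config
PermitRootLogin prohibit-password   # root can only log in with keys
PasswordAuthentication no           # disable password auth entirely
```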
To be fair, this wasn't a super demanding 0-day attack, it was a slightly targeted email phish. Maybe the attacker isn't that sophisticated and just went with what is familiar?
Stolen cryptocurrency is a sure thing because fraudulent transactions can't be halted, reversed, or otherwise recovered. Things like a random dev's API and SSH keys are close to worthless unless you get extremely lucky, and even then you have to find some way to sell or otherwise make money from those credentials, the proceeds of which will certainly be denominated in cryptocurrency anyway.
Agreed. I think we're all relieved at the harm that wasn't caused by this, but the attacker was almost certainly more motivated by profit than harm. Having a bunch of credentials stolen en masse would be a pain in the butt for the rest of us, but from the attacker's perspective your SSH key is just more work and opsec risk compared to a clean crypto theft.
Putting it another way: if I'm a random small-time burglar who happens to find himself in Walter White's vault, I'm stuffing as much cash as I can fit into my bag and ignoring the barrel of methylamine.
Ultimately, stolen cryptocurrency doesn't cause real world damage for real people, it just causes a bad day for people who gamble on questionable speculative investments.
The damage from this hack could have been far worse if it was stealing real money people rely on to feed their kids.
Get in, steal a couple hundred grand, get out, do the exact same thing a few months later. Repeat a few times and you can live worry-free until retirement if you know how to evade the cops.
Even if you steal other stuff, you're going to need to turn it all into cryptocurrency anyway, and how much is an AWS key really going to bring in?
There are criminals that focus on extracting passwords and password manager databases as well, though they often also end up going after cryptocurrency websites.
There are probably criminals out there biding their time, waiting for the perfect moment to strike, silently infiltrating companies through carefully picked dependencies, but those don't get caught as easily as the ones draining cryptocurrency wallets.
The pushed payload didn't generate any new traffic. It merely replaced the recipient of a crypto transaction with a different address. It would have been really hard to detect. Exfiltrating API keys would have been picked up a lot faster.
OTOH, this modus operandi is completely inconsistent with the way they published the injected code: by taking over a developer's account. This was going to be noticed quickly.
If the payload had been injected in a more subtle way, it might have taken a long time to figure out. Especially with all the Levenshtein logic that might convince a victim they'd somehow screwed up.
Not only that, but it picked an address from a list with similar starting/ending characters, so if you only checked part of the wallet address, you'd still get exploited.
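A rough sketch of that trick (the addresses and the scoring here are illustrative, not the actual payload's logic, which reportedly used Levenshtein distance):

```javascript
// Given the victim's intended address, pick the attacker-controlled address
// that matches it on the longest prefix and suffix -- the characters people
// actually eyeball when they "check" a wallet address.
function sharedEnds(a, b) {
  let prefix = 0;
  while (prefix < Math.min(a.length, b.length) && a[prefix] === b[prefix]) prefix++;
  let suffix = 0;
  while (
    suffix < Math.min(a.length, b.length) - prefix &&
    a[a.length - 1 - suffix] === b[b.length - 1 - suffix]
  ) suffix++;
  return prefix + suffix;
}

function pickLookalike(intended, attackerAddresses) {
  return attackerAddresses.reduce((best, candidate) =>
    sharedEnds(intended, candidate) > sharedEnds(intended, best) ? candidate : best
  );
}

const swapped = pickLookalike('0xab12cd34ef56', ['0xab99xxxxxf56', '0x1234567890ab']);
console.log(swapped); // '0xab99xxxxxf56' -- same start and end as the target
```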
It is not a one-in-a-million opportunity though. I hate to take this to the next level, but as criminal elements wake up to the fact that a few "geeks" can possibly get them access to millions of dollars expect much worse to come. As a maintainer of any code that could gain bad guys access, I would be seriously considering how well my physical identity is hidden on-line.
I just made a very similar comment. Spot on. It's laughable to think that this trivial opportunity, which literally any developer could pull off with a couple of thousand dollars, is one-in-a-million. North Korea probably has enough money to buy up a significant percentage of all popular npm dependencies, and most people would sell willingly and unwittingly.
In the case of North Korea, it's really crazy because hackers over there can do this legally in their own country, with the support of their government!
You give an example of an incredibly targeted attack of snooping around manually on someone's machine so you can exfiltrate yet more sensitive information like credit card numbers (how, and then what?)
But (1) how do you do that with hundreds or thousands of SSH/API keys and (2) how do you actually make money from it?
So you get a list of SSH or specific API keys and then write a crawler that can hopefully gather more secrets from them, like credit card details (how would that work btw?) and then what, you google "how to sell credentials" and register on some forum to broker a deal like they do in movies?
Sure sounds a hell of a lot more complicated and precarious than swapping out crypto addresses in flight.
API/SSH keys can easily be swapped, it's more hassle than it's worth. Be glad they didn't choose to spread the payload of one of the 100 ransomware groups with affiliate programs.
> My work laptop, depending on the period of my life, you could have had access to stuff you wouldn't believe either.
What gets me is everyone acknowledges this, yet HN is full of comments ripping on IT teams for the restrictions & EDR put in place on dev laptops.
We on the ops side have known about these risks for years, and that knowledge is what drives organizational security policies and endpoint configuration.
Security is hard, and it is very inconvenient, but it's increasingly necessary.
I think people rip on EDR and security when 1. it hasn’t been explained to them why it does what it does, or 2. it is process for process's sake.
To wit: I have an open ticket right now from an automated code review tool that flagged a potential vulnerability. I and two other seniors have confirmed that it is a false alarm so I asked for permission to ignore it by clicking the ignore button in a separate security ticket. They asked for more details to be added to the ticket, except I don’t have permissions to view the ticket. I need to submit another ticket to get permission to view the original ticket to confirm that no less than three senior developers have validated this as a false alarm, which is information that is already on another ticket. This non-issue has been going on for months at this point. The ops person who has asked me to provide more info won’t accept a written explanation via Teams, it has to be added to the ticket.
Stakeholders will quickly treat your entire security system like a waste of time and resources when they can plainly see that many parts of it are a waste of time and resources.
The objection isn’t against security. It is against security theater.
At least at $employer a good portion of those systems are intended to stop attacks on management and the average office worker. The process is not geared towards securing dev(arbitrary code execution)-ops(infra creds).
They're not even handing out hardware security keys for admin accounts. I use my own, some other devs just use TOTP authenticator apps on their private phones.
All their EDR crud runs on Windows. As a dev I'm allowed to run WSL, but the tools do not reach inside it, so if that gets compromised they would be none the wiser.
There is some instrumentation for linux servers and cloud machines, but that too is full of blind spots.
And as a sibling comment says, a lot of the policies are executed without anyone being able to explain their purpose, being able to grant "functionally equivalent security" exceptions or them even making sense in certain contexts.
It feels like dealing with mindless automatons, even though humans are involved. For example a thing that happened a while ago: We were using scrypt as KDF, but their scanning flagged it as unknown password encryption and insisted that we should use SHA2 as a modern, secure hashing function. Weeks of long email threads, escalation and several managers suggesting "just change it to satisfy them" followed. That's a clear example of mindless rule-following making a system less secure.
Blocking remote desktop forwarding of security keys also is a fun one.
Maybe their goal was just surviving, not getting rich.
Also, you underestimate how trivial this 'one-in-a-million opportunity' is; it's definitely not one-in-a-million! Almost anybody with basic coding ability and a few thousand dollars could pull off this hack. There are thousands of libraries with millions of downloads that are essentially worthless to their maintainers, who are basically broke and barely use their npm accounts anymore. Anybody could just buy those npm accounts under false pretenses for a couple of thousand and then do whatever they want with tens of thousands (or even hundreds of thousands) of compromised servers. The library author is legally within their rights to sell their digital assets, and it's not their business what the acquirer does with them.
> find it insane that someone would get access to a package like this, then just push a shitty crypto stealer
Consumer financial fraud is quite big and relatively harmless. Industrial espionage, OTOH, can potentially put you in the crosshairs of powerful and/or rogue elements, and so only the big actors get involved, but in a targeted way, preferring not to leave much if any trace of compromise.
I fell for this malware once. I had the malware on my laptop even with MB running in the background. I copy-pasted an address and didn't even check it. My bad indeed. Those guys make a lot of money from these "one shot" moments.
What makes you so sure that the exploit is over? Maybe they wanted their secondary exploit to get caught to give everyone a sense of security? Their primary exploit might still be lurking somewhere in the code?
Maybe one in a million is hyperbolic but that’s sorta the game with these attacks isn’t it? Registering thousands upon thousands of domains + tens of thousands of emails until you catch something from the proverbial pond.
That post fails to address the main issue: it's not that we don't have time to vet dependencies, it's that Node.js's security and default package model is absurd, and how we use it even more so. Even most Deno posts I see use “allow all” out of laziness, which I assume will be copy-pasted by everyone, because it's a major UX pain to get to the right minimal permissions. The only programming model I am aware of that makes it painful enough to use a dependency, encourages hard pinning and vetted dependency distribution, and forces explicit minimal capability-based permission setup is Cloudflare's workerd. You can even set it up to have workers (without changing their code) run fully isolated from the network and only communicate via a policy evaluator for ingress and egress. It is Apache-licensed, so it is beyond me why this is not the default for use-cases it fits.
Another main issue is how large (deep and wide) this "supply chain" is in some communities. JavaScript and Python are notable for their giant reliance on libs.
If I compare a typical Rust project with an equivalent JavaScript one, the JavaScript project often has orders of magnitude more direct dependencies (a wide supply chain?). The Rust tool will have three or four; the JavaScript one over ten, sometimes ten alone just to build the TypeScript in dev. This is worsened by the JavaScript dependencies' own deps (and theirs, and theirs, all the way down to is-array or left-pad), easily getting into the hundreds. In Rust, that graph will list maybe ten more, or, with some complex libraries, a total of several tens.
This attitude difference is also clear in the Python community, where the knee-jerk reaction is to think it through rather than add an import, maybe copy-paste a file, and in any case be very conservative. Do we really need colors in the terminal output? We do? Can we not just create a file with some constants that hold the four ANSI escape codes instead?
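In code, the suggested constants file really is this small (JavaScript here, to match the npm thread; the values are standard ANSI SGR codes):

```javascript
// colors.js -- the whole "dependency": four ANSI escape constants.
const RESET = '\x1B[0m';
const RED = '\x1B[31m';
const GREEN = '\x1B[32m';
const YELLOW = '\x1B[33m';

// Tiny helpers, if you even want them.
const red = (s) => `${RED}${s}${RESET}`;
const green = (s) => `${GREEN}${s}${RESET}`;
const yellow = (s) => `${YELLOW}${s}${RESET}`;

console.log(red('error: something broke'));
console.log(green('ok'));
```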
I'm trying to argue that there's also an important cultural problem with supply chain attacks to be considered.
> [...] python notable for their giant reliance on libs.
I object. You can get a full-blown web app rolling with Django alone. Here's its list of external dependencies, including transitive ones: asgiref, sqlparse, tzdata. (I guess you can also count jQuery, if you're using the _built-in_ admin interface.)
The standard library is slowly swallowing the most important libraries and tools in the ecosystem, such as json or venv. What was once a giant yield-hack to get green threads/async is now part of the language. The language itself is conservative about which new features it accepts; 20-year-old Python code still reads like Python.
Sure, I've worked on a Django codebase with 130 transitive dependencies. But it's seven years old and powers an entire business. A "hello world" app in Express has 150; for Vue it's 550.
> If I compare a typical Rust project with an equivalent JavaScript one, the JavaScript project often has orders of magnitude more direct dependencies (a wide supply chain?).
This has more to do with the popularity of a language than anything else, I think. Though the fact that Python and JS are used as "entry level" languages probably encourages some of these "lazy" libraries cough cough left-pad cough cough.
I know this isn't really possible for smaller guys, but larger players (like npm) really should buy up all the TLD versions of "npm" (that is: npm.io, npm.sh, npm.help, etc). One of the reasons this was so effective is that the attacker managed to snap up "npmjs.help".
Then you have companies like AWS, who were sending invoices from `no-reply-aws@amazon.com` but last month changed it to `no-reply@tax-and-invoicing.us-east-1.amazonaws.com`.
That looks like a phishing attempt from someone using a random EC2 instance or something, but apparently it's legit. I think. Even the "heads-up" email they sent beforehand looked like phishing, so I was waiting for the actual invoice to see if they really started using that address, but even now I'm not opening these attached PDFs.
These companies tell customers to be suspicious of phishing attempts, and then they pull these stunts.
> These companies tell customers to be suspicious of phishing attempts, and then they pull these stunts.
Yep. At every BigCo I've worked at, nearly all of the emails from Corporate have been indistinguishable from phishing. Sometimes, they're actual spam!
Do the executives and directors responsible for sending these messages care? No. They never do, and get super defensive and self-righteous when you show them exactly how their precious emails tick every "This message is phishing!" box in the mandatory annual phishing-detection-and-resistance training.
There are like 1,500 TLDs; some of them are restricted or country-code TLDs, but it makes me wonder how much it would actually cost per year to maintain registration of every non-restricted TLD. I'm sure there's some SaaS company that'll do it.
OTOH, doesn't ICANN already sometimes restrict who has access to a given TLD? Would it really be that crazy for them to say "maybe we shouldn't let registrars sell npm.<TLD> regardless of the TLD", and likewise for a couple dozen of the most obvious targets (google., amazon., etc.)? No one needs to pay for these domains if no one is selling them in the first place. I don't love the idea of special treatment for giant companies in terms of domains, but we're already kind of there with the process that let companies compete for exclusive access to TLDs in the first place, so we might as well use that process for something actually useful (unlike, say, letting companies apply for exclusive ownership of ".music" and needing a whole legal process to determine that maybe that isn't actually beneficial for the internet as a whole: https://en.wikipedia.org/wiki/.music)
I agree that larger players especially should be proactive and register their name under all similar-sounding TLDs to mitigate such phishing attacks, but the attacks can't be outright prevented this way.
That seems like a bad idea compared to just having a canonical domain - people might become used to seeing "npm.<whatever>" and assuming it is legit. And then all it takes is one new TLD where NPM is a little late registering for someone to do something nefarious with the domain.
Just because you buy them doesn't mean that you have to use them. Squatting on them is no more harmful (except financially) than leaving them available for potentially hostile 3rd parties.
This won't work - npm.* npmjs.* npmjs-help.* npm-help.* node.* js.* npmpackage.*. The list is endless.
You can't protect against people clicking links in emails in this way. You might say `npmjs-help.ph` is a phishy domain, but npmjs.help is a phishy domain and people clicked it anyway.
First thing I do is check any domain that I don't recognize as official.
Domain: NPMJS.HELP (85 similar domains)
Registrar: Porkbun, LLC (4.84 million domains)
Query Time: 8 Sep 2025 - 4:14 PM UTC [1 day back]
Registered: 5th September 2025 [4 days back]
Expiry: 5th September 2026 [11 months, 25 days left]
I'd be suspicious of anything registered with Porkbun, a discount registrar. Registered 4 days ago means it's fake.
> It sets a deadline a few days in the future. This creates a sense of urgency, and when you combine urgency with being rushed by life, you are much more likely to fall for the phishing link.
Any time I feel like I'm being rushed, I check deeper. It would help if everyone's official communications only came from the most well known domain (or subdomain).
I don't think that particular measure would help, but npm are the people who brought us the left-pad crisis, and their Wikipedia page mentions a long string of security failures. Given this, it seems likely their attitude is "we don't care, we don't have to", and their relative success as the world's largest package manager seems to echo that (not that I have any idea whether they make any money).
As the post mentions wallets like MetaMask being the targets, AFAIK MetaMask in particular might be one of the best protected (isolated) applications from this kind of attack due to their use of LavaMoat https://x.com/MetaMask/status/1965147403713196304 -- though I'd love to read a detailed analysis of whether they actually are protected. No affiliation with MetaMask, just curious about effectiveness of seemingly little adopted measures (relative to scariness of attacks).
> If you were targeted with such a phishing attack, you'd fall for it too and it's a matter of when not if. Anyone who claims they wouldn't is wrong.
I like to think I wouldn't. I don't put credentials into links from emails that I didn't trigger right then (e.g. password reset emails). That's a security skill everyone should be practicing in 2025.
Yeah, I feel that bit is just wrong, in three ways for me:
1. Like you, I never put credentials into links from emails that I didn’t trigger/wasn’t expecting. This is a generally sensible practice.
2. Updating 2FA credentials is nonsense. I don’t expect everyone to know this, this is the weakest of the three.
3. If my credentials don’t autofill due to origin mismatch, I am not filling it manually. Ever. I would instead, if I thought it genuine, go to their actual site and log in there, and then see nothing about what the phish claimed. I’ve heard people talking about companies using multiple origins for their login forms and how having to deal with that undermines this aspect, but for myself I don’t believe I’ve ever seen that, not even once. It’s definitely not common, and origin-locked second factors should make that practice disappear altogether.
Now these three are not of equal strength. The second requires specific knowledge, and a phish could conceivably use something similar that isn’t such nonsense anyway. The first is a best practice that seems to require some discipline, so although everyone should do it, it is unfortunately not the strongest. But the third? When you’re using a password manager with autofill, that one should be absolutely robust. It protects you! You have to go out of your way to get phished!
> 2. Updating 2FA credentials is nonsense. I don’t expect everyone to know this, this is the weakest of the three.
The problem with this is that companies often send out legit emails saying things like "update your 2FA recovery methods". Most people don't know well enough how 2FA works to spot the difference.
"'Such' a phishing attack" makes it sound like a sophisticated, in-depth attack, when in reality it's a developer yet again falling for a phishing email that even Sally from finance wouldn't fall for. And although anyone can make mistakes, there is such a thing as a negligent, amateur mistake. It's astonishing to me.
Every time I bite my tongue (literally, not figuratively) it's also astonishing to me. The last time was probably 3 years ago, and probably 10 years before that for the time before. Would it be fair to call me a negligent eater? Have you ever tripped over nothing while walking? Humans are fallible, and unless you are in an environment where the productivity loss of a rigorous checklist-and-routine system makes sense, these mistakes happen.
It would be just as easy to argue that anyone is negligent who uses software without confirming that its security certifications include whatever processes you imagine prevent a "human makes one mistake and continues with their normal workflow" error, or without holding updates until they've been evaluated.
Yes, that was a bit defeatist about phishing and tolerant of poor security. Anyone employing the "hang up, look up, call back" technique would be safe. It sounds like the author doesn't even know that technique and avoids phishing by using intuition.
I've had emails like that from various places, probably legitimate, but I absolutely never click the bloody link from an email and enter my credentials into it! That's internet safety 101.
Anyone can be fallible in the right circumstances. Maybe you're tired, unwell, in a rush, or otherwise distressed and not thinking straight. Maybe a malicious actor accidentally crafts a scam that coincides with specific details from your life. Perhaps the scam centres around some system you have less expertise in.
The point of not assigning blame isn't to absolve people of the need to have their guard up but to recognise that everyone is capable of mistakes.
This happened even if you had pinned dependencies and were on top of security updates.
We need some deeper changes in the ecosystem.
https://github.com/nrwl/nx/security/advisories/GHSA-cxm3-wv7...
For this reason, I avoid anything to do with NPM except for the TypeScript compiler, and I'm looking forward to its rewrite in Go, at which point I can remove even that.
As a comparison, in Go you have minimum version selection, and the toolchain takes great pains to never execute anything you download, even during the compilation stage.
An NPM package will often have different source than the GitHub repo. How does anyone even trust the system?
The obscurity of languages other than JavaScript will only work as a security measure for so long.
None of it will help you when you're executing the binaries you built, regardless of which language they were written in.
I have seen so many takes lamenting how this kind of supply chain attack is such a difficult problem to fix.
No, it really isn't. It's an ecosystem and cultural problem: npm encourages huge dependency trees that make it impractical to review dependency updates, so developers just don't.
Whenever you download an open-source program and you don't have to compile it first, you're at risk of running code that is not necessarily what's in the publicly-available source code.
This can even apply to source code itself when distributed through two different channels, as we saw in the xz backdoor attempt. (The release tarball contained different code to the repository.)
I don't know anything about the npm ecosystem. What's the benefit of importing these libraries compared to including the code in the project directly?
I am a consumer of apps using npm, not a developer, and I simply don’t like the auto updates and seeing a zillion things updated. I use uv and Python a lot, and I get a similar uneasy feeling there also, but (perhaps incorrectly) I feel more in control.
Seriously, this is one of my key survival mechanisms. By the time I became system administrator for a small services company, I had learned to let other people beta test things. We ran Microsoft Office 2000 for 12 years, and saved soooo many upgrade headaches. We had a decade without the need to retrain.
That, and like others have said... never clicking links in emails.
At $PAST_DAYJOB we adopted Docker "only" around 2016, and importantly, we used it almost identically to how we used to deploy "plain" uWSGI or Apache apps: a bunch of VMs, run some Ansible roles, pull the code (now the image), restart, done.
The time to move to k8s is when you have a k8s-sized problem. [Looks at Github: 760 releases, 3866 contributors.] Yeah, not now.
If you're worried about vulnerabilities in older software these days, Windows has built-in security features that can help with that, from the sandbox to controlled folder access (intended for ransomware protection, I believe; I use it to prevent my media server from modifying tags).
"Hey, is it still broken? No? Great!"
I find it insane that someone would get access to a package like this, then just push a shitty crypto stealer.
You're a criminal with a one-in-a-million opportunity. Wouldn't you invest an extra week pushing a more fully fledged exploit?
You can exfiltrate API keys, or add your SSH public key to the server and exfiltrate the server's IP address so you can snoop around in there manually. If you're on a dev's machine, maybe grab the browser profiles and the session tokens for common shopping websites. My personal desktop has all my cards saved on Amazon. On my work laptop, depending on the period of my life, you could have had access to stuff you wouldn't believe either.
You don't even need to do anything with those; there are forums where you can sell that stuff.
Surely there's an explanation, or is it that all the good cybercriminals have stable, high-paying jobs in tech, and this is what's left for us?
Because of the way this was pulled off, it was going to be found out right away. It wasn't a subtle insertion; it was a complete account takeover. The attacker had only hours before discovery, so the logical thing to do was a hit and run. They asked what the most money is that can be extracted in just a few hours in an automated fashion (no time to investigate targets manually one at a time), and crypto is the obvious answer.
Unless the back doors were so good they weren't going to be discovered even though half the world would be dissecting the attack code, there was no point in even trying.
Is that so? From the email, it looks like they MITM'd the 2FA setup process, so they would have qix's 2FA secret. They didn't have to immediately take over qix's account and lock him out; they should have had all the time they needed to come up with a more sophisticated payload.
A decade ago my root/123456 ssh password got pwned in 3-4 days. (I was gonna change to certificate!)
Hetzner alerted me saying that I filled my entire 1TB/mo download quota.
Apparently, the attacker (automation?) took over and used it to scrape alibaba, or did something with their cloud on port 443. It took a few hours to eat up every last byte. It felt like this was part of a huge operation. They also left a non-functional crypto miner in there that I simply couldn't remove.
So while they could have cryptolocked it, they just used it for something insidious and left it alone.
Putting it another way: if I'm a random small-time burglar who happens to find himself in Walter White's vault, I'm stuffing as much cash as I can fit into my bag and ignoring the barrel of methylamine.
The damage from this hack could have been far worse if it was stealing real money people rely on to feed their kids.
Even if you steal other stuff, you're going to need to turn it all into cryptocurrency anyway, and how much is an AWS key really going to bring in?
There are criminals that focus on extracting passwords and password manager databases as well, though they often also end up going after cryptocurrency websites.
There are probably criminals out there biding their time, waiting for the perfect moment to strike, silently infiltrating companies through carefully picked dependencies, but those don't get caught as easily as the ones draining cryptocurrency wallets.
A couple hundred grand is not what these attackers are after.
step 1: live in a place where the cops do not police this type of activity
step 2: $$$$
> one-in-a-million opportunity
OTOH, this modus operandi is completely inconsistent with the way they published the injected code: by taking over a developer's account. This was going to be noticed quickly.
If the payload had been injected in a more subtle way, it might have taken a long time to figure out. Especially with all the Levenshtein logic that might convince a victim they'd somehow screwed up.
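For anyone who hasn't seen writeups of the payload: it reportedly used edit distance to swap in the attacker-controlled address that most resembled the victim's, so a victim eyeballing the first and last characters might not notice. A minimal sketch of that idea in Python (function names are mine, not from the actual payload):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def closest_lookalike(victim_addr: str, attacker_addrs: list[str]) -> str:
    """Pick the attacker address that most resembles the victim's address."""
    return min(attacker_addrs, key=lambda a: levenshtein(victim_addr, a))
```

The point is how little code it takes: the "sophistication" is two small functions, which is why the hit-and-run framing above is so plausible.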
In the case of North Korea, it's really crazy because hackers over there can do this legally in their own country, with the support of their government!
And most popular npm developers are broke.
https://xkcd.com/538/
But (1) how do you do that with hundreds or thousands of SSH/API keys and (2) how do you actually make money from it?
So you get a list of SSH or specific API keys and then write a crawler that can hopefully gather more secrets from them, like credit card details (how would that work btw?) and then what, you google "how to sell credentials" and register on some forum to broker a deal like they do in movies?
Sure sounds a hell of a lot more complicated and precarious than swapping out crypto addresses in flight.
The plot of Office Space might offer clues.
Also isn't it crime 101 that greedy criminals are the ones who are more likely to get caught?
What gets me is everyone acknowledges this, yet HN is full of comments ripping on IT teams for the restrictions & EDR put in place on dev laptops.
We on the ops side have known about these risks for years, and that knowledge is what drives organizational security policies and endpoint configuration.
Security is hard, and it is very inconvenient, but it's increasingly necessary.
To wit: I have an open ticket right now from an automated code review tool that flagged a potential vulnerability. I and two other seniors have confirmed that it is a false alarm so I asked for permission to ignore it by clicking the ignore button in a separate security ticket. They asked for more details to be added to the ticket, except I don’t have permissions to view the ticket. I need to submit another ticket to get permission to view the original ticket to confirm that no less than three senior developers have validated this as a false alarm, which is information that is already on another ticket. This non-issue has been going on for months at this point. The ops person who has asked me to provide more info won’t accept a written explanation via Teams, it has to be added to the ticket.
Stakeholders will quickly treat your entire security system like a waste of time and resources when they can plainly see that many parts of it are a waste of time and resources.
The objection isn’t against security. It is against security theater.
All their EDR crud runs on Windows, but as a dev I'm allowed to run WSL, and the tools do not reach inside WSL, so if that gets compromised they would be none the wiser.
There is some instrumentation for linux servers and cloud machines, but that too is full of blind spots.
And as a sibling comment says, a lot of the policies are executed without anyone being able to explain their purpose, being able to grant "functionally equivalent security" exceptions or them even making sense in certain contexts. It feels like dealing with mindless automatons, even though humans are involved. For example a thing that happened a while ago: We were using scrypt as KDF, but their scanning flagged it as unknown password encryption and insisted that we should use SHA2 as a modern, secure hashing function. Weeks of long email threads, escalation and several managers suggesting "just change it to satisfy them" followed. That's a clear example of mindless rule-following making a system less secure.
Blocking remote desktop forwarding of security keys also is a fun one.
For anything else you need a fiat market, which is hard to deal with remotely.
Also, you underestimate how trivial this 'one-in-a-million opportunity' is; it's definitely not one-in-a-million! Almost anybody with basic coding ability and a few thousand dollars could pull off this hack. There are thousands of libraries which are essentially worthless, with millions of downloads, where the author who maintains them is basically broke and barely uses their npm account anymore. Anybody could just buy those npm accounts under false pretenses for a couple of thousand and then do whatever they want with tens of thousands (or even hundreds of thousands) of compromised servers. The library author is legally within their rights to sell their digital assets, and it's not their business what the acquirer does with them.
Consumer financial fraud is quite big and relatively harmless. Industrial espionage, on the other hand, can potentially put you in the crosshairs of powerful and/or rogue elements, and so only the big actors get involved, but in a targeted way, preferring not to leave much if any trace of compromise.
The attacker had access to the user's npm repository only.
Your ideas are potentially lucrative over time, but first they create more work and risk for the attacker.
nobody cares about your trade secrets, or some nation's nuclear program, just take the crypto
If I compare a typical Rust project with a similar JavaScript one, the JavaScript project often has magnitudes more direct dependencies (a wide supply chain?). The Rust tool will have three or four; the JavaScript one over ten, sometimes ten alone just to help with building the TypeScript in dev. This is worsened by the JavaScript dependencies' own deps (and theirs, and theirs, all the way down to is_array or left_pad), easily getting into the hundreds. In Rust, that graph will list maybe ten more, or, with some complex libraries, a total of several tens.
This attitude difference is also clear in the Python community, where the knee-jerk reaction is to add an import rather than think it through, maybe copy-paste a file, and in any case stay very conservative. Do we really need colors in the terminal output? We do? Can we not just create a file with some constants that hold the four ANSI escape codes instead?
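To make that concrete, the entire "terminal colors" dependency can be a handful of constants, roughly like this (a minimal sketch; the names are my own):

```python
# colors.py -- the whole "dependency": four ANSI color codes plus a reset.
RED = "\033[31m"
GREEN = "\033[32m"
YELLOW = "\033[33m"
BLUE = "\033[34m"
RESET = "\033[0m"

def colorize(text: str, color: str) -> str:
    """Wrap text in an ANSI color code and reset the terminal afterwards."""
    return f"{color}{text}{RESET}"

print(colorize("build passed", GREEN))
```

Ten lines with zero transitive dependencies, versus pulling in a package that itself pulls in a tree you'll never review.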
I'm trying to argue that there's also an important cultural problem with supply chain attacks to be considered.
I object. You can get a full-blown web app rolling with Django alone. Here's its list of external dependencies, including transitive ones: asgiref, sqlparse, tzdata. (I guess you can also count jQuery, if you're using the _builtin_ admin interface.)
The standard library is slowly swallowing the most important libraries and tools in the ecosystem, such as json or venv. What was once a giant yield-hack to get green threads / async is now part of the language. The language itself is conservative in what new features it accepts; 20yro Python code still reads like Python.
Sure, I've worked on a Django codebase with 130 transitive dependencies. But it's 7yro and powers an entire business. A "hello world" app in Express has 150, for Vue it's 550.
This has more to do with the popularity of a language than anything else, I think. Though the fact that Python and JS are used as "entry level" languages probably encourages some of these "lazy" libraries cough cough left-pad cough cough.
But in the end, we should all rely on fewer dependencies. It's certainly the philosophy I'm trying to follow with https://mastrojs.github.io – see e.g. https://jsr.io/@mastrojs/mastro/dependencies
That looks like a phishing attempt from someone using a random EC2 instance or something, but apparently it's legit. I think. Even the "heads-up" email they sent beforehand looked like phishing, so I was waiting for the actual invoice to see if they really started using that address, but even now I'm not opening these attached PDFs.
These companies tell customers to be suspicious of phishing attempts, and then they pull these stunts.
Yep. At every BigCo I've worked at, nearly all of the emails from Corporate have been indistinguishable from phishing. Sometimes, they're actual spam!
Do the executives and directors responsible for sending these messages care? No. They never do, and get super defensive and self-righteous when you show them exactly how their precious emails tick every "This message is phishing!" box in the mandatory annual phishing-detection-and-resistance training.
I agree that especially larger players should be proactive and register all similar-sounding TLDs to mitigate such phishing attacks, but they can't be outright prevented this way.
You can't protect against people clicking links in emails in this way. You might say `npmjs-help.ph` is a phishy domain, but npmjs.help is a phishy domain and people clicked it anyway.
> It sets a deadline a few days in the future. This creates a sense of urgency, and when you combine urgency with being rushed by life, you are much more likely to fall for the phishing link.
Any time I feel like I'm being rushed, I check deeper. It would help if everyone's official communications only came from the most well known domain (or subdomain).
Heuristics like this one should be performed automatically by the email client.
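One way an email client could automate the lookalike-domain part of this: flag any link whose host resembles, but doesn't exactly match, a domain the user actually has an account with. A toy sketch (the domain list and similarity threshold are made up for illustration):

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Hypothetical: domains the user is known to have accounts with.
KNOWN_DOMAINS = {"npmjs.com", "github.com"}

def looks_like_phish(url: str, threshold: float = 0.6) -> bool:
    """Flag links whose host resembles, but doesn't match, a known domain."""
    host = urlparse(url).hostname or ""
    if host in KNOWN_DOMAINS or any(host.endswith("." + d) for d in KNOWN_DOMAINS):
        return False  # exact domain or legitimate subdomain
    return any(SequenceMatcher(None, host, d).ratio() > threshold
               for d in KNOWN_DOMAINS)

print(looks_like_phish("https://npmjs.help/login"))     # flagged
print(looks_like_phish("https://www.npmjs.com/login"))  # not flagged
```

A real implementation would need a per-user domain list and better string metrics, but even this crude check catches the npmjs.help lookalike while passing the legitimate subdomain.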
Added: story dedicated to this topic more or less https://news.ycombinator.com/item?id=45179889
I like to think I wouldn't. I don't put credentials into links from emails that I didn't trigger right then (e.g. password reset emails). That's a security skill everyone should be practicing in 2025.
1. Like you, I never put credentials into links from emails that I didn’t trigger/wasn’t expecting. This is a generally-sensible practise.
2. Updating 2FA credentials is nonsense. I don’t expect everyone to know this, this is the weakest of the three.
3. If my credentials don’t autofill due to origin mismatch, I am not filling it manually. Ever. I would instead, if I thought it genuine, go to their actual site and log in there, and then see nothing about what the phish claimed. I’ve heard people talking about companies using multiple origins for their login forms and how having to deal with that undermines this aspect, but for myself I don’t believe I’ve ever seen that, not even once. It’s definitely not common, and origin-locked second factors should make that practice disappear altogether.
Now these three are not of equal strength. The second requires specific knowledge, and a phish could conceivably use something similar that isn’t such nonsense anyway. The first is a best practice that seems to require some discipline, so although everyone should do it, it is unfortunately not the strongest. But the third? When you’re using a password manager with autofill, that one should be absolutely robust. It protects you! You have to go out of your way to get phished!