E.g. malware might be executed when you test code which uses the library, or when you run a dev server, or on a deployed web site.
The entire stack is built around trusting the code and letting it do whatever it wants. That's the problem.
Detecting outbound network connections during an npm install would be quite cheap to implement in 2025. I think it comes down to tenets and incentives: if security were placed as the first priority, as it should be for any computing service and in particular for supply-chain infrastructure like package management, this would be built in.
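To make "cheap to implement" concrete, here's a rough sketch of the kind of check I mean. This is not an existing npm feature, just a hypothetical Linux-only wrapper that assumes strace is installed: fetch packages with --ignore-scripts first (so the registry traffic is out of the way), then run the script phase under strace and flag any connect() it sees.

```ts
// Hypothetical sketch, not an npm feature. Linux only; assumes strace is installed.
import { spawnSync, spawn } from "node:child_process";

// Phase 1: fetch packages without running any lifecycle scripts,
// so legitimate registry traffic happens here, not in the traced phase.
spawnSync("npm", ["install", "--ignore-scripts"], { stdio: "inherit" });

// Phase 2: run the install scripts (npm rebuild) under strace and
// watch for connect() syscalls, which have no business happening here.
const tracer = spawn(
  "strace",
  ["-f", "-e", "trace=connect", "npm", "rebuild"],
  { stdio: ["inherit", "inherit", "pipe"] }
);

const flagged: string[] = [];
tracer.stderr?.on("data", (chunk: Buffer) => {
  for (const line of chunk.toString().split("\n")) {
    // strace prints e.g.: connect(5, {sa_family=AF_INET, sin_port=htons(443), ...}) = 0
    if (line.includes("connect(") && /AF_INET6?/.test(line)) {
      flagged.push(line.trim());
    }
  }
});

tracer.on("exit", (code) => {
  if (flagged.length > 0) {
    console.error(`\n!! ${flagged.length} outbound connection attempt(s) during install scripts:`);
    for (const line of flagged.slice(0, 10)) console.error("   " + line);
  }
  process.exit(code ?? 1);
});
```

A real implementation would obviously want allowlisting and cross-platform support, but the point stands: the raw signal is easy to get at.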
One thing that comes to mind that would make it a months-long debate is the potential breakage of many packages. In that case, as a first step, just print an eye-catching summary post-install, with a gradual push toward total restriction via something like a strict mode; we've done this before.
Which reminds me of another long-standing issue with Node ecosystem tooling: information overload. It's easy to bombard devs with a thesis worth of characters and then blame them for eventually getting fatigued and not reading the output. It takes effort to summarize what's most important, with layered expansion of detail; show some.
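The shape I have in mind for that layered output is something like this (names, flags, and the example finding are entirely made up, just to show "one loud line by default, detail on demand"):

```ts
// Hypothetical sketch of layered reporting: a single hard-to-miss summary line
// by default, full per-package detail only when explicitly requested.
interface Finding {
  pkg: string;
  detail: string;
}

function report(findings: Finding[], verbose = false): void {
  if (findings.length === 0) return;
  // Layer 1: one summary line that is hard to scroll past.
  console.error(
    `\n!! ${findings.length} package(s) tried to reach the network during install ` +
      `(re-run with --verbose for details)`
  );
  if (!verbose) return;
  // Layer 2: per-package detail, only on demand.
  for (const f of findings) {
    console.error(`  ${f.pkg}: ${f.detail}`);
  }
}

report(
  [{ pkg: "left-pad-evil@1.0.0", detail: "connect() to 203.0.113.7:443 in postinstall" }],
  process.argv.includes("--verbose")
);
```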
Although from what I've read, 8GB of VRAM doesn't seem future-proof even for the near term, so I've always been eyeing 5070 Ti+ laptops. I wonder if there's any technical blocker that prevents offering a 5070 Ti or the AMD equivalent.
I think lots of Windows antivirus products come with features like this? Perhaps with the vast crystallized knowledge we have nowadays, we can afford to create an OSS system-level package that offers some level of protection.
I might actually do it; any downside?
What prevents anyone else? robots.txt is a request, not an access policy.
Does information no longer want to be free? Maybe the internet, just like social media, was just a social experiment in the end, albeit a successful one. Thanks, GenAI.
If I were Visa/Mastercard leadership, I think at least part of me would be happy to see this blow up, long term. Hey, it's not me pushing back now, it's prigs versus the people, with a much higher chance of legislative change coming out of it. Which IMO is just in this case: common carrier status, as it should have, open to judicially requested blockages based on laws drafted by folks elected by the population.
We have a bunch of RFCs specifying the architecture, with three branches, to deal with these problems in the most agreeable way for most people, as good as we could come up with as a species. Rather than drafting new RFCs without understanding why those three branches needed to exist, how about patching them? A complete rewrite works too, but that should incorporate all the crystallized knowledge in the legacy version, which we all know is hard.
The intersection of the two seems to be quite hard to find.
At the stage we're at, the AIs we're building are just really useful input/output devices that respond to a stimulus (e.g., a "prompt"). No stimulus, no output.
This isn't a nuclear weapon. We're not going to accidentally create Skynet. The only thing it's going to go nuclear on is the market for jobs that are going to get automated in an economy that may not be ready for it.
If anything, the "danger" here is that AGI is going to be a printing press. A cotton gin. A horseless carriage -- all at the same time and then some, into a world that may not be ready for it economically.
Progress of technology should not be arbitrarily held back to protect automatable jobs, though. We need to adapt.
It was true before we allowed them to access external systems, disregarding a certain rule whose origin I forget.
The more general problem is a mix of the tragedy of the commons, the fact that our understanding improves with every passing day yet we still don't know exactly why LLMs perform as well as they do (emergently, rather than by being engineered that way), and future progress.
Do you think you can find a way around access boundaries, masquerading your Create/Update requests as Reads in the log system monitoring them, when you have superintelligence?