flysand7 · 6 hours ago
Although this article tries to give arguments for why package managers are "evil", I found the argumentation pretty weak/non-descriptive. It's good if you have experiences that confirm a specific point of view, but those experiences need to be explained in more detail, because people reading your article may have had different experiences and would therefore find it hard to agree with your points - just like me.

To give a concrete example, you said that JavaScript does not have a definition of a "package" in the language itself. But what does that really mean, and why should it lead to a proliferation of package managers? For me, as a person who has worked with JavaScript just a little bit, I know package.json exists, and most of the package managers I've worked with agree on what the contents of this file mean. If we limit ourselves to just npm, yarn and probably bun, I don't see how that causes or contributes to the dependency hell problem (sure, the problem exists, but how do package managers cause it?).

You said that Go mitigates the issue of dependency hell to some degree, but this is an interesting thought - give it more exploration! Why should this problem be less severe in something like Go than it is in JavaScript?

I may not remember the details of what you said in the article and I would like to check, but currently I can't access the site because it times out for me.

izzylan · 9 hours ago
I don't see the value in making it even harder to build software. I want to make things. Downloading a dependency manually and then cursing at the compiler because "it's right there! why won't it load!!" is just gonna make me want to build software less.

Anyone I want to work with on a project is going to hit the same frustration and want to work on the project less. Even more so when it turns out they downloaded version 2.7.3-2 but the version I use is 2.7.3-1.

forrestthewoods · 6 hours ago
This is an argument for a good build system, not a package manager.
BobbyTables2 · 4 hours ago
These aren’t always separate.

Some distros might try to support multiple versions of a library. That could require installing them to different prefixes instead of the default. Thus, the build system will have to comprehend that.

dismalaf · 8 hours ago
> Downloading a dependency manually and then cursing at the compiler because "it's right there! why won't it load!!"

Odin's compiler knows what a package is and will compile it into your program automatically.

lifthrasiir · 2 hours ago
Isn't that a (built-in) package manager if it works for general packages? Or does it work only for selected dependencies?
smw · 18 hours ago
"When using Go for example, you don’t need any third-party libraries to make a web server, Go has it all there and you are done."

Fine, now what if you need to connect to a database, or parse a PDF, or talk to a grpc backend. What a hilariously short-sighted example.
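
(To be fair, the narrow claim does hold: a basic web server needs nothing outside Go's standard library. A minimal sketch:)

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        // Both the router and the HTTP server ship in net/http; no third-party imports.
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello from the standard library")
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }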

To me, this whole article just screams inexperience.

1GZ0 · 18 hours ago
The author isn't arguing against using third-party dependencies at all. He's arguing for developers to be more conscious of the dependencies they use, by manually vetting and handling them. That screams "I've been down the package manager route and paid the price", not inexperience.
pipes · 18 hours ago
But he titled the post "package managers are evil"
SideburnsOfDoom · 18 hours ago
> He's arguing for developers to be more conscious of the dependencies they use

"be careful all the time" doesn't scale. Half of all developers have below-average diligence, and that's a low bar. No-one is always vigilant, don't think that you're immune to human error.

No, you need tooling and automation to assist, and it needs to be supported on the package manager side. Managing a site where many files are uploaded and then downloaded many times is not a trivial undertaking. It comes with oversight responsibilities. If it's video, you have to check for CSAM. If it's executable code, you have to check for malware.

Package managers are not evil, but they are a tempting target and need to be secured. This can't just be an individual consumer responsibility.
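
For a concrete example of what tooling-side support can look like (from the Go ecosystem, not NuGet): the go toolchain records a cryptographic checksum for every downloaded module in a go.sum file and verifies it against a public checksum database, so a tampered package fails the build instead of relying on individual vigilance. A sketch of the format - the module and version are real conventions, but the hashes here are placeholders:

    github.com/lib/pq v1.10.9 h1:<base64 hash of the module contents>
    github.com/lib/pq v1.10.9/go.mod h1:<base64 hash of its go.mod file>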

I can't speak for other ecosystems, but some NuGet measures are here:

https://devblogs.microsoft.com/dotnet/building-a-safer-futur...

https://learn.microsoft.com/en-us/nuget/concepts/security-be...

I believe that there have been (a few) successful compromises of packages in NuGet, and that these have been mitigated. I don't know how intense the arms race is now.

kunley · 18 hours ago
Inexperience of an author who has been developing a quite successful programming language for like 10 years? Quite a bold statement.

Actually his perspective is quite reasonable. Go is at the other end of the spectrum from languages that encourage "left-pad"-type libraries, and this is a good thing.

Ygg2 · 18 hours ago
I've seen plenty of intelligent people acting pretty stupid.

As my psychology professor used to say: "Smart is how efficiently you use your intelligence. Or don't."

So someone with a pretty low IQ can be smart - Forrest Gump. And someone with a high IQ can occasionally be dumb - a professor so attuned to his research topic that everything else suffers.

tialaramex · 18 hours ago
Is it "quite successful"? How would I distinguish such a "quite successful" language from say Hare or V or are these all "successful" in your mind?
coldtea · 10 hours ago
To me, this whole comment just screams inability to steelman.
rob74 · 18 hours ago
Sure... and, to prove your point, Go has a package manager too (although it's a relatively new addition). But Go still follows a "batteries included" approach, where "standard" stuff (yes, even database handling) is covered by the standard library. That still leaves lots of other things for which you need third-party packages, but those will typically be far fewer than in other languages.
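
A typical go.mod shows the effect: the standard library needs no entries at all, so a small service's manifest can be nearly empty (the module path and version below are illustrative, not from a real project):

    module example.com/myservice

    go 1.22

    // database/sql is stdlib; only the driver is a third-party dependency.
    require github.com/lib/pq v1.10.9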
torginus · 18 hours ago
I think the argument presented is that whatever a Go package does, it does well.

Btw the JS ecosystem also has quite a few good packages (and a ton of terrible ones, including some which everyone seems to consider the gold standard).

morsecodist · 18 hours ago
In general, I think the dependency hate is overblown. People hear about problems with dependencies because dependencies are usually open source code used by a lot of people, so the problems are public and relevant. You don't hear as much about problems in the random code of one particular company unless it ends up in a high-profile leak. For example, something like the Heartbleed bug was a huge deal and got a lot of press, but imagine how much trouble we would be in if everyone was implementing their own SSL. Programmers often don't follow best practices when they do things on their own. That is how you end up with things like SQL injection attacks in 2025.
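
(For the unfamiliar, the hand-rolled failure mode looks like this - a minimal Go sketch, assuming a *sql.DB handle and a made-up users table; placeholder syntax varies by driver:)

    package example

    import "database/sql"

    // Vulnerable: user input is spliced into the SQL text and parsed as SQL.
    func findUserBad(db *sql.DB, name string) (*sql.Rows, error) {
        return db.Query("SELECT id FROM users WHERE name = '" + name + "'")
    }

    // Safe: the driver binds the parameter; input is never treated as SQL.
    func findUserGood(db *sql.DB, name string) (*sql.Rows, error) {
        return db.Query("SELECT id FROM users WHERE name = $1", name)
    }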

Dependencies do suck, but that's because managing a lot of complicated code sucks. You need some way to find issues over time and keep things up to date. Dependencies and package managers at least offer us a path to deal with problems. If you are managing your own dependencies, which I imagine would mean vendoring, then you aren't going to keep those dependencies up to date. You aren't going to find out about exploits in them and apply the fixes.

pessimizer · 10 hours ago
> imagine how much trouble we would be in if everyone was implementing their own SSL.

No, the alternative is to imagine how much trouble we would be in if every project pulled in 5 different SSL libraries. Having one that everybody uses and that is already installed on everyone's system is how you avoid dependency hell. Even better if it's in the stdlib.
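
(Go is the usual illustration of that last point - TLS lives in the standard library, so there is no SSL dependency to pull in at all. A minimal sketch:)

    package main

    import (
        "crypto/tls" // TLS ships in the stdlib; no OpenSSL-style dependency
        "fmt"
        "log"
    )

    func main() {
        // A nil config means sane defaults, including certificate verification.
        conn, err := tls.Dial("tcp", "example.com:443", nil)
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        fmt.Printf("negotiated TLS version: %#x\n", conn.ConnectionState().Version)
    }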

NoboruWataya · 18 hours ago
I see this a lot with Rust, where I will depend on one or two external crates for a simple application and then I am shocked to see dozens of dependencies being pulled in when I go to build. I actually think Cargo's support for feature gates and conditional compilation could in theory be a strong mitigation against this, as crates can avoid pulling in dependencies unless you actually need a feature that relies on them, but in practice it doesn't seem to work that way, given how often I see these complaints about Rust.

I sympathise with the arguments but IMO laziness will always win out. If Rust didn't have Cargo to automate dependency hell, someone would create a third party script to fill the gap.

account42 · 17 hours ago
> If Rust didn't have Cargo to automate dependency hell, someone would create a third party script to fill the gap.

Possibly, but not guaranteed. Some other languages without a built-in package manager haven't had an external one manage to take over the ecosystem (most (in)famously C and C++), while others have.

alexvitkov · 15 hours ago
Most language users will follow the "spirit" of the language - e.g. Bill is against package managers, people who use his language mostly agree with his ideas, and there's not a huge standard Odin package manager.

I rather appreciate that C and C++ don't have a default package manager that took over - yes, integrating libraries is a bit more difficult, but we also have a lot of small, self-contained libraries that just "do the thing" without pulling in a library that does colored text for logging, which pulls in tokio, which pulls in mio, which pulls in wasi, which pulls in serde, which is insane.

Macha · 8 hours ago
The package manager for C/C++ is apt, or rpm, or whatever package manager your system uses. These package managers were designed for the world of C/C++ software, so it's less surprising that these languages haven't felt as much of a push towards language package managers.


cmrdporcupine · 18 hours ago
It is an organizational problem, not a technical one.

When I worked at Google, every single dependency was strictly vendored (and not in the mostly useless way that Cargo vendors things). There was generally only one version of a dep in the monorepo, and if you wanted something, you generally got to own maintaining it, and you had to make sure it worked for every "customer" - the giant CI system made sure that you knew if an upgrade would break things. And you reached out to stakeholders to manage the process. Giant trains of dependencies were not a thing. You can do that when you have a seemingly infinite budget.

But technology can indeed make it worse. I love Rust, but I'm not a fan of the loose approach in Cargo and especially Crates.io, which seems to have pulled inspiration from NPM - which I think is more of a negative example than a positive one. It's way too easy to make a mess. Crates.io is largely unmoderated, and its namespace is full of abandoned or lightly maintained projects.

It's quite easy to get away with a maze of giant transitive deps with Cargo because Rust links statically by default, so you don't usually end up in DLL hell. But just running cargo tree on the average large Rust project is a little depressing - seeing how many separate versions of random number generators, SHA256, MD5, etc. libraries you end up with in a single linkage. It may not be the case that every single one is contributing to your binary size... but it's also kind of hard to know.

Understanding the blast radius of potential issues that come from unmoderated third-party deps is, I think, something that many engineers have to learn the hard way: when they deal with a security vulnerability, or a fundamental incompatibility, or build-time and binary-size explosions.

I wish there was a far more mature approach to this in our industry. The trend seems to be going in the opposite direction.

zokier · 9 hours ago
In many ways, traditional Linux distros operate on a similar model to how I imagine Google's monorepo works. Both aim for a "globally consistent" dependency situation where you have one version of each library and you patch things up from upstream when they don't fit.

I feel we need more of these kinds of distros, so you don't need to pull dependencies directly from upstream and deal with the integration effort yourself. What if we had a Rust distro following this same model, where there is only one version of each dep, some reasonable curation, and nice clear release cycles? I feel that could be a real boon for the industry.

jitl · 18 hours ago
Rust's big issue here is the anemic standard library. I think the strategy makes some amount of sense overall; since there's so much crazy alchemy in Rust, like depending on nightly, no_std, etc., including stuff in std has more downside than in a more stable language like Go.

But it's annoying to have to deal with 3 different time libraries and 3 different error-creation libraries and somehow 2 regex libraries in my dependency tree. Plus many packages are named stuff like "anyhow" or "nom" or other nonsense words, where you need to google for a while to figure out what a package is supposed to do. That makes auditing more difficult than if the library were named structured-errors or parser-combinator.

I don't like the Go programming language, but I do like Go tooling & the Go ecosystem. I wish there was a Rust with Go principles. Swift is kinda in the right ballpark: packages are typically named stuff that makes sense, and Swift is closer to Rust perf and Rust safety than to Go perf and Go safety. But Swift is a tiny ecosystem outside of stuff that depends on the Apple proprietary universe, and the actual APIs in packages can be very magical/clever. ¯\_(ツ)_/¯

bigstrat2003 · 8 hours ago
The very sparse std is one of the few genuine mistakes I think Rust has made. I know the arguments for it, but I don't find them persuasive. A batteries-included standard library, in my view, is just plain better, and every modern language should have one.
rich_sasha · 16 hours ago
I agree, though I also note that Python has an extensive standard library and isn't much better in terms of package sprawl.
hliyan · 18 hours ago
I don't know what the solution to this problem is, but I do remember a time (around 20 years ago) when this wasn't a big problem. I was working on a fairly large (each module between 50k and 100k LOC) C++ system. The process for using libraries:

1) Have problem that feels too complicated to hand-code.

2) Go on Internet/forums, find a library. The library is usually a small, flat collection of atomic functions.

3) A senior engineer vets the library and approves it for use.

4) Download the stable version: header file, and the lib file for our platform (on rare occasions, build it from source)

5) Place the .h file in the header path, and the lib file in the lib path; update the Makefile.

6) #include the header and call functions.

7) Update deployment scripts (bash script) to scp the lib file to target environment, or in some cases, use static linking.

8) Subscribe to a mailing list and very occasionally receive news of a breaking change that requires a rebuild.

This may sound like a lot of work, but somehow, it was a lot less stressful than dealing with NPM and node_modules today.

saulpw · 8 hours ago
I think the main thing that makes this workable is "The library is usually a small, flat collection of atomic functions."

I find that it's the transitive dependencies that are the hell - you as a developer can reasonably vet a single layer of 10-30 standalone libraries. But if those libraries depend on other libraries, and so on, it balloons into hundreds or thousands of dependencies, and then you're sunk.

For what it's worth, I don't think much of this is essential complexity. Often a library is complicated because it supports 10 different ways of using it, but when you use the library, you're only using 1 of those ways. If everyone is only using 10% of thousands of transitive dependencies, the overall effect is incredibly complicated, but could have been achieved with 10-100% more short-term effort. Sure, "it took twice as long to develop but at least we don't have 10x the dependencies" is a hard sell to management (and often to ourselves), but that's because we usually choose to ignore the costs of depending on software we don't understand and don't control. We think that we're cleverly avoiding having to maintain and secure those libraries we outsourced, but most open-source developers aren't doing a great job of that anyway.

Often it really is easier to develop something from scratch, rather than learn and integrate a library. Not always though, of course.

1718627440 · an hour ago
In C and C++ you don't need the transitive dependencies for compilation; you only need the headers of the direct dependencies. As for linking, they are only needed when linking dynamically, which was much less prevalent 20 years ago.
sombragris · 18 hours ago
> How do I manage my code without a “package manager”? [...] Through manual dependency management.

Slackware Linux does precisely that.

I'm a Slackware user. Slackware does have a package manager that can install or remove packages, and even a frontend that can use repositories (slackpkg), but dependency resolution is manual. Sure, there are third-party managers that can add dependency resolution, but they do not come with the distro by default.

This is a very personal opinion, but manual dependency management is a feature. Back in the day, I remember installing Mandrake Linux 9.2 and activating the (then new-ish) framebuffer console. The distro folks had no better idea than to force a background "9.2" image on framebuffer consoles, which I hated. I finally found the package responsible for that. Removing it with urpmi, however, meant removing all the graphical desktop components (including X11) because that stupid package was listed as a dependency of everything graphical.

That prompted me to seek alternatives to Mandrake and ended up using Slackware. Its simplicity had the added bonus of offering manual dependency resolution.

seba_dos1 · 17 hours ago
Sounds like "alias dpkg='dpkg --force-depends'"?
sombragris · 16 hours ago
Perhaps; I'm not really knowledgeable on the ways of Debian.
torginus · 18 hours ago
This reads much more like a critique of traditional open-source development than of package managers themselves.

The author asserts that most open-source projects don't hit the quality standard where their libraries can just be included and will do what they say.

I assert that this is because there's no serious product effort behind most libraries (as in, no dedicated QA/test/release cycle) and no large commercial products use them (or if they do, they either use them in a very limited fashion or just fork them).

Hobbyists do QA as long as it interests them or fits their use case, but only the big vendors do bulletproof releases (which in the desktop realm seems to mean only MS/Apple).

This might have to do with the domain the author chose - desktop development has unfortunately had the life sucked out of it, with every dev being a fullstack/cloud/ML/mobile dev instead; its mindshare and the resources going toward it have plummeted.

(I also have a sneaking suspicion the author might've encountered those bugs on desktop Linux, which, despite all the cheerleading (and policing of negative opinions), is as much of a buggy mess as ever.

In my experience, you're quite likely to run into a bug that nobody has ever written about on the internet.)

gingerBill · 17 hours ago
This critique applies even to closed-source development that uses open-source code bases.

I have an article with my unstructured thoughts on the problems of OSS/FOSS, which goes into more depth on this: https://www.gingerbill.org/article/2025/04/22/unstructured-t...

acoustics · 4 hours ago
This is why I'm so glad that I work in a closed monorepo now. There is no package management, only build tooling.

I find myself nodding along to many of the technical and organizational arguments. But I get lost in the licensing discussion.

If it is a cultural problem that people insist on giving things away for free (and receiving them for free), then viral licenses can be very helpful, not fundamentally pernicious.

Outside of the megaprojects, my mental model for GPL is similar to proprietary enterprise software with free individual licenses. The developer gets the benefits of open projects: eyeballs, contributors, adoption, reputational/professional benefits, doing a good deed (if that motivates them) while avoiding permissively giving everything away. The idea that it's problematic that you can't build a business model on their software is akin to the "forced charity" mindset—"why did you make something that I can't use for free?"

If you see a GPL'd bit of code that you really want to use in your business, email the developers with an offer of $X,000 for a perpetual commercial license and a $Y,000/yr support contract. Most are not so ideologically pure to refuse. It's a win-win-win: your business gets the software, the developers don't feel exploited, noncommercial downstream users can enjoy the fruits of open software, and everybody's contributed to a healthier attitude on open source.