This comes up a lot when people discuss anything related to npm modules. It's easy to simply dismiss these trivial one-line modules as "insanity" and move on, but there's actually plenty of good reasons as to why many prefer to work with multiple small modules in this manner. This GitHub comment by Sindre Sorhus (author of over 600 modules on npm) is my favorite writeup on the topic:
TL;DR: Small modules are easy to reason about, and encourage code reuse and sharing across the entire community. This allows these small modules to get a tremendous amount of real world testing under all sorts of use cases, which can uncover many corner cases that an alternative naive inlined solution would never have covered (until it shows up as a bug in production). The entire community benefits from the collective testing and improvements made to these modules.
I also wanted to add that widespread use of these small modules over inlining everything makes the new module-level tree-shaking algorithms (that have been gaining traction since the advent of ES6 modules) much more effective in reducing overall code size, which is an important consideration in production web applications.
Yes they are, in the same way that a book in which every page consists of a single word is easier to understand than one with more content per page.
By focusing on the small-scale complexity to such an extreme, you've managed to make the whole system much harder to understand, and understanding the big picture is vital to things like debugging and making systems which are efficient overall.
IMHO this hyperabstraction and hypermodularisation (I just made these terms up, but I think they should be used more) is a symptom of a community that has mainly abandoned real thought and replaced it with dogmatic cargo-cult adherence to "best practices" which they think will somehow magically make their software awesome if taken to extremes. It's easy to see how advice like "keep functions short" and "don't implement anything yourself" could lead to such absurdity when taken to their logical conclusions. The same mentality with "more OOP is better" is what led to Enterprise Java.
They really aren't. When I'm reading is-positive-integer(x) and wondering whether 0 counts as positive, I need to hunt down the definition of "positive" through two packages and as many files. And it gets worse if both your code and one of your dependencies require 'is-positive-integer', and I then also have to figure out which version each part of the code base is using.
If you had written (x > 0) I would have known immediately. Granted, it wouldn't be doing exactly the same thing as is-positive-integer(x), but how many calls to is-positive-integer are actually correct in all the corner cases that is-positive-integer covers?
And then there's the other problem with dependencies: you are trusting some unknown internet person to not push a minor version that breaks your build because you and Dr. is-positive-integer had different definitions of 'backwards compatibility'.
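For comparison, here is roughly what a strict check involves in modern JS. This is a hedged sketch, not the actual module's implementation (which also has to worry about older engines):

```javascript
// Sketch of a strict "is positive integer" check (not the real module's code).
// Rejects strings, floats, NaN, Infinity, and values beyond 2^53 - 1.
function isPositiveInteger(x) {
  return typeof x === 'number' && Number.isSafeInteger(x) && x > 0;
}
```

Under this definition 0 is not positive, '5' fails the typeof check, and 2^53 fails the safe-integer check, which is exactly the kind of corner case being argued about.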
You'd have a good point, if all of those tiny but probably useful modules were given the use you're describing.
Discoverability though is so poor that most of those modules are most likely just used by the author and the author's co-workers.
If a typical npm user writes hundreds of packages, how the hell am I supposed to make use of them, when I can't even find them? Npm's search is horrendous, and is far from useful when trying to get to "the best/most supported module that does X" (assuming that random programmers rely on popularity to make their choice, which in itself is another problem...).
If anything, looking at sindresorhus's activity feed (https://github.com/sindresorhus) perfectly supports the author's point. Maybe some people have so little to do that they can author or find a relevant package for every single ~10-line function they need in their code, and then spend countless commits bumping project versions and updating package.json files. I have no idea how they get any work done, though.
How is that superior to copypasting from a stackoverflow answer?
If it's a popular issue, lots of people have hit the same problem; many will be nice enough to add their edge cases and make the answer better, most will not. The same goes for contributing to a package.
With a package you would be able to update when someone adds an edge case, but it might break your existing code, and that edge case may be something that is not relevant to your system.
If you don't want to get too deep in the issue, you can copy paste from SO, just the same you can just add a package.
If you want to understand the problem, you can read the answers, comments, etc. With the package you rely on reading code, I don't know how well those small packages are documented but I wouldn't count on it.
The only arguments that stand are code reuse and testability. But that's code reuse at the cost of the complexity the dependencies add, which IMO is not worth more than the time it'd take you to copy and paste some code from SO. Testability is cool, but with an endless spiral of dependencies that quite often use one or more of the different (task|package|build) (managers|tools) the node ecosystem has, I find it hard to justify adding a dependency for something trivial.
The simple rebuttal to that is that modules which are collections of small functions are easy to reason about as well, and don't inflate the ratio of metadata management to useful functions nearly as much. Why have just an average (mean) function, when it makes sense to provide a median and mode as well? Even then, you might find that there's a bunch of extra math operations that are useful, and you might want to include some of those.
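As a sketch of what that might look like (names and grouping are hypothetical), a single small stats module instead of three one-function packages:

```javascript
// Hypothetical "stats" module grouping related one-liners together.
const mean = xs => xs.reduce((a, b) => a + b, 0) / xs.length;

const median = xs => {
  const s = [...xs].sort((a, b) => a - b); // sort a copy numerically
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
};

const mode = xs => {
  // count occurrences, then return the most frequent value (first wins on ties)
  const counts = new Map();
  for (const x of xs) counts.set(x, (counts.get(x) || 0) + 1);
  return [...counts.entries()].reduce((a, b) => (b[1] > a[1] ? b : a))[0];
};
```

One package.json, one changelog, one owner to vet, three related functions.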
Just sugar-coated Kool-Aid I'm hearing. Community benefits? First of all, I'm coding to get paid, and this recent madness proved that the JS ecosystem is semi-garbage. Back to the original question: are people really so unable to program that they need packages like left-pad or is-integer, which had their own dependencies? Before writing those cool toolchains (which would likely work on a specific machine with a specific setup, for all the real-world testing the community has done), can we at least pretend that we know the computer science basics?
They do NOT encourage code re-use, because the effort required to understand your need, refrain from writing code, and hunt down a module on npm far outweighs the effort to just write the code and stick it in your in-house utils library.
I think that there's a certain wishfulness bordering on naïveté to this pursuit. We tell ourselves that we are managing complexity by having small, well-tested units that are reused by an entire community. But software complexity continues to exist. We think we are mitigating the complexity of our own software. But the complexity really just shifts to the integration of components.
Anyone that has been around long enough has war stories about getting two relatively simple pieces of software working with each other. In my experience, integration problems are often the most difficult to deal with.
I'm not at all clear why this blog post is touted as evidence that the tiny modules approach is correct. I think it might be all the people after it congratulating him.
"It's all about containing complexity." - this completely ignores the complexity of maintaining dependencies. The more dependencies I have the more complicated my project is to maintain.
Dependencies are pieces of software managed by separate entities. They have bugs and need updates. It's hard to keep up to date.
When I update a piece of software I read the CHANGELOG, how am I expected to read the CHANGELOG for 1,000 packages?
Depending on a bigger package (handled by the same entities, who write one changelog, in the same form) is more straightforward.
I'm not saying this is wrong - but there's a balance here, and you must not ignore the complexity of increasing your number of dependencies. It does make things harder.
My problem with this, as an occasional JavaScript developer, is "discoverability" (as many others have mentioned). If I decide I need a left-pad function, and search on NPM, how do I choose which one is best? The one with the most downloads? Not always the best indicator of quality; perhaps it's just the oldest.
Not to mention the cognitive overhead of stopping programming, going to NPM, searching/finding/installing the module, then reading the documentation to understand its API. Isn't it simpler to `while (str.length < endLength) str = padChar + str;`? How can there be a bug in that "alternative naive inlined solution"? Either it works or it doesn't!
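Wrapped up as a self-contained function (parameter names are mine), the one-liner above is:

```javascript
// Minimal inline left-pad, per the snippet above; coerces input to a string.
function leftPad(str, endLength, padChar) {
  str = String(str);
  while (str.length < endLength) str = padChar + str;
  return str;
}
```

To be fair, there is one genuine gotcha even here: if padChar is more than one character, the result overshoots endLength. That is roughly the kind of edge case a shared module accumulates fixes for.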
I don't see how your linked comment brings more to the table than the basic arguments for code reuse.
But naturally, with any code reuse there's a benefit and a cost to each instance of internal or external reuse.
The benefits of external reuse include the ideal reliability you describe, as well as not having to create the code. The costs of external reuse include having your code tied not just to an external object, but also to the individuals and organizations creating that object.
I think that means that unless someone takes their hundreds of modules from the same person or organization and is capable of monitoring that person, they are incorporating a layer of risk into their code that they don't anticipate at all.
Percentage of module owners who you can't trust to not screw up their module: H
Risk of indirectly hosing a project with N module owners providing dependencies: 1-((1-H)^N)
Let's say H is very small, like 0.05% of module owners being the type who'd hose their own packages.
3 module owners: 0.15% chance your own project gets hosed
30 module owners: 1.49% chance your own project gets hosed
300 module owners: 13.93% chance your own project gets hosed
Keep in mind it's not just your dependencies, but your entire dependency chain. And if you think a module owner might hose some modules but not others, maybe H is actually the number of modules in which case 300 starts getting pretty attainable.
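The arithmetic above can be checked directly; a quick sketch:

```javascript
// Probability that at least one of N independent module owners (each with
// probability H of hosing their packages) breaks your build: 1 - (1 - H)^N.
const hoseRisk = (H, N) => 1 - Math.pow(1 - H, N);

const H = 0.0005; // the 0.05% from above
console.log((hoseRisk(H, 3) * 100).toFixed(2) + '%');   // 0.15%
console.log((hoseRisk(H, 30) * 100).toFixed(2) + '%');  // 1.49%
console.log((hoseRisk(H, 300) * 100).toFixed(2) + '%'); // 13.93%
```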
Upshot:
Not everyone is trustworthy enough to hang your project on. The more packages you include, the more risk you incur; and the more distinct module owners you include, even more so.
The micromodule ecosystem is wonderful for all the reasons described, but it's terrible for optimizing against dependency risk.
Takeaways:
Host your own packages. That makes you the module owner for the purposes of your dependency chain.
If you're not going to do that, don't use modules dynamically from module owners you don't trust directly with the success of your project.
I love collaborative ecosystems, but some people suck and some perfectly non-sucky people make sucky decisions, at least from your perspective. The ecosystem has to accommodate that. Trust is great...in moderation.
I agree with you, apart from the tree-shaking point (nice word, btw).
It's like npm shaking the Christmas tree and then saying "Have fun cleaning up the floor". Remember that npm is not like apt-get, where the packages are managed for you by an expert. In npm you have to manage the packages yourself! And where you can't have npm and build dependencies, like in production, maintenance now becomes much harder!
My problem is one of productivity. There's already a standard library, and if it's a language I've been using for a while, I probably remember most of it. I can pretty much go straight from my thought of what I want done to typing it out, much like typing English. If you force a 'cache miss' and force me out of my head and into a documentation search, well, that's going to have a significant effect on my performance. If the function is well-named, it has less of a cost in reading the code, but there's still a cost, because what if it's slightly different? I have a pretty good feel for the gotchas in much of the standard library of the languages I use. I have to stop and check what the gotchas of your function are.
Yes, at some point the complexity cost of gluing together the standard library functions to do something becomes greater than the lookup cost of finding a function that does what I want; but I am saying that adding more functions is not costless.
Small modules are also often the result of dealing with different JavaScript implementations over the years. I've recently seen a simpler version of left-pad that would not have worked on multiple Safari versions or on anything older than IE6.
The derision is unwarranted; it stems from a failure of critical thinking by otherwise smart people.
It's interesting to me that people find this convincing. I find it to be complete insanity. People need their libraries, but putting everything in tiny buckets is just not working. Why aren't people working on good utility libraries instead?
There's even some guy calling for a "micro-lodash". To me, as a Python engineer, lodash [1] is already a tiny utility library.
I guess it's also about the fact that JS is a pretty bad language. That you need a one-line `isArray` dependency to do `toString.call(arr) == '[object Array]'` is crazy.
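For context, the one-liner in question is the classic pre-ES5 idiom, which modern engines replace with the built-in Array.isArray:

```javascript
// Classic array check from before Array.isArray existed; works even for
// arrays created in another frame, where instanceof Array fails.
function isArray(x) {
  return Object.prototype.toString.call(x) === '[object Array]';
}
```

The cross-frame instanceof failure is precisely why this trick stuck around long enough to become a package.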
It reminds me of an apt proverb: missing the forest for the trees.
More modules are not necessarily a good thing. They may appear to get rid of complexity, but in reality you will have to face that complexity some level above, and in fact the sheer number of small modules will most probably add more complexity of its own.
The reasoning makes sense for small modules that might change in the future, but as he says himself, most of his modules are finished and will never change. That makes many arguments in his post moot and the modules should probably be snippets instead that are implemented directly.
Author of "is-positive-integer" here. I will admit the implementation is pretty funny, but I move all single-purpose utils out of projects into modules for a bunch of reasons. DRY is the most obvious one, but one that may be less obvious is better testing.
I move out modules so I can write really nice tests for them independent of the projects I am using them in. Also, I tend to write projects with 100% test coverage; breaking out utils allows me to test projects more easily and faster.
Also note, the implementation of this module changed a few times today. With it being open source and having the collaboration of other engineers, we ended up with a very performant version, and discovered interesting quirks about "safe integers" in JS.
- Breaking out is-positive-integer hasn't reduced the number of paths to test. You have not gained anything, you've added overhead.
- 100% test coverage is rarely a good thing. It is required for safety-critical areas like avionics. I can guarantee that your JS code is not making it into any safety-critical environment!
Just to pick at a nit with you, it's a little meaningless to say "100% test coverage", without specifying whether you're talking about line coverage, branch coverage, path coverage...
This is especially true for one-liner modules in js, where any test at all might let you claim 100% statement coverage, without accounting for branches or loops within method calls.
Actually, that's a good reason to use trivial functions like the one described. Hopefully the author has discovered all of the quirks in JavaScript that might affect this. It will likely be a lot more tested than any version I would write.
As someone who spends 80% of my time on the back end, I often get bit on the ass by JavaScript's quirks when I need to add some front end stuff.
It would be really, really great if this function was not in its own module, but was part of a larger library in which all such functions of related modules were captured, without the (cognitive and other) overhead of the separate packaging.
var average = require('average');
var result = average([2, 5, 0, 1, 25, 7, 3, 0, 0, 10]);
console.log('The average for all the values is:', result);
It's hard to not stare at that in complete disbelief; someone thought that it was worthwhile to create a package for determining the mean of an array of numbers.
You know what's worse? JavaScript numbers are all floating point, which means integers are only exact up to 53 bits. So you might think this library would try to address the issues that can cause, but nope, this is the average statement you'd write if you didn't know what a mantissa was and had never heard of big.js, bignumber.js, decimal.js, crunch.js or even strint (which represents integers as strings, because wtf not).
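To make that concrete: a naive mean (assumed here, since the package is essentially this) and the 53-bit limit it ignores:

```javascript
// Naive mean, roughly what a trivial "average" package amounts to.
const average = nums => nums.reduce((a, b) => a + b, 0) / nums.length;

// JS numbers are IEEE 754 doubles: integers are only exact up to 2^53 - 1.
console.log(Number.MAX_SAFE_INTEGER);               // 9007199254740991
console.log(9007199254740993 === 9007199254740992); // true: 2^53 + 1 rounds down
```

Sum a few values near that limit and the "average" quietly loses precision before the division even happens.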
Well a bunch of standard libraries have sum defined, right?
JavaScript has suffered from a lack of a standard library for a while. Having a small package like this means that (in theory) everyone is using the same bug free version of summing instead of writing their own.
Honestly having JS start building a standard library at this point would be wonderful.
Sounds like the author was aiming for something to put on his resume: e.g., "Author of 25 libraries on NPM, some with more than 500K downloads." etc...
This sounds like a symptom of an inadequate standard library. I do expect to be able to call "average" on a list of numbers without writing it myself, but I expect that to be part of the language not a 3rd party package.
I'm thinking that someone wanted to learn about building and publishing a package and the ecosystem so they made this computationally trivial thing as a practical exercise.
Pretty much every package management system gets cruft in it like this. Example: for a long time someone had uploaded a random Wordpress core install into Bower.
While it demonstrates the problem of npm lacking namespaces (such that the name "average" is wasted on such a trivial implementation)... it doesn't seem anyone was actually using that library.
I wanted to use a javascript tool that would make my life easier, and when I looked at the npm dependency tree it had 200+ dependencies in total.
If I used that javascript tool, I'd be trusting hundreds of strangers, lots of which had absolutely no clout in github (low number of stars, single contributor projects) with my stuff.
And not just them, I'd be trusting that no one steals their github credentials and commits something harmful (again, these projects are not very popular).
It doesn't help that npm doesn't (AFAIK) implement code signing for packages, which would at least let me manage whom I choose to trust.
In all the debate about this, why is the trust-dependency-fuck-show not getting more attention?
Every dependency you take is another degree of trust in someone else not getting compromised then suddenly finding all sorts of horribleness making it into your production environment.
This is more a reflection of how bad the JS language is than anything. Real programming languages have means of standardizing the most common UX and simple patterns into a standard library. Javascript is a consortium hell that never gets updated sufficiently, and has no good standard library, so NPM basically replaces the standard library with a thousand micropackages.
Also, it is a lot easier to get it wrong in JS. Is it null? Is it undefined? Is it a bird? Is it a plane? No, it's a string! (But sometimes a number.) Good programming languages make it easy to write an is-negative check, e.g. isNegative = (<0), where the < and the 0 implicitly make it take a Num and return a Bool, and this is type-checked at compile time.
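The JS half of that comparison, for illustration: the naive check happily coerces whatever it is given.

```javascript
// Naive negativity check in JS: no compile-time types, so coercion rules apply.
const isNegative = x => x < 0;

console.log(isNegative('-5'));      // true  - the string is coerced to -5
console.log(isNegative(null));      // false - null coerces to 0
console.log(isNegative(undefined)); // false - comparison with NaN is always false
```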
Yeah, there are a few shitty examples on npm. It's an open system and anyone can upload anything. The market speaks on how valuable those are. Cherry picking poor modules says nothing about the rest.
Plus, if you think that's too small, write your own broader module that does a bunch of stuff. If people find it valuable, they'll use it. If they find it more valuable than a bunch of smaller modules, you'll get 10,000 downloads and they'll get 10.
The module you roundly ridicule has had 86 downloads in the last month, 53 of which were today (at the time of this writing). I imagine most of those 53 were after you posted. So that's 33 downloads in a month, as compared to the express framework which has had 5,653,990 downloads in the last month.
The wailing and gnashing of teeth over this module is ridiculous.
DRY taken to the dogmatic extreme where everything is made up and the github stars are the only thing that matters.
This article touches on things that are wrong in the javascript culture. I always had this nagging feeling when working with NPM, this article brings it to light. For what it's worth I never felt this while writing Ruby, C# or Go.
It's the -culture- that needs to change here, not the tools.
The recursive folder structure in npm-modules was the first indication. At least Java had a single tree with com.bigco.division.application.framework.library.submodule.NIHObject.java
That recursive node_modules (or whatever it is called) was what made me hate this whole npm thing, especially because it is not centralized somewhere on my computer.
And that means the same files are repeated several times on my drive, just eating space.
Being a Java developer I don't understand why the approach was not more like maven.
It’s written in such a way that every time you call...
passAll(f1, f2, ..., fn)(args..)
... there are something like 5 + 2n attribute accesses, 5 + 3n function calls, 3 + n new functions created, as well as some packing and unpacking of arguments, not including the actual application of the functions to the arguments that we care about. That’s in addition to the several functions defined in the dependent submodules, which you only have to pay for constructing once.
[From my quick eyeball count. These numbers could be a bit off.]
I'm more disappointed that it doesn't short-circuit the operation at all. It applies all the functions, THEN it determines whether all of them passed. Even worse, it uses `every` (which does short-circuit) to determine that all the functions are, indeed, functions, but apparently the ability to use that function to determine whether every predicate passes was missed.
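A short-circuiting version is nearly a one-liner; a hypothetical sketch, not the published module's code:

```javascript
// Short-circuiting pass-all combinator: `every` stops at the first
// predicate that returns a falsy value, so later predicates never run.
const passAll = (...fns) => (...args) => fns.every(fn => fn(...args));
```

Usage would look like `passAll(isPositive, isInteger)(4)`; the second predicate is only evaluated if the first passes.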
Lol, nice, I wrote "pass-any" first. I then copied the code and replaced "or" w/ "and" to create "pass-all".
I will probably have to go back and change this now that I know about it. In general though, not gonna lie, I am not very concerned about micro performance optimizations.
I think it actually is not. That's from some years ago and my memory of it is fuzzy, but at that time it was surprisingly hard to check whether a variable is a positive integer – maybe it was a negative one though and that was harder? You'd think it is just checking whether it is an integer and bigger than 0, or just checking whether it is bigger than 0. And it is. But to get that code to work reliably, regardless of whether it gets a string or float or an undefined, with the JS type system of that time and in multiple browsers, even the crappy ones, that took some time. There was one specific edge case involved.
Not that it was impossible, but I still remember having to search for it and being astonished that that was necessary.
Yea. I don't think many argue against abstracting complexity into more easily understood orthogonal modules. But some of these modules aren't abstracting complexity. They are ludicrous. They are nonsense. They are a symptom of a very real problem with how JS is programmed, how people expect to program in JS, and the difficulty people have with decomposing problems.
So many people on this page have written about how these are well tested, performant, and correct modules. But, the modules aren't even correct in many cases, let alone providing incomplete coverage over edge cases or the slow performance and horrendous dependencies of many of the modules.
Formerly 9 dependent modules... but who cares maybe it was a homework assignment or a joke.
I don't use NPM so maybe it doesn't really matter aside from the level of abstraction being implemented being relatively ridiculous.
However, if my build system had to go out and grab build files for every X number of basic functions I need to use, grab Y number of dependencies for those functions, run X * Y number of tests for all those dependent packages, AND then also fell apart if someone threw a tantrum and removed any one of those packages basically shutting me down for a day... then I'd question every single thing about my decisions to use that technology.
[Quick Edit] Basically I'm saying "Get off my lawn ya kids!"
This implementation reads almost as parody, although I don't suspect that the author meant it as such. If you didn't have a sense of what lurked behind the abstraction, it would be kinda beautiful.
I can't decide what's crazier to me: that such a package exists, or that JavaScript is such a ridiculously bad language that an "is positive integer" function is non-trivial to write.
I end up spending most of my working life working on other people's code; rather than building new features, I end up debugging and fixing bad code. (I actually rather like it.)
The majority of code I have ever seen is awful (20 years across large and small companies), but then, I am hired to fix awful code, so I am skewed. The number of times I have seen people implement something simple in a convoluted, error-prone way is unbelievable.
I know this seems ridiculous but when you see time and time again how people fail to do the simplest things it seems like a good idea.
I had several arguments about JS and I was shocked how many developers consider this platform great. I am not sure why these extremely bad practices are defended by devs, what are they getting out of it? I am only hoping we are moving towards more sensible development environment, there are many of them with better best practices and more sane libraries.
> If npm was invoked with root privileges, then it will change the uid to the user account or uid specified by the user config, which defaults to nobody. Set the unsafe-perm flag to run scripts with root privileges.
I can't stop laughing. I think you have to admire the elegance of the concept as performance art though: this is cheap insanity. In fact, I've got to hand it to them, this is the most fun I've had looking at something programming related in a while. I recall the opening lines of SICP,
> I think that it's extraordinarily important that we in computer science keep fun in computing. When it started out, it was an awful lot of fun. Of course, the paying customers got shafted every now and then, and after a while we began to take their complaints seriously. We began to feel as if we really were responsible for the successful, error-free perfect use of these machines. I don't think we are. I think we're responsible for stretching them, setting them off in new directions, and keeping fun in the house. I hope the field of computer science never loses its sense of fun. Above all, I hope we don't become missionaries. Don't feel as if you're Bible salesmen. The world has too many of those already. What you know about computing other people will learn. Don't feel as if the key to successful computing is only in your hands. What's in your hands, I think and hope, is intelligence: the ability to see the machine as more than when you were first led up to it, that you can make it more.
Quoted in The Structure and Interpretation of Computer Programs by Hal Abelson, Gerald Jay Sussman and Julie Sussman (McGraw-Hill, 2nd edition, 1996).
The same could be said for your post. When people ask why commenting on message boards is terrible, show them this.
If you don't want to write modules this way, don't. Nothing about javascript requires that you even read articles about modules that you don't want to use. Or read articles and then follow up by posting on message boards about articles about modules you aren't going to use.
Bang some code out instead. Your opinion of javascript is about as valuable as the opinion of the person who wrote the modules.
A good micro-module removes complexity. It has one simple purpose, is tested, and you can read the code yourself in less than 30 seconds to know what's happening.
Take left-pad, for example. Super simple function, 1 minute to write, right? Yes.
The fact of the matter is: every line of code I write myself is a commitment: more to keep in mind, more to test, more to worry about.
If I can read left-pad's code in 30 seconds, know it's more likely to handle edge cases, and not have to write it myself, I'm happy.
The fault in this left-pad drama is not "people using micro-modules". The fault is in npm itself: all of this drama happened only because npm is mutable. We should focus on fixing that.
> every line of code I write myself is a commitment
That's true. However:
Every dependency you add to your project is also a commitment.
When you add a dependency, you're committing to deal with the fallout if the library you're pulling in gets stale, or gets taken over by an incompetent dev, or conflicts with something else you're using, or just plain disappears. If you add a dependency for just a few lines of code, you're making a way bigger commitment than if you'd just copy/pasted the code and maintained it yourself. That's why so many people are shaking our heads at a 17-line dependency. It's way more risk than it's worth. If you need a better stdlib for your language (some of us write PHP and feel your pain) then find one library that fills in the gaps and use that.
> If you add a dependency for just a few lines of code, you're making a way bigger commitment than if you'd just copy/pasted the code and maintained it yourself.
This is a problem with NPM, not with dependencies. With a package management system that has stable builds and lockfiles, you pin to a specific version and there is no way upstream can cause problems. A lockfile is a pure win over vendoring.
Maintaining a dependency on a library should be much less effort than maintaining 17 lines of code. If it isn't that's a deficiency in your dependency infrastructure.
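For illustration (the package name is real, the version number here is hypothetical): an exact version in package.json, with no `^` or `~` range, means a minor release upstream can't silently enter your build:

```json
{
  "dependencies": {
    "left-pad": "1.1.3"
  }
}
```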
I wouldn't mind so much if these micro-modules were written in a style of thoroughness; heavily commented, heavily documented with pre-conditions, post-conditions and all imaginable inputs and outputs explicitly anticipated and formally reasoned about. I don't mind over-engineering when it comes to quality assurance.
Looking at that left-pad module though - no comments, abbreviated variable names, no documentation except a readme listing the minimally intended usage examples. This is not good enough, in my opinion, to upload to a public repository with the objective that other people will use it. It is indistinguishable from something one could throw up in a couple of minutes; I certainly have no reason to believe that the future evolution of this code will conform to any "expectation" or honour any "commitment" that I might have hopefully ascribed to it.
[EDIT: I've just noticed that there are a handful of tests as well. I wouldn't exactly call it "well tested", as said elsewhere in this thread, but it's still more than I gave it credit for. Hopefully my general point still stands.]
The benefits of reusing other people's code, to a code reuser, are supposed to be something like:
(a) It'll increase the quality of my program to reuse this code - the writer already hardened and polished this function to a greater extent than I would be bothered to do myself if I tried right now
(b) It'll save me time to reuse this code - with the support of appropriate documentation, I shouldn't need to read the code myself, yet still be able to use it correctly and safely.
Neither of those things are true for this module. It's not that the module is small, it's that it is bad.
(True that npm's mutability is a problem too - this is just a side-track.)
Completely agree here - the problem isn't micro-modules. It's partly just a lacking standard library for javascript and largely just exposing issues in npm that the community was pretty ignorant of until just now.
The whole "omg, it's a whole package for just ten lines of code" complaint is just elitism. Given the number of downloads on things like left-pad, it's clearly useful code.
Agreed as well. In fact, I would posit that this wasn't even really a problem until npm@3 came out and made installing dependencies far, far slower. Yet it was necessary; a standard project using babel + webpack installs nearly 300MB (!!!) of dependencies under npm@2, and about 120MB under npm@3. Both are unacceptable, but at least npm3 helps.
1) JS is unique in that it is delivered over the wire, so there is a benefit in having micro-modules instead of a bigger "string helpers" module. Things like webpack are changing that now (you can require lodash, and use only lodash.padStart).
2) JS's "standard" library is so small because it's the intersection of all of the browser implementations of JS dating as far back as you care to support. As pointed out in a sibling comment, a proposal for padLeft (since renamed padStart) is working its way into the standard. But we'll still need left-pad for years after it's adopted.
Uh, no, left padding is NOT built into JavaScript. The proposal to add `String.prototype.padStart()` (originally named `padLeft`) was only just accepted, and it didn't land until ECMAScript 2017.
JavaScript has a very minimal standard library; it's pretty asinine of you to compare it to C or any other language with a pretty extensive standard library.
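(For reference, the method eventually shipped natively, renamed `padStart`, in ECMAScript 2017:)

```javascript
// Native string padding (ES2017); the proposal was renamed from
// padLeft to padStart before shipping.
const padded = '5'.padStart(3, '0'); // '005' — pad with '0' up to length 3
const spaced = 'hi'.padStart(4);     // '  hi' — default pad is a space
```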
Wow, I feel like I could have written this. Back when I used Python, I had a folder full of functions I would copy-paste between my projects. (And maybe some of the projects contained unit tests for the functions. I didn't always keep those tests in sync.) Updating them was a pain because inevitably each one would get slightly modified over time in each project separately. Eventually, I collected all of them into a bundle of completely unrelated utility functions in a folder on my computer somewhere, and I would import the folder with an absolute path. Sharing the code I wrote was a pain because of how much it referenced files on my local computer outside the project. I never considered publishing my utility module because all of the stuff was completely unrelated; I'd rather publish nothing than a horrifying random amalgam when no single project of mine was even related to all of the subject matter present in it.
With npm and the popularity of small modules, it was obvious that I could just cheaply publish each of my utility functions as separate modules. Some of them are about a few dozen lines, but have hundreds of lines of tests and have had significant bugfixes that I am very happy that I haven't had to manually port to dozens of projects. I don't miss copy-pasting code across projects, no matter how many claim I've "forgotten how to program".
What I see is that a module has a non-zero overhead in complexity in itself. That is, ten 10-line modules and twenty 5-line modules do not yield the same complexity. The modules themselves have a complexity overhead associated with them, and submodules have their own complexity overhead, albeit smaller than that of first-party modules. That complexity is easily seen from the recent situation of unpublishing modules, which resulted in modules multiple steps removed having problems building.
So, when I read "It doesn't matter if the module is one line or hundreds." I call bullshit. There is overhead; it's usually fairly small (though it may even begin to rival the gains from using a module at that level), but that small amount adds up. Once you've had to deal with a dependency graph that's 10 levels deep and contains hundreds or thousands of modules, that small extra complexity imposed by each module is no longer small in total, and comes at a real cost, as we've just seen.
Other module ecosystems have gone through some of the same problems. There was a movement in Perl/CPAN a few years back to supply smaller, more tightly focused modules, to combat the sprawling dependencies that were popping up. The module names were generally suffixed with "Tiny"[1] and the goals were multiple:
- Where possible, clean up APIs where consensus had generally been built over what the most convenient usage idioms were.
- Try to eliminate or reduce non-core dependencies where possible.
- Try to keep the modules themselves and their scope fairly small.
- Remove features in comparison to the "everything included" competitor modules.
This has yielded quite a few very useful and strong modules that are commonly included in any project. They aren't always tiny, but they attack their problem space efficiently and concisely. Even so, I'm not sure there's ever been a module that's a single line of code (or less than 10, given the required statements to namespace, etc.), as the point is to serve a problem, not an action.
It doesn't handle edge cases, it doesn't perform well and it isn't well tested. There is also no documentation. Obviously 30 seconds wasn't enough for you to verify anything at all about this module (namely that it's complete garbage).
And just because some random guy didn't get something as trivial as this right the first time, doesn't mean nobody else can. Also the de facto standard library lodash already has padding utilities, made by people who have a proven track record.
I don't agree with the explosion of micro-modules. There's a reason the vast majority of languages don't have them, at least not at function level.
IMO in the Javascript world they're only there in order to minimize script size for front end work. See lodash & lodash-something1 / lodash-something2 / ..., where there's an option of using the whole module or just including 1-function long scripts, precisely to avoid the script size issue.
Is there a solution for this? I know that the Google Closure compiler can remove dead code, ergo making inclusion of large modules less costly in terms of code size. Am I missing some ES6 feature that also helps with this?
You're just trading in the complexity of the code you'd have to write for the delayed complexity of dealing with dependency issues down the line. It's a waste of a trade off for tiny things like this.
I agree with the points you've made, but I would also posit that adding a dependency using this mutable package manager is making a commitment to maintain the integrity of that dependency, which is arguably more work than maintaining the handful of lines of code.
Nobody has forgotten. These people never knew to begin with.
NPM/JS has subsumed the class of programmer who would previously have felt at home inside PHP's batteries-included ecosystem. Before that, a similar set of devs would have felt at home with Visual Basic. Seriously, go visit the comments section on archived copies of the PHP documentation. You'll find code of a similar nature. If PHP had had a module system 10+ years ago you would have seen this phenomenon then. Instead it was copy and paste.
This isn't elitism, it's just the way it is. The cost of a low barrier to entry into a software ecosystem is taking in those who don't yet have software engineering experience.
Nobody should be surprised that NPM, which I believe has more packages than any other platform, is 90% garbage. There are only so many problems to solve and so few who can solve them well, in any language. Put 100 programmers in a room, each with 10 years experience, and you'll be lucky to find 1 who has written a good library. Writing libraries is really hard.
This is the answer, 100%. All it takes to publish an npm package is the command npm publish, and you're done. So of course it is no surprise that there are tons upon tons of seemingly useless or tiny projects (gotta pad out that github profile for those recruiters!), or that there are then plenty of packages that use them.
Add into that the fact that:
1) Javascript has a huge number of developers, and is often an entry-level language
2) The developers on this thread (I like to think of HN as at least slightly above average) are divided whether having small packages / large dependencies trees is a good or bad thing
3) Dependency management is something that matters mostly to long term (professional / enterprise / etc) applications, which is a subset of programming, and I wonder if not a minority subset of node.js projects in general.
4) If I'm writing a throwaway app or proof of concept, and therefore don't care about dependency maintenance, using as many dependencies as possible is a major time saver,
and of course you get this situation, and it seems to make perfect sense.
Personally, I wish there were an NPM Stable, where packages underwent much more scrutiny and security vetting in order to get in, but nonetheless, nothing I've read so far about npm really scares me given the above context. If you are a dev creating an unmanageable dependency tree for your enterprise app, you're a shitty dev. That doesn't necessarily mean that NPM is wrong for being so open in allowing others to publish their packages, or that smaller / more worthless packages shouldn't be allowed to be published.
That said, I would really like to hear a response to this post, as I have limited experience with different package management systems.
The huge difference is that PHP's package manager supports namespaces and dependencies ARE FLAT. You cannot import 2 versions of the same package under the same namespace. This:
1/ forces package authors to write stable libraries
2/ forces dependencies to narrow the versions of their dependencies
3/ prevents name squatting to some extent. You cannot have a package named "forms" and then sell the name for real money, as seen on NPM; your package needs to be "namespace"/"name". NPM made a huge mistake with its RubyGems-like global namespace, and it explains half the problems it is having today.
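(For illustration, this is what the mandatory "vendor/name" scheme looks like in a Composer require block — the package name here is hypothetical:)

```json
{
  "require": {
    "acme/forms": "^2.0"
  }
}
```

There is no bare "forms" to squat on Packagist; the acme/ vendor prefix is part of every package name.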
For a post that is claiming to not be elitist, it reads pretty elitist.
Can you expand on how to identify the class of programmers you're referring to? Are they the type that copy / paste code directly from StackOverflow? They lack a classical computer science education? They haven't worked on a large, enterprise-grade project?
From what I've seen, there's one division between programmers that's hard to overcome. Some see programming as a tool to get certain results for a job. Some of them are bad, some of them are lazy, but most of them are good enough to get to their objective.
Others see programming more as an art. They take care to make the code not only efficient but also elegant. They'll read up on new and interesting algorithms and incorporate them in novel ways. They might often be behind deadlines, but when they are, they create things like GNU Hurd that inspire a lot of interest and lead to interesting results, maybe even teach people a few things. Their code is interesting to read. They tend to write the libraries that the first group uses.
Both groups contribute a lot, but it's not easy to get them to understand that about each other.
Comparing NPM to PECL/PEAR doesn't make much sense when talking about PHP developers. With PECL, the overhead of building a module in C is way too high to make it viable for micro packages. And PEAR didn't just accept any random stuff; they were shooting for one-solution-fits-all libraries, not the tons of user-defined micro libraries that ecosystems like NPM encourage.
Compare NPM to Composer/Packagist and you get a better comparison. I've personally seen only very few micro packages on Packagist; thankfully this never seemed to gain traction in the PHP world.
Going down the "lots of tiny modules" route is about these things:
a) No standard lib in JS
b) JS is delivered over the internet to web pages in a time-sensitive manner ... so we don't want to bundle huge "do everything" libs. Sometimes it's convenient to just grab a tiny module that does one thing well. There isn't the same restriction on any other platform.
c) Npm makes it really easy to publish/consume modules
d) And because of c) the community is going "all in" with the approach. It's a sort of experiment. I think that's cool ... if the benefits can be reaped while the pitfalls are understood and avoided, then JS development will be in an interesting and unique place. Problems like today's can help because they highlight the issues, and the community can optimise to avoid them.
Everyone likes to bash the JS community around, we know that. And this sort of snafu gives a good opportunity. But there are many JS developers working happily every day with their lots of tiny modules and being hugely productive. These are diverse people from varied technical backgrounds getting stuff done. We're investigating an approach and seeing how far we can take it.
We don't use tiny modules because we're lazy or can't program, we use them because we're interested in a grand experiment of distributing coding effort across the community.
I can't necessarily defend some of the micro modules being cited as ridiculous in this thread, but you can't judge an entire approach by the most extreme examples.
I think b) is true only because JavaScript tooling cannot perform dead code elimination. Other languages have big grab-bag utility libraries like lodash that don't hinder performance because a linker or runtime can avoid loading unused portions.
Note for b): If you include libraries such as jQuery on your website via a CDN, I believe browsers will be able to use the cached version even if they have never visited your website before (given that they've cached this version from the same CDN before).
I don't see anything wrong with using a pre-made left pad function. Why waste time and lines of code implementing something so trivial when there is already a solution available?
However, I agree it is ridiculous to have a dedicated module for that one function. For most nontrivial projects I just include lodash, which contains tons and tons of handy utility functions that save time and provide efficient, fast implementations of solutions for common tasks.
I think the article's thesis is essentially that every dependency your project pulls in -- which includes all the dependencies your dependencies pull in -- is a point of potential failure. I understand the "don't re-invent the wheel" defense, but the Node/JavaScript ecosystem tacitly encourages its users to build vehicles by chaining together dozens of pre-made wheels, all of which depend on more wheels, and each and every one of those wheels has a small but non-zero chance of exploding the next time you type "npm update."
(And, y'know, maybe it's because I'm not a JS programmer, but the notion of looking for a module to implement a string padding function would never have even occurred to me.)
The problem is not that; the problem is depending on unreleased versions, instead of simply depending on the version that existed when you wrote your code.
A Git-submodule-like approach would be much better.
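(Concretely, "depending on the version that existed when you wrote your code" is the difference between an exact pin and the range npm writes by default — the version numbers here are made up:)

```json
{
  "dependencies": {
    "left-pad": "1.0.1",
    "some-lib": "^2.3.0"
  }
}
```

"1.0.1" installs exactly that version forever; "^2.3.0" lets npm install any 2.x release at or above 2.3.0, including versions published after you wrote your code.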
> Why waste time and lines of code implementing something so trivial when there is already a solution available?
Because it's so trivial? I can't wrap my head around why this is an argument in the first place. It makes no sense to bring in a module from a third party adding yet another dependency and potential point of failure when reimplementing it yourself literally takes as long as it takes to find the module, add it to package.json and run npm install.
People should be trying to limit dependencies where possible. Reproducible builds are really important, and if it costs you almost no time, you should have the code in your code base IMO.
People taking the DRY principle to the most extreme degree always makes for the worst code to debug and maintain.
This entire comment thread is such a breath of fresh air. I was beginning to think that I was that guy who was crazy for thinking that all of the people doing this were crazy. This thread is like my new support group.
> It makes no sense to bring in a module from a third party adding yet another dependency and potential point of failure when reimplementing it yourself literally takes as long as it takes to find the module, add it to package.json and run npm install.
Even if it does take the same amount of time (which it shouldn't), a 1-line call to a standard module imposes less of a future maintenance burden than 14 lines of custom code.
> People should be trying to limit dependencies where possible. Reproducible builds are really important, and if it costs you almost no time, you should have the code in your code base IMO.
That's a non sequitur. Reproducible builds are important, but unless you write code with 0 external dependencies you already have a system in place for handling library dependencies in a reproducible way. So why not use it?
> People taking the DRY principle to the most extreme degree always makes for the worst code to debug and maintain.
> However, I agree it is ridiculous to have a dedicated module for that one function. For most nontrivial projects I just include lodash, which contains tons and tons of handy utility functions that save time and provide efficient, fast implementations of solutions for common tasks.
I think that was largely the OP's point tbh. Using something like lodash [a utility library] is fine while using a module [for a single function] is not.
It might have gotten lost in the ranting from on high but I don't think the author truly meant more than that.
Some of the libraries people mentioned broke yesterday already have lodash as a dependency; I have no idea why they wouldn't have just been using it...
Someone mentioned below that lodash had some breaking changes related to the padding functions a couple times, which could be a totally valid reason to avoid using those. I was under the impression that the lodash API was more stable than to have that kind of thing happening.
> I don't see anything wrong with using a pre-made left pad function. Why waste time and lines of code implementing something so trivial when there is already a solution available?
I'll tell you why.
The least important one is that downloading such a trivial module wastes bandwidth and resources in general (now multiply this by several hundred, because of the dependency fractal JS sloshes around in). I would also spend much more time searching for such a module than I would implementing the damn function.
More important is that you give up control over any and every bug you could introduce in such a trivial function or module. You don't make it less probable to have those bugs (because battle-tested package! except, not so much in JavaScript, or Ruby, for that matter), you just make it much harder to fix them.
And then, dependencies have their own cost later. You actually need a longer project, not a throw-away one, to see this cost. It manifests in much slower bug fixing (make a fix, find the author or maintainer, send him/her an e-mail with the fix, wait for an upstream release, vs. make a fix and commit it), it manifests when upstream unexpectedly introduces a bug (especially between you making a change and you running `npm install` on a production installation), it manifests when upstream does anything weird to the module, and it manifests in many, many other subtle and annoying ways.
> You don't make it less probable to have those bugs (because battle-tested package! except, not so much in JavaScript, or Ruby, for that matter)
Battle-tested still applies - if you have that many people using a line of code they're more likely to find any bugs. (Formal proof is better than any amount of testing, but no mainstream language requires formal proof on libraries yet)
> And then, dependencies have their own cost later. You actually need a longer project, not a throw-away one, to see this cost. It manifests in much slower bug fixing (make a fix, find the author or maintainer, send him/her an e-mail with the fix, wait for an upstream release, vs. make a fix and commit it), it manifests when upstream unexpectedly introduces a bug (especially between you making a change and you running `npm install` on a production installation), it manifests when upstream does anything weird to the module, and it manifests in many, many other subtle and annoying ways.
Large monolithic dependencies have this kind of problem - "we upgraded rails to fix our string padding bug and now database transactions are broken". But atomised dependencies like this avoid that kind of problem, since you can update (or not) each one independently. Regarding fixing upstream bugs, you need a good process around this in any case (unless you're writing with no dependencies at all).
Finding this module on NPM or npmsearch.com is pretty trivial compared to ensuring you implement this in a way that catches every edge case.
> It manifests in much slower bug fixing
I don't buy this at all, because I've done it myself many times. If you're waiting on a PR from the original repo owner to fix a Production bug, you're doing it wrong. It's trivial to copy the dependency out of node_modules and into your src, and then fix the bug yourself. Then when the owner accepts your PR, swap it back in. I don't understand the problem here.
I agree that Lodash would be a better choice because it seems like a well maintained project. There could be two counter args, in theory:
- if the programmer uses other functions included in Lodash his code will have a single larger point of failure. For example, if Lodash is unpublished (intentionally as in this case, or unintentionally) then the programmer will have a lot more work to redo.
- Lodash introduces a lot of code, while the programmer only needs one of its functions to pad a string.
Using a library like lodash makes a lot more sense once you use a module bundler that allows tree shaking (like Rollup or Webpack 2.0) along with the ES6 module syntax. Heck, even if you're just using babel with Browserify or Webpack 1.x, you can use babel-plugin-lodash [0] so it'll update your imports and you only pull in what you need.
I think it speaks to just how lacking the baseline Javascript standard library is. The libraries that come with node help, but all of this stuff seems like it should be built-in, or at least available in some sort of prelude-like standard addon library. The lack of either leads to all these (apparently ephemeral) dependencies for really simple functions like these.
That said, I work with Java, Clojure and Python mostly so I may be more used to having a huge standard library to lean on than is typical.
So many people use lodash as a drop-in standard addon library that I'm surprised people aren't just using the padding functions that are right in there... Some of the packages that broke yesterday even have lodash included as dependencies already!
Looking at the changelog, there have been more than 70 versions of Lodash in less than four years. The first was in April 2012. [1]
`_.padLeft` does not exist. It was added as part of version 3.0.0 on January 26 of last year, and renamed to `_.padStart` in version 4.0 on January 12 of this year.
So in less than a year "padLeft" came and went, because not all strings start on the left, and so "left" doesn't actually mean "start". Even worse, the 4.0 documentation does not mention that `_.padStart` is a rename of `_.padLeft`. It's hard to grok what cannot be grepped.
Why blame someone for depending on padleft in a world where libraries swap out abstractions in less than a year? Breaking changes are bad for other people. Semantic versioning doesn't change that.
What if lodash itself was unpublished?
I'm having a hard time drawing a line here, obviously a 10 line function is too far on the bad side of lazy, but I can't tell what is an acceptable dependency.
Probably they do use it when they need left padding, but they also have a dependency on another package whose author thought "lodash is too big when all I need is left padding", so we end up in this situation.
This seems like the right answer to me. It's not that we forgot how to program, it's that Javascript forgot a stdlib. You could easily write your own left-pad function in any language, but a stdlib (or this module) gives you a standard way to reference it, so you don't have to look up what you named it or which order the args go in.
Agreed the JavaScript standard library is poor and instead of addressing it they've mostly just added syntax changes to ECMAScript 6 and 7. It's incredibly disappointing.
For instance I added a utility to my own library (msngr.js) so I could make HTTP calls that work in node and the browser because even the fetch API isn't universal for some insane reason.
I think in some ways a good standard library is a measure of programming language maturity. I remember when C++ had a lot of these problems back before you had the STL etc. In the early 90's it was a dog's breakfast.
We have a large internal C++ app at my work of that vintage (~1992); it uses its own proprietary super-library (called tools.h++) which is just different enough from how the C++ standard evolved that it's not a simple task to migrate our codebase. So now every time we change hardware platforms (it has happened a few times in the last 30 years) we have to source a new version of this tools.h++ library as well.
I find it amusing Javascript hasn't learnt from this.
Usually, dependency hell doesn't bite you, until it does. Try to rebuild that thousand-dependencies app in three years from now and you'll see ;-)
I recently had to rebuild a large RoR app from circa 2011 and it took me longer to solve dependencies issues than to familiarise myself with the code base.
Excessive dependencies are a huge anti-pattern and, in our respective developers communities, we should try to circulate the idea that, while it's silly to reinvent the wheel, it's even worse to add unnecessary dependencies.
> Try to rebuild that thousand-dependencies app in three years from now and you'll see ;-)
Let's be honest though, in the current trendy javascript ecosystem these people will already be two or three jobs away before the consequences of their decisions become obvious. Most of the stuff built with this is basically disposable.
I never can believe how often frontend developers talk about "you're just going to rebuild it all in 2 years" anyway. I guess it's a good way to keep yourself employed.
The gemfile.lock must have been "gitignored" at some point, because it had much older packages than the ones in Gemfile. Background: all we had was a git repo and did not have access to any "living" installation.
> Try to rebuild that thousand-dependencies app in three years from now and you'll see ;-)
This is your fault for expecting free resources to remain free forever. If you care about build reproduction, dedicate resources to maintain a mirror for your dependencies. These are trivial to setup for any module system worth mentioning (and trivial to write if your module system is so new or esoteric that one wasn't already written for you). If you don't want to do this, you have no place to complain when your free resource disappears in the future.
I agree. But I find two problems with your proposal:
1- Maintaining a mirror of dependencies can be a non-trivial overhead. In this app that I was working on, the previous devs had forked some gems on github, and then added that specific github repo to the requirements. But they did not do it for every dependency; probably they did not have the time/resources to do that.
2- As a corollary to the above, sometimes the problem is not the package itself but compatibility among packages. E.g. package A requires version <= 2.5 of package B, but package C requires version >= 2.8 of package B. Now I hear you asking "then how did it compile in the first place?" Probably the requirement was for package A v2.9 and package C's latest version, so while A was frozen, C got updated. This kind of problem is not solved by forking on GitHub, unless you maintain a different fork of each library for each of your projects, but that's even more problematic than maintaining the dependencies themselves.
P.S. At least for once, it wasn't "my fault", I didn't build that app LOL ;-)
There's more to dependency hell than "oops, the package disappeared." Try updating one of those dependencies because of a security fix, and finding that it now depends on Gizmo 7.0 when one of your other dependencies requires Gizmo < 6.0.
So, in the context of this discussion... you should make use of micro-modules to reduce code duplication, avoid defects, etc. However, don't expect those micro-modules to be maintained or available in the future; so you need to set up your own package cache to be maintained in perpetuity.
Or, you can implement the functionality yourself (or copy/paste if the license allows) and avoid the hassle.
I've been in the same situation as OP many times (although in most cases I've been brought in fix someone else's code).
In the Ruby ecosystem, library authors didn't really start caring about semantic versioning and backwards compatibility until a few years ago. Even finding a changelog circa 2011 was a godsend.
I think this was mainly caused by the language itself not caring about those either. 10 years ago upgrading between patch releases of Ruby (MRI) was likely to break something.
At least this is one thing JavaScript seems to do better.
I can't speak for him, but upgrading really old Rails apps can get complicated very quickly. Especially when you're going across multiple major versions and have to deal with significant changes in Rails behavior and broken gems. "Rebuild" might not be the most accurate way to describe the slow, steady incremental approach you're forced to take (you aren't redoing huge swaths of your domain logic, for instance), but it gets the gist across.
Long story... we had the git repo but lost access to any working installation. I had to rebuild a dev vagrant VM first, and later on a production-ish setup on a server.
I find large dependencies like RoR itself cause a lot more dependency hell than zillions of small dependencies like this one. What kind of dependency hell could possibly happen for a module like this?
I wanted to write this post after the left-pad debacle but I've been beaten to it.
I think we got to this state because everyone was optimizing js code for load time-- include only what you need, use closure compiler when it matters, etc. For front end development, this makes perfect sense.
Somewhere along the line, front-end developers forgot about Closure Compiler, decided lodash was too big, and decided to do manual tree shaking by breaking code into modules. The close contact between nodejs and front-end javascript resulted in this silly idea transiting out of front-end land and into back-end land.
Long time developers easily recognize the stupidity of this, but since they don't typically work in nodejs projects they weren't around to prevent it from happening.
New developers: listen to your elders. Don't get all defensive about how this promised land of function-as-a-module is hyper-efficient and the be-all end-all of programming efficiency. It's not. Oftentimes you already know you're handling a string, you don't need to vary the character that you're using for padding, and you know how many characters to pad. Write a for loop; it's easy.
Note that this is exactly the sort of question I ask in coding interviews: I expect a candidate to demonstrate their ability to solve a simple problem in a simple manner; I'm not going to ask for a binary search. Separately, I'll ask a candidate to break down a bigger problem into smaller problems. In my experience, a good programmer is someone who finds simple solutions to complex problems.
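(The "write a for loop" version for the common case — a sketch, assuming space padding to a known width:)

```javascript
// Inline left padding with a plain for loop: no dependency needed
// when you control the inputs (string in, spaces, known width).
function pad(s, width) {
  let out = String(s);
  for (let i = out.length; i < width; i++) {
    out = ' ' + out;
  }
  return out;
}

pad('7', 3); // '  7'
```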
Note: rails is similarly pushing back against developers that have too many dependencies:
They've already taken it to an entirely different level of insanity.
https://github.com/sindresorhus/ama/issues/10#issuecomment-1...
TL;DR: Small modules are easy to reason about, and encourage code reuse and sharing across the entire community. This allows these small modules to get a tremendous amount of real world testing under all sorts of use cases, which can uncover many corner cases that an alternative naive inlined solution would never have covered (until it shows up as a bug in production). The entire community benefits from the collective testing and improvements made to these modules.
I also wanted to add that widespread use of these small modules over inlining everything makes the new module-level tree-shaking algorithms (that have been gaining traction since the advent of ES6 modules) much more effective in reducing overall code size, which is an important consideration in production web applications.
Yes they are, in the same way that a book in which every page consists of a single word is easier to understand than one with more content per page.
By focusing on the small-scale complexity to such an extreme, you've managed to make the whole system much harder to understand, and understanding the big picture is vital to things like debugging and making systems which are efficient overall.
IMHO this hyperabstraction and hypermodularisation (I just made these terms up, but I think they should be used more) is a symptom of a community that has mainly abandoned real thought and replaced it with dogmatic cargo-cult adherence to "best practices" which they think will somehow magically make their software awesome if taken to extremes. It's easy to see how advice like "keep functions short" and "don't implement anything yourself" could lead to such absurdity when taken to their logical conclusions. The same mentality with "more OOP is better" is what led to Enterprise Java.
Related article that explains this phenomenon in more detail: http://countercomplex.blogspot.ca/2014/08/the-resource-leak-...
If you had written (x > 0), I would have understood it immediately. It wouldn't do exactly the same thing as is-positive-integer(x), but how many calls to is-positive-integer actually rely on all the corner cases that is-positive-integer covers?
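For illustration, here's a sketch (not the actual npm module) of the checks involved, using the built-in `Number.isInteger`, and where it diverges from a bare `(x > 0)`:

```javascript
// Sketch of an "is positive integer" check and its corner cases.
function isPositiveInteger(x) {
  return Number.isInteger(x) && x > 0;
}

isPositiveInteger(3);        // true
isPositiveInteger(0);        // false
isPositiveInteger(3.5);      // false (not an integer)
isPositiveInteger('3');      // false (Number.isInteger does not coerce)
isPositiveInteger(Infinity); // false

// A bare (x > 0) disagrees on some of these:
'3' > 0;                     // true (the string gets coerced)
Infinity > 0;                // true
```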
And then there's the other problem with dependencies: you are trusting some unknown internet person to not push a minor version that breaks your build because you and Dr. is-positive-integer had different definitions of 'backwards compatibility'.
How does that even remotely apply to the "is positive integer" test, and even more so to "is positive" and "is integer"?
What's next? is-bigger-than-5? word-starts-with-capital-letter? add-1-to-a-number?
Discoverability, though, is so poor that most of those modules are most likely used only by the author and the author's co-workers.
If a typical npm user writes hundreds of packages, how the hell am I supposed to make use of them, when I can't even find them? Npm's search is horrendous, and is far from useful when trying to get to "the best/most supported module that does X" (assuming that random programmers rely on popularity to make their choice, which in itself is another problem...).
https://github.com/sindresorhus/ama/issues/10#issuecomment-1....
If anything, looking at sindresorhus's activity feed (https://github.com/sindresorhus) perfectly supports the author's point. Maybe some people have so little to do that they can author or find a relevant package for every single ~10 line function they need to use in their code, and then spend countless commits bumping project versions and updating package.json files. I have no idea how they get work done, though.
If it's a popular issue, lots of people had the same issue, many will be nice enough to add their edge cases and make the answer better, most will not. Same goes for contributing to a package.
With a package you would be able to update when someone adds an edge case, but it might break your existent code and that edge case may be something that is not particular to your system.
If you don't want to get too deep in the issue, you can copy paste from SO, just the same you can just add a package.
If you want to understand the problem, you can read the answers, comments, etc. With the package you rely on reading code, I don't know how well those small packages are documented but I wouldn't count on it.
The only arguments that stand are code reuse and testability. But code reuse comes at the cost of the complexity the dependencies add, which IMO is not worth it compared with the time it takes to copy and paste some code from SO. Testability is cool, but with an endless spiral of dependencies that quite often use one or more of the different (task|package|build) (managers|tools) the node ecosystem has, I find it hard to justify adding a dependency for something trivial.
Anyone that has been around long enough has war stories about getting two relatively simple pieces of software working with each other. In my experience, integration problems are often the most difficult to deal with.
Flip-side: it isn't easy to reason about a large and complicated graph of dependencies.
"It's all about containing complexity." - this completely ignores the complexity of maintaining dependencies. The more dependencies I have the more complicated my project is to maintain.
Dependencies are pieces of software managed by separate entities. They have bugs and need updates. It's hard to keep up to date.
When I update a piece of software I read the CHANGELOG, how am I expected to read the CHANGELOG for 1,000 packages?
Depending on a bigger package (handled by the same entities, who write one changelog, in the same form) is more straightforward.
I'm not saying this is wrong - but there's a balance here, and you must not ignore the complexity of increasing your number of dependencies. It does make things harder.
Not to mention the cognitive overhead of stopping programming, going to NPM, searching/finding/installing the module, then reading the documentation to understand its API. Isn't it simpler to `while (str.length < endLength) str = padChar + str;`? How can there be a bug in that "alternative naive inlined solution"? Either it works or it doesn't!
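To be fair, there is a way that one-liner can quietly misbehave: hand it a number instead of a string and it does nothing. A quick sketch (function name is illustrative):

```javascript
// The inlined version from the comment above, wrapped in a function:
function naivePad(str, endLength, padChar) {
  while (str.length < endLength) str = padChar + str;
  return str;
}

naivePad('7', 3, '0'); // "007": works as expected for strings
naivePad(7, 3, '0');   // 7: a number has no .length, so
                       // undefined < 3 is false and the loop never runs
```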
But naturally, with any code reuse there's a benefit and a cost to each instance of internal or external reuse.
The benefits of external reuse include the reliability you describe, as well as not having to write the code yourself. The costs include tying your code not just to an external object but also to the individuals and organizations behind that object.
I think that means that unless someone takes their hundreds of modules from the same person or organization, and is capable of monitoring that person, they are incorporating a layer of risk into their code that they don't anticipate at all.
Risk of indirectly hosing a project with N module owners providing dependencies: 1-((1-H)^N)
Let's say H is very small, like 0.05% of module owners being the type who'd hose their own packages.
3 module owners: 0.15% chance your own project gets hosed
30 module owners: 1.49% chance your own project gets hosed
300 module owners: 13.93% chance your own project gets hosed
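The percentages above check out against the stated formula; a quick sanity check in code (H and the owner counts are the commenter's assumptions):

```javascript
// Probability that at least one of N independent module owners
// hoses their package, given a per-owner probability H.
function hoseRisk(H, N) {
  return 1 - Math.pow(1 - H, N);
}

const H = 0.0005; // 0.05%, as assumed above
console.log((hoseRisk(H, 3) * 100).toFixed(2));   // "0.15"
console.log((hoseRisk(H, 30) * 100).toFixed(2));  // "1.49"
console.log((hoseRisk(H, 300) * 100).toFixed(2)); // "13.93"
```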
Keep in mind it's not just your direct dependencies, but your entire dependency chain. And if you think a module owner might hose some modules but not others, maybe N is really the number of modules, in which case 300 starts getting pretty attainable.
Upshot:
Not everyone is trustworthy enough to hang your project on. The more packages you include, the more risk you incur. And the more module owners you include, definitely more risk.
The micromodule ecosystem is wonderful for all the reasons described, but it's terrible for optimizing against dependency risk.
Takeaways:
Host your own packages. That makes you the module owner for the purposes of your dependency chain.
If you're not going to do that, don't use modules dynamically from module owners you don't trust directly with the success of your project.
I love collaborative ecosystems, but some people suck and some perfectly non-sucky people make sucky decisions, at least from your perspective. The ecosystem has to accommodate that. Trust is great...in moderation.
Yes, at some point the complexity cost of gluing together the standard library functions to do something becomes greater than the lookup cost of finding a function that does what I want; but I am saying that adding more functions is not costless.
The derision is unwarranted and stems from a failure in critical thinking by otherwise smart people.
There's even some guy calling for a "micro-lodash". To me, as a Python engineer, lodash [1] is already a tiny utility library.
I guess it's also about the fact that JS is a pretty bad language. That you need a one-line `isArray` dependency to `toString.call(arr) == '[object Array]'` is crazy.
[1] https://lodash.com/docs
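For context on the `isArray` example: the `toString` trick was the standard cross-frame check before ES5, and every modern engine now ships `Array.isArray`, which is part of why a one-line dependency for it raises eyebrows:

```javascript
// The pre-ES5 check the comment refers to:
function isArrayOld(arr) {
  return Object.prototype.toString.call(arr) === '[object Array]';
}

// The built-in since ES5:
isArrayOld([1, 2]);       // true
Array.isArray([1, 2]);    // true
Array.isArray('not one'); // false
```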
That's not a Lego block; that's an excuse.
More modules are not necessarily a good thing. They may appear to get rid of complexity, but in reality you just face that complexity one level up, and in fact the sheer number of small modules most probably adds complexity of its own.
Nope.
I break code out into modules so I can write really nice tests for them, independent of the projects I'm using them in. Also, I tend to write projects with 100% test coverage; breaking out utils lets me test projects more easily and quickly.
Also note, the implementation of this module changed a few times today. With it being open source and having the collaboration of other engineers, we ended up with a very performant version, and discovered interesting quirks about "safe integers" in JS.
Isn't that why we just write functions? Turning simple functions into entire modules just adds an unnecessary level of abstraction that helps nobody.
- Breaking out is-positive-integer hasn't reduced the number of paths to test. You have not gained anything, you've added overhead.
- 100% test coverage is rarely a good thing. It is required for safety-critical areas like avionics. I can guarantee that your JS code is not making it into any safety-critical environment!
This is especially true for one-liner modules in js, where any test at all might let you claim 100% statement coverage, without accounting for branches or loops within method calls.
As someone who spends 80% of my time on the back end, I often get bitten on the ass by JavaScript's quirks when I need to add some front-end stuff.
Those are just the three top-level dependencies. The package has 9 recursive dependencies.
There's also a nice table explaining how a package called "is-positive" managed to reach version 3.1.0.
https://www.npmjs.com/package/average
It's hard not to stare at that in complete disbelief; someone thought that it was worthwhile to create a package for determining the mean of an array of numbers.
JavaScript has suffered from a lack of a standard library for a while. Having a small package like this means that (in theory) everyone is using the same bug-free version of summing instead of writing their own.
Honestly having JS start building a standard library at this point would be wonderful.
I wonder if there will be a 3rd release with an updated averaging function.
Edit: to be fair, it was an optimization from reduce to for loop [https://github.com/bytespider/average/commit/7d1e2baa8b8304d...]
Pretty much every package management system gets cruft in it like this. Example: for a long time someone had uploaded a random Wordpress core install into Bower.
ducks
I wanted to use a javascript tool that would make my life easier, and when I looked at the npm dependency tree it had 200+ dependencies in total.
If I used that javascript tool, I'd be trusting hundreds of strangers, lots of which had absolutely no clout in github (low number of stars, single contributor projects) with my stuff.
And not just them, I'd be trusting that no one steals their github credentials and commits something harmful (again, these projects are not very popular).
It doesn't help that npm doesn't (AFAIK) implement code signing for packages which at least would let me manage who I choose to trust
In all the debate about this, why is the trust-dependency-fuck-show not getting more attention?
Every dependency you take is another degree of trust in someone else not getting compromised then suddenly finding all sorts of horribleness making it into your production environment.
It beggars belief!
Plus, if you think that's too small, write your own broader module that does a bunch of stuff. If people find it valuable, they'll use it. If they find it more valuable than a bunch of smaller modules, you'll get 10,000 downloads and they'll get 10.
The module you roundly ridicule has had 86 downloads in the last month, 53 of which were today (at the time of this writing). I imagine most of those 53 were after you posted. So that's 33 downloads in a month, as compared to the express framework, which has had 5,653,990 downloads in the last month.
The wailing and gnashing of teeth over this module is ridiculous.
This article touches on things that are wrong in the javascript culture. I always had this nagging feeling when working with NPM, this article brings it to light. For what it's worth I never felt this while writing Ruby, C# or Go.
It's the -culture- that needs to change here, not the tools.
https://github.com/tjmehta/101/blob/master/pass-all.js
It’s written in such a way that every time you call...
... there are something like 5 + 2n attribute accesses, 5 + 3n function calls, 3 + n new functions created, as well as some packing and unpacking of arguments, not including the actual application of the functions to the arguments that we care about. That’s in addition to the several functions defined in the dependent submodules, which you only have to pay for constructing once.[From my quick eyeball count. These numbers could be a bit off.]
I will probably have to go back and change this now that I know about it. In general though, not gonna lie, I'm not very concerned about micro performance optimizations.
Really
Not that it was impossible, but I still remember having to search for it and being astonished that that was necessary.
Sure, should not apply anymore like that.
So many people on this page have written about how these are well-tested, performant, and correct modules. But many of the modules aren't even correct, to say nothing of their incomplete edge-case coverage, slow performance, and horrendous dependency trees.
I don't use NPM so maybe it doesn't really matter aside from the level of abstraction being implemented being relatively ridiculous.
However, if my build system had to go out and grab build files for every X number of basic functions I need to use, grab Y number of dependencies for those functions, run X * Y number of tests for all those dependent packages, AND then also fell apart if someone threw a tantrum and removed any one of those packages basically shutting me down for a day... then I'd question every single thing about my decisions to use that technology.
[Quick Edit] Basically I'm saying "Get off my lawn ya kids!"
The majority of code I have ever seen is awful (20 years across large and small companies), but that is why I am hired to fix awful code, so I am skewed. The number of times I have seen people implement something simple in a convoluted, error-prone way is unbelievable.
I know this seems ridiculous but when you see time and time again how people fail to do the simplest things it seems like a good idea.
Keep in mind though that absolutely not all JS programmers are like that. Not everyone wants to be an aforementioned Dick-from-a-mountain.
Why is this insane? What alternatives would be better?
Sounds to me like publishing oneliners on NPM is a trivial way to build a botnet.
> If npm was invoked with root privileges, then it will change the uid to the user account or uid specified by the user config, which defaults to nobody. Set the unsafe-perm flag to run scripts with root privileges.
Lodash does this (and versioning, and clean code, and tests) really well though.
> I think that it's extraordinarily important that we in computer science keep fun in computing. When it started out, it was an awful lot of fun. Of course, the paying customers got shafted every now and then, and after a while we began to take their complaints seriously. We began to feel as if we really were responsible for the successful, error-free perfect use of these machines. I don't think we are. I think we're responsible for stretching them, setting them off in new directions, and keeping fun in the house. I hope the field of computer science never loses its sense of fun. Above all, I hope we don't become missionaries. Don't feel as if you're Bible salesmen. The world has too many of those already. What you know about computing other people will learn. Don't feel as if the key to successful computing is only in your hands. What's in your hands, I think and hope, is intelligence: the ability to see the machine as more than when you were first led up to it, that you can make it more.
Quoted in The Structure and Interpretation of Computer Programs by Hal Abelson, Gerald Jay Sussman and Julie Sussman (McGraw-Hill, 2nd edition, 1996).
When people ask why Javascript is terrible, show them this.
If you don't want to write modules this way, don't. Nothing about javascript requires that you even read articles about modules that you don't want to use. Or read articles and then follow up by posting on message boards about articles about modules you aren't going to use.
Bang some code out instead. Your opinion of javascript is about as valuable as the opinion of the person who wrote the modules.
A good micro-module removes complexity. It has one simple purpose, is tested, and you can read the code yourself in less than 30 seconds to know what's happening.
Take left-pad, for example. Super simple function, 1 minute to write, right? Yes.
But check out this PR that fixes an edge case: https://github.com/azer/left-pad/pull/1
The fact of the matter is: every line of code I write myself is a commitment: more to keep in mind, more to test, more to worry about.
If I can read left-pad's code in 30 seconds, know it's more likely to handle edge cases, and not have to write it myself, I'm happy.
The fault in this left-pad drama is not "people using micro-modules". The fault is in npm itself: all of this drama happened only because npm is mutable. We should focus on fixing that.
That's true. However:
Every dependency you add to your project is also a commitment.
When you add a dependency, you're committing to deal with the fallout if the library you're pulling in gets stale, or gets taken over by an incompetent dev, or conflicts with something else you're using, or just plain disappears. If you add a dependency for just a few lines of code, you're making a way bigger commitment than if you'd just copy/pasted the code and maintained it yourself. That's why so many people are shaking our heads at a 17-line dependency. It's way more risk than it's worth. If you need a better stdlib for your language (some of us write PHP and feel your pain) then find one library that fills in the gaps and use that.
This is a problem with NPM, not with dependencies. With a package management system that has stable builds and lockfiles, you pin to a specific version and there is no way upstream can cause problems. A lockfile is a pure win over vendoring.
Looking at that left-pad module though - no comments, abbreviated variable names, no documentation except a readme listing the minimally intended usage examples. This is not good enough, in my opinion, to upload to a public repository with the objective that other people will use it. It is indistinguishable from something one could throw up in a couple of minutes; I certainly have no reason to believe that the future evolution of this code will conform to any "expectation" or honour any "commitment" that I might have hopefully ascribed to it.
[EDIT: I've just noticed that there are a handful of tests as well. I wouldn't exactly call it "well tested", as said elsewhere in this thread, but it's still more than I gave it credit for. Hopefully my general point still stands.]
The benefits of reusing other people's code, to a code reuser, are supposed to be something like:
(a) It'll increase the quality of my program to reuse this code - the writer already hardened and polished this function to a greater extent than I would be bothered to do myself if I tried right now
(b) It'll save me time to reuse this code - with the support of appropriate documentation, I shouldn't need to read the code myself, yet still be able to use it correctly and safely.
Neither of those things are true for this module. It's not that the module is small, it's that it is bad.
(True that npm's mutability is a problem too - this is just a side-track.)
The whole "amaga, it's a whole package for just ten lines of code" is just elitism. Given the number of downloads on things like left-pad, it's clearly useful code.
I want to plug ied (https://github.com/alexanderGugel/ied) here, which both installs about 5x faster than npm@2, yet deduplicates as well as npm@3.
Left padding is built in (in almost all languages); even C can do it with printf.
The problem is not having a library that offers that, but having this micro-module thing as a whole NPM module. No other language does that.
If it was inside a string helpers, that's great.
But don't make me one single module for just left-padding (or is-integer-number)
1) JS is unique in that it is delivered over the wire, so there is a benefit in having micro-modules instead of a bigger "string helpers" module. Things like webpack are changing that now (you can require lodash, and use only lodash.padStart).
2) JS's "standard" library is so small, because it's the intersection of all of the browser implementations of JS dating as far back as you care to support. As pointed out in sibling, a proposal for padLeft is included in ECMA2016. But we'll still need Left-Pad for years after it's adopted.
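A sketch of both points: the cherry-picking pattern from 1) (the `lodash/padStart` path assumes lodash is installed), and the proposal from 2), which later shipped as the built-in `String.prototype.padStart`:

```javascript
// Point 1: with a bundler, require just one method instead of all of
// lodash (assumes lodash is installed, so shown here as a comment):
//   const padStart = require('lodash/padStart');

// Point 2: the proposal eventually landed as a built-in:
console.log('5'.padStart(3, '0')); // "005"
```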
JavaScript has a very minimal standard library; it's pretty asinine of you to compare it to C or any other language with a pretty extensive standard library.
https://github.com/sindresorhus/ama/issues/10#issuecomment-1...
With npm and the popularity of small modules, it was obvious that I could just cheaply publish each of my utility functions as separate modules. Some of them are about a few dozen lines, but have hundreds of lines of tests and have had significant bugfixes that I am very happy that I haven't had to manually port to dozens of projects. I don't miss copy-pasting code across projects, no matter how many claim I've "forgotten how to program".
So, when I read "It doesn't matter if the module is one line or hundreds," I call bullshit. There is overhead. It's usually fairly small (though it may even begin to rival the gains from using a module at that size), but it adds up. Once you've had to deal with a dependency graph that's 10 levels deep and contains hundreds or thousands of modules, the small extra complexity imposed by each module is no longer small in total, and comes at a real cost, as we've just seen.
Other module ecosystems have gone through some of the same problems. There was a movement in Perl/CPAN a few years back to supply smaller, more tightly focused modules, to combat the sprawling dependencies that were popping up. The module names were generally suffixed with "Tiny"[1] and the goals were multiple:
- Where possible, clean up APIs where consensus had generally been built over what the most convenient usage idioms were.
- Try to eliminate or reduce non-core dependencies where possible.
- Try to keep the modules themselves and their scope fairly small.
- Remove features in comparison to the "everything included" competitor modules.
This has yielded quite a few very useful and strong modules that are commonly included in any project. They aren't always tiny, but they attack their problem space efficiently and concisely. Even so, I'm not sure there's ever a module that's a single line of code (or less than 10, given the required statements to namespace, etc.), as the point is to serve a problem, not an action.
1: https://metacpan.org/search?size=20&q=%3A%3Atiny&search_type...
And just because some random guy didn't get something as trivial as this right the first time, doesn't mean nobody else can. Also the de facto standard library lodash already has padding utilities, made by people who have a proven track record.
IMO in the Javascript world they're only there in order to minimize script size for front end work. See lodash & lodash-something1 / lodash-something2 / ..., where there's an option of using the whole module or just including 1-function long scripts, precisely to avoid the script size issue.
Is there a solution for this? I know that the Google Closure compiler can remove dead code, ergo making inclusion of large modules less costly in terms of code size. Am I missing some ES6 feature that also helps with this?
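The ES6 feature in question is static `import`/`export`: because imports are resolved at build time rather than run time, bundlers such as Rollup and webpack can see which exports go unused and drop them (tree shaking). A sketch with hypothetical helpers (written as plain functions here so it runs standalone; in a real module they would be `export function ...`):

```javascript
// Hypothetical utils module. With ES6 named exports, a bundler can
// statically see which of these the application imports.
function mean(xs) {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}
function median(xs) {
  const s = [...xs].sort((a, b) => a - b);
  const m = s.length >> 1;
  return s.length % 2 ? s[m] : (s[m - 1] + s[m]) / 2;
}

// Elsewhere: `import { mean } from './utils.js';`
// A tree-shaking bundler keeps mean and drops median from the bundle.
console.log(mean([1, 2, 3]));      // 2
console.log(median([1, 2, 3, 4])); // 2.5
```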
Combine this with isomorphic code, cross-browser development, and micro-services.
I even wrote a blog post couple of days ago about it [1].
1 : http://www.drinchev.com/blog/increase-your-dependencies/
Why do you need an external dependency on something so small?
NPM/JS has subsumed the class of programmer who would previously have felt at home inside PHP's batteries-included ecosystem. Before that, a similar set of devs would have felt at home with Visual Basic. Seriously, go visit the comments section on archived copies of the PHP documentation. You'll find code of a similar nature. If PHP had had a module system 10+ years ago, you would have seen this phenomenon then. Instead it was copy and paste.
This isn't elitism; it's just the way it is. The cost of a low barrier to entry into a software ecosystem is taking in those who don't yet have software engineering experience.
Nobody should be surprised that NPM, which I believe has more packages than any other platform, is 90% garbage. There are only so many problems to solve and so few who can solve them well, in any language. Put 100 programmers in a room, each with 10 years experience, and you'll be lucky to find 1 who has written a good library. Writing libraries is really hard.
Add into that the fact that:
1) Javascript has a huge number of developers, and is often an entry-level language
2) The developers on this thread (I like to think of HN as at least slightly above average) are divided whether having small packages / large dependencies trees is a good or bad thing
3) Dependency management is something that matters mostly to long term (professional / enterprise / etc) applications, which is a subset of programming, and I wonder if not a minority subset of node.js projects in general.
4) If I'm writing a throwaway app or proof of concept, and therefore don't care about dependency maintenance, using as many dependencies as possible is a major time saver,
and of course you get this situation, and it seems to make perfect sense.
Personally, I wish there were an NPM Stable, where packages underwent much more scrutiny and security review in order to get in, but nonetheless, nothing I've read so far about npm really scares me given the above context. If you are a dev creating an unmanageable dependency tree for your enterprise app, you're a shitty dev. That doesn't necessarily mean that NPM is wrong for being so open in allowing others to publish their packages, or that smaller / more worthless packages shouldn't be allowed to be published.
That said, I would really like to hear a response to this post, as I have limited experience with different package management systems.
1/ forces package authors to write stable libraries
2/ forces dependencies to narrow the versions of their dependencies
3/ prevents name squatting to some extent. You cannot have a package named "forms" and then sell the name for real money, as seen on NPM; your package needs to be "namespace"/"name". NPM made a huge mistake with its gems-like global namespace, and it explains half the problems it is having today.
Can you expand on how to identify the class of programmers you're referring to? Are they the type that copy / paste code directly from StackOverflow? They lack a classical computer science education? They haven't worked on a large, enterprise-grade project?
Others see programming more as an art. They take care to make the code not only efficient but also elegant. They'll read up on new and interesting algorithms and incorporate them in novel ways. They might often be behind deadlines, but when they are, they create things like GNU Hurd that inspire a lot of interest and lead to interesting results, maybe even teach people a few things. Their code is interesting to read. They tend to write the libraries that the first group uses.
Both groups contribute a lot, but it's not easy to get them to understand that about each other.
When you can't even reason about the truthiness of a variable because the language coerces everything, of course things end up screwy.
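A few concrete examples of the coercion being complained about, all verifiable in any JS console:

```javascript
// Loose equality coerces both operands, so "truthiness" and "equality"
// disagree in surprising ways:
'' == 0;     // true  (empty string coerces to 0)
'0' == 0;    // true
'' == '0';   // false (string comparison, no coercion)
[] == false; // true  ([] -> '' -> 0)
Boolean([]); // true  (yet [] == false above)
NaN == NaN;  // false
```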
Compare NPM to Composer/Packagist and you get a better comparision. I've personally seen only very few micro packages on Packagist, thankfully this never seemed to gain traction in the PHP world.
a) No standard lib in JS
b) JS is delivered over the internet to web pages in a time sensitive manner ... so we don't want to bundle huge "do everything" libs. Sometimes its convenient to just grab a tiny module that does one thing well. There isn't the same restriction on any other platform
c) Npm makes it really easy to publish/consume modules
d) And because of c) the community is going "all in" with the approach. It's a sort of experiment. I think that's cool ... if the benefits can be reaped, while the pitfalls understood and avoided then JS development will be in an interesting and unique place. Problems like today can help because they highlight the issues, and the community can optimise to avoid them.
Everyone likes to bash the JS community around, we know that. And this sort of snafu gives a good opportunity. But there are many JS developers working happily every day with their lots of tiny modules and being hugely productive. These are diverse people from varied technical backgrounds getting stuff done. We're investigating an approach and seeing how far we can take it.
We don't use tiny modules because we're lazy or can't program, we use them because we're interested in a grand experiment of distributing coding effort across the community.
I can't necessarily defend some of the micro modules being cited as ridiculous in this thread, but you can't judge an entire approach by the most extreme examples.
However, I agree it is ridiculous to have a dedicated module for that one function. For most nontrivial projects I just include lodash, which contains tons and tons of handy utility functions that save time and provide efficient, fast implementations of solutions for common tasks.
Lodash includes `padStart` by the way (https://lodash.com/docs#padStart).
(And, y'know, maybe it's because I'm not a JS programmer, but the notion of looking for a module to implement a string padding function would never have even occurred to me.)
a Git-submodule-like approach would be much better
Because it's so trivial? I can't wrap my head around why this is an argument in the first place. It makes no sense to bring in a module from a third party adding yet another dependency and potential point of failure when reimplementing it yourself literally takes as long as it takes to find the module, add it to package.json and run npm install.
People should be trying to limit dependencies where possible. Reproducible builds are really important; if it costs you almost no time, you should have it in your code base IMO.
Taking the DRY principle to the most extreme degree always makes for the worst code to debug and maintain.
Even if it does take the same amount of time (which it shouldn't), a 1-line call to a standard module imposes less of a future maintenance burden than 14 lines of custom code.
> People should be trying to limit dependencies where possible. Reproducible builds are really important; if it costs you almost no time, you should have it in your code base IMO.
That's a non sequitur. Reproducible builds are important, but unless you write code with 0 external dependencies you already have a system in place for handling library dependencies in a reproducible way. So why not use it?
> People taking the DRY principle to the most extreme degree always makes for the worst code to debug and maintain.
This is the opposite of my experience.
I think that was largely the OP's point tbh. Using something like lodash [a utility library] is fine while using a module [for a single function] is not.
It might have gotten lost in the ranting from on high but I don't think the author truly meant more than that.
I'll tell you why.
The least important one is that downloading such a trivial module wastes bandwidth and resources in general (now multiply this by several hundred times, because of the dependency fractal JS sloshes in). I would also spend much more time searching for such a module than I would implementing the damn function.
More important is that you give up control over any and every bug you could introduce in such a trivial function or module. You don't make those bugs any less probable (because it's a battle-tested package! except, not so much in JavaScript, or Ruby, for that matter); you just make them much harder to fix.
And then, dependencies have their own cost later. You actually need a longer project, not a throw-away one, to see this cost. It manifests in much slower bug fixing (make a fix, find the author or maintainer, send them an e-mail with the fix, wait for an upstream release, vs. make a fix and commit it), it manifests when upstream unexpectedly introduces a bug (especially between you making a change and you running `npm install` on the production installation), it manifests when upstream does anything weird to the module, and it manifests in many, many other subtle and annoying ways.
Battle-tested still applies: if you have that many people using a line of code, they're more likely to find any bugs. (Formal proof is better than any amount of testing, but no mainstream language requires formal proofs for libraries yet.)
> And then, dependencies have their own cost later. You actually need a longer project, not a throw-away one, to see this cost. It manifests in much slower bug fixing (make a fix, find the author or maintainer, send them an e-mail with the fix, wait for an upstream release, vs. make a fix and commit it), it manifests when upstream unexpectedly introduces a bug (especially between you making a change and you running `npm install` on the production installation), it manifests when upstream does anything weird to the module, and it manifests in many, many other subtle and annoying ways.
Large monolithic dependencies have this kind of problem - "we upgraded rails to fix our string padding bug and now database transactions are broken". But atomised dependencies like this avoid that kind of problem, since you can update (or not) each one independently. Regarding fixing upstream bugs, you need a good process around this in any case (unless you're writing with no dependencies at all).
> It manifests in much slower bug fixing
I don't buy this at all, because I've done it myself many times. If you're waiting on a PR from the original repo owner to fix a production bug, you're doing it wrong. It's trivial to copy the dependency out of node_modules and into your src, and then fix the bug yourself. Then when the owner accepts your PR, swap it back in. I don't understand the problem here.
- If the programmer uses other functions included in Lodash, his code will have a single larger point of failure. For example, if Lodash is unpublished (intentionally as in this case, or unintentionally), then the programmer will have a lot more work to redo.
- Lodash introduces a lot of code, while the programmer only needs one of its functions to pad a string.
Using a library like lodash makes a lot more sense once you use a module bundler that allows tree shaking (like Rollup or Webpack 2.0) along with the ES6 module syntax. Heck, even if you're just using babel with Browserify or Webpack 1.x, you can use babel-plugin-lodash [0] so it'll update your imports and you only pull in what you need.
[0] https://github.com/lodash/babel-plugin-lodash
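To illustrate the cherry-picking point (a sketch based on the plugin's documented behavior, not code from this thread): lodash already ships per-method files you can import by path, and babel-plugin-lodash rewrites the convenient member-style imports into that form for you:

```javascript
// Before (member-style import; without tree shaking this can drag in
// the entire lodash build):
//
//   import { padStart } from 'lodash';
//
// After (what babel-plugin-lodash rewrites it to; bundlers then include
// only this one method and its internal helpers):
import padStart from 'lodash/padStart';
```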
That said, I work with Java, Clojure and Python mostly so I may be more used to having a huge standard library to lean on than is typical.
So in less than a year "padLeft" came and went, because not all strings start on the left, and someone decided that "left" means "start"; except that the reason it doesn't is exactly the reason it was changed. Even worse, the 4.0 documentation does not mention that _.padStart renamed _.padLeft. It's hard to grok what cannot be grepped.
Why blame someone for depending on padLeft in a world where libraries swap out abstractions in less than a year? Breaking changes are bad for other people. Semantic versioning doesn't change that.
[1]: https://github.com/lodash/lodash/wiki/Changelog
For instance I added a utility to my own library (msngr.js) so I could make HTTP calls that work in node and the browser because even the fetch API isn't universal for some insane reason.
We have a large internal C++ app at my work of that vintage (~1992). It uses its own proprietary super library (called tools.h++) which is just different enough from how the C++ standard evolved that it's not a simple task to migrate our codebase. So now every time we change hardware platforms (has happened a few times in the last 30 years) we have to source a new version of this tools.h++ library as well.
I find it amusing Javascript hasn't learnt from this.
I recently had to rebuild a large RoR app from circa 2011, and it took me longer to solve dependency issues than to familiarise myself with the code base.
Excessive dependencies are a huge anti-pattern and, in our respective developers communities, we should try to circulate the idea that, while it's silly to reinvent the wheel, it's even worse to add unnecessary dependencies.
Let's be honest though, in the current trendy javascript ecosystem these people will already be two or three jobs away before the consequences of their decisions become obvious. Most of the stuff built with this is basically disposable.
This is your fault for expecting free resources to remain free forever. If you care about build reproduction, dedicate resources to maintain a mirror for your dependencies. These are trivial to set up for any module system worth mentioning (and trivial to write if your module system is so new or esoteric that one wasn't already written for you). If you don't want to do this, you have no place to complain when your free resource disappears in the future.
1- Maintaining a mirror of dependencies can be a non-trivial overhead. In this app that I was working on, the previous devs had forked some gems on GitHub and then added those specific GitHub repos to the requirements. But they did not do it for every dependency; probably they did not have the time/resources to do that.
2- As a corollary to the above, sometimes the problem is not the package itself but compatibility among packages. E.g. package A requires version <= 2.5 of package B, but package C requires version >= 2.8 of package B. Now I hear you asking "then how did it compile in the first place?" Probably the requirement was for package A v2.9 and package C latest version, so while A was frozen, C got updated. This kind of problem is not solved by forking on GitHub, unless you maintain a different fork of each library for each of your projects, but that's even more problematic than maintaining the dependencies themselves.
P.S. At least for once, it wasn't "my fault", I didn't build that app LOL ;-)
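To translate the versioning trap above into npm terms (a made-up manifest, not from any real project): pinning one package while letting another float means a later install can resolve a combination that was never tested together.

```json
{
  "dependencies": {
    "package-a": "2.9.0",
    "package-c": "^1.0.0"
  }
}
```

Here package-a is frozen, but the caret range lets package-c update freely, so a new release of package-c can start requiring package-b >= 2.8 while package-a still needs package-b <= 2.5, and a fresh install fails even though nothing in the project changed. Lockfile mechanisms like npm shrinkwrap exist precisely to freeze the whole resolved tree.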
Or, you can implement the functionality yourself (or copy/paste if the license allows) and avoid the hassle.
In the Ruby ecosystem, library authors didn't really start caring about semantic versioning and backwards compatibility until a few years ago. Even finding a changelog circa 2011 was a godsend.
I think this was mainly caused by the language itself not caring about those either. 10 years ago upgrading between patch releases of Ruby (MRI) was likely to break something.
At least this is one thing JavaScript seems to do better.
The same goes for almost any legacy app.
I think we got to this state because everyone was optimizing JS code for load time: include only what you need, use Closure Compiler when it matters, etc. For front-end development, this makes perfect sense.
Somewhere along the line, front-end developers forgot about Closure Compiler, decided lodash was too big, and decided to do manual tree shaking by breaking code into modules. The close contact between Node.js and front-end JavaScript resulted in this silly idea migrating out of front-end land and into back-end land.
Long-time developers easily recognize the stupidity of this, but since they don't typically work on Node.js projects they weren't around to prevent it from happening.
New developers: listen to your elders. Don't get all defensive about how this promised land of function-as-a-module is hyper-efficient and the be-all end-all of programming efficiency. It's not. Oftentimes, you already know you're handling a string, you don't need to vary the character that you're using for padding, and you know how many characters to pad. Write a for loop; it's easy.
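A minimal sketch of the sort of for loop meant here (my own illustration, not the code of any published module):

```javascript
// Pad `str` on the left with `ch` (default: space) until it is `len` long.
function leftPad(str, len, ch) {
  ch = ch || ' ';
  let out = String(str);
  for (let i = out.length; i < len; i++) {
    out = ch + out;
  }
  return out;
}

console.log(leftPad('5', 3, '0')); // "005"
console.log(leftPad('abc', 6));    // "   abc"
```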
Note that this is exactly the sort of question I ask in coding interviews: I expect a candidate to demonstrate their ability to solve a simple problem in a simple manner; I'm not going to ask for a binary search. Separately, I'll ask a candidate to break down a bigger problem into smaller problems. In my experience, a good programmer is someone who finds simple solutions to complex problems.
Note: Rails is similarly pushing back against apps that have too many dependencies:
https://www.mikeperham.com/2016/02/09/kill-your-dependencies...