The more I read about the js ecosystem, the more it seems the inmates are running the asylum
JS has needed a good stdlib for a long time now.
Everybody was using this package, but apparently the only one who stepped up to actually maintain it was an adversarial player.
(And don't get me started on "let's always get the latest version of the package")
> You want to download thousands of lines of useful, but random, code from the internet, for free, run it in a production web server, or worse, your user’s machine, trust it with your paying users’ data and reap that sweet dough. We all do. But then you can’t be bothered to check the license, understand the software you are running and still want to blame the people who make your business a possibility when mistakes happen, while giving them nothing for it? This is both incompetence and entitlement.
Not surprising. Not surprising in the least. "Oh wow somebody 0wned the package I needed". Maybe because js projects have an order of magnitude more dependencies than a Python/Java/Go, etc project. Maybe because in the extreme opposite of NIH, people feel the need to import a module for every small thing they want to do? "Stack overflow programming", "how do I add 2 numbers using React, is there a module for that!?!?!"
The cynic in me is saying that JS is the new PHP. Ironically, both are very capable and suitable languages, but the sheer freedom they offer attracts crowds of wannabe developers who think they are gods because they know how to shave off 10% of execution time in some small part of their overengineered and unmaintainable mess. And give talks about it, write blogs and post clever twits.
However, developing robust software is possible in JS, the same way it was possible in PHP. You just need to take some time to vet dependencies a bit better and not jump on every fad when it surfaces. Common sense and experience help a lot. And it's incredible what can be built with modern toolchains and how maintainable it can be. Don't let the anti-JS feeling scare you away from trying React / Vue /..., just don't forget that state management, separation of concerns, and similar concepts are still valid.
> the sheer freedom they offer attracts crowds of wannabe developers who think they are gods because they know how to shave off 10% of execution time in some small part of their overengineered and unmaintainable mess. And give talks about it, write blogs and post clever twits
Honestly, I think JavaScript just has a lot of new developers who have never been exposed to concepts of software development such as dependency management, cultivating trust, and how (free and) open source works. I think the kind of developer you're describing is actually a minority.
PHP has a lot of built-in functionality, and its modules are packaged in Linux distros. In comparison, JS is very bare-bones, and the JS ecosystem encourages reusing 5-line helper functions from npm.
Edit: an example of the culture difference:
A dev wanted to get the video length of an mp4, so he installed an npm package, but it did not work. The issue was that ffmpeg was not set up in the PATH, so he asked me to solve it. I suggested using the absolute path of the ffmpeg binary, but that was not an option in the package.
In the end I checked the package he had found and showed that it just called ffmpeg from the command line; if we do that ourselves, we can pass exactly the options we want and not have to install a third-party package.
As a PHP dev, when I have to do something, my first step is to find out whether there is a Linux CLI app that does it, install that app on the server, and call it (like ffmpeg, wkhtmltopdf, an epub-to-mobi converter, etc.). I can trust a CLI app that is packaged in Linux more than a random package.
PHP has Composer, and we use that for third-party integrations: Dropbox, Facebook, and Amazon have official packages that we can grab and use.
No, both are terrible, garbage languages, languages that have been whipped into reasonable shape over a decade or two by good programmers forced to use them due to awful monocultures that arose through successions of largely arbitrary events. This reasonable shape means that the most recent versions can be used with relative pleasure if you ignore half of their syntax, clamp massive libraries to them to replace the other half, and assume that your end product will be as fragile as glass.
edit: sorry about the trolly comment (which I make in response to a good comment), but if PHP and javascript aren't bad languages, I wouldn't know how to make a case that any language is a bad language.
JavaScript isn't even usable out of the box without a few (hundred) libraries to make it comfortable to use. I disagree; PHP is nothing like it: its standard module library is chock-full of stuff that is fast, useful, and based on existing C/C++ libraries or standards.
But sure, if one wants to classify incompetent developers into the PHP bucket, then everything is "the new PHP".
JS is haunted by its past; today we are almost back where we were in 2011. On the one hand it has this perfect learning curve, accessible to everybody who has mastered basic computer usage while still offering a lot of advanced but optional features. At the same time it has this bad reputation for being unreliable and having bad documentation/code.
I think around 2012 frameworks got popular that simplified the task of building JS apps as complex as apps in C++ or Java. Actually using these frameworks wasn't trivial when they came out. Now that's pretty easy, but the deps are crazy. I wonder what happens next; maybe some super-advanced package manager will come along...
Javascript is inherently broken and unfixable. With Python, if I find a bug or something that I think should be in the standard library then I can just post on the mailing list and make my case. The core devs might not accept my proposed change, but at least I'm guaranteed my day in court. And if you get ruled against then you can go back and gather more evidence and appeal at least once and probably even twice before you get told to fuck off.
Whereas with Javascript if you find a bug or something you want added to the core library then you need to pay millions of dollars to join ECMA. How can you find competent developers to work in an ecosystem with that kind of governance? You can't, which is why we are where we are now.
Even with respect to the implementations: if you file a bug with any of the major browser vendors, you'll probably die of old age before the ticket is even triaged, let alone fixed.
I haven't done it, but if you want to propose a change to JavaScript, it looks like there is a process to follow, and it doesn't necessarily involve spending money:
To me this illustrates a fundamental problem that JS has to deal with that is virtually unique in the programming language space: multiple browsers across multiple browser versions. It is incredibly difficult to get all browsers to adopt a "standard library" and even then it takes years for all users to adopt those browsers that support the standard library. Even on top of that, not all browsers implement the standard library properly. It really is a nightmare grown out of competing browsers with users that do not update them enough.
> Maybe because js projects have an order of magnitude more dependencies than a Python/Java/Go, etc project.
This is because JS file sizes matter a lot. We have huge libraries like `lodash` which are like a standard library, but nobody wants to use them because they dramatically increase the filesize of the JS bundle. I would rarely want to bring in lodash for a couple utilities, even with treeshaking and the like because it still dramatically increases bundle size. We have pretty excellent datetime libraries that most people hesitate to use -- like moment.js -- because they are huge. So what's the result? A ton of dependencies with very limited scopes because developers do not want to bring in massive libraries that do everything.
Let's flip to Python. Let's say magically you can run python inside a browser starting tomorrow. The second you bring in a library like `numpy` you're looking at a bundle size of 40 MB, and that's just one dependency. In the JS world that is utterly unacceptable. All the languages you mentioned take advantage of the fact that they can download those libraries to the filesystem and forget about it. JS has to download libraries over the wire, it's a completely different game.
What I'm trying to say is that the JS ecosystem didn't invent a bunch of problems to solve or that the people running the ecosystem are script kiddies. There are very unique problems that need to be solved in this ecosystem that make it different, especially when referencing the three languages you mentioned in your post.
With all of these "JS sucks" arguments I see a severe lack in empathy or even remotely trying to understand why JS has the problems it does.
>To me this illustrates a fundamental problem that JS has to deal with that is virtually unique in the programming language space: multiple browsers across multiple browser versions. It is incredibly difficult to get all browsers to adopt a "standard library"
You don't need to. The standard library could just be a community curated project (with the help of major browser vendors) that ships as extra code. It could even be on npm.
If the browser has it included, even better, if not, it's referenced the usual way third party dependencies are.
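That "use it if the runtime ships it, load it otherwise" idea is essentially the polyfill pattern that already exists today. A rough sketch, where the `curated-stdlib` package name is purely hypothetical:

```javascript
// Use the built-in if the runtime provides it; otherwise fall back to the
// curated implementation shipped with the app. "curated-stdlib" is a
// made-up package name for illustration only.
const padStart = typeof String.prototype.padStart === 'function'
  ? (s, len, ch) => String(s).padStart(len, ch)
  : require('curated-stdlib/pad-start');

console.log(padStart('7', 3, '0')); // "007"
```

Modern runtimes take the first branch, so the fallback costs nothing where the feature is already built in.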
The problem is having a package.json like:
random lib 1
...
random lib 500
Not having a package.json like:
function 1 of well curated lib X
function 2 of well curated lib X
function 22 of well curated lib Y
...
function 35 of well curated lib Y
where e.g. Y is lodash, or react, and such.
>This is because JS file sizes matter a lot. We have huge libraries like `lodash` which are like a standard library, but nobody wants to use them because they dramatically increase the filesize of the JS bundle. I would rarely want to bring in lodash for a couple utilities, even with treeshaking and the like because it still dramatically increases bundle size. We have pretty excellent datetime libraries that most people hesitate to use -- like moment.js -- because they are huge. So what's the result? A ton of dependencies with very limited scopes because developers do not want to bring in massive libraries that do everything.
That doesn't seem to be the case either. On major web pages, even from big companies, there are multiple versions of dependencies, even full deps like lodash and co. And people use all kinds of gigantic (web-wise) frameworks and third-party libs like moment.js with wild abandon.
Besides, even if that were the problem, there's nothing stopping you from having a modular set of libraries (like lodash) from which you can cherry-pick the functionality you need and load only that.
The problem the parent mentions is not "JS needs to include big libraries and stop using small dependencies" but "JS needs to stop using random small dependencies from here and there."
Using 100 dependencies from all kinds of crappy upstream places (e.g. some crappy leftpad implementation) is different from having a curated set of libraries and loading 100 small dependencies from that.
I agree that download size is a big deal in JavaScript, but it still seems rather unfortunate to use download size as a reason for ignoring the good work of well-established teams with good reputations and relying on random individuals instead.
If tree-shaking isn't good enough, it needs to get better. Instead of having lots of individuals creating tiny packages with one function in them, there need to be fewer, broader libraries that are closely watched by larger teams.
> The more I read about the js ecosystem, the more it seems the inmates are running the asylum
The issue is NPM. NPM is a bad package manager. A package manager that allows duplicate versions of the same dependency is broken to begin with. People complain that fetching react-native results in hundreds of packages installed on their computers. How many freaking dupes? Of course nobody is going to audit all that crap. Conflicts should be resolved upstream, which would lead to more stable packages to begin with.
NPM was developed by people who were clueless about package management and who now profit directly from that shit-show, and even the Node.js creator went on record saying that tying it to NPM was a mistake.
This. The direct dependencies are usually quite manageable. As a project developer, I know what they do, why they were chosen, and roughly how they work, and have decided that the risk is worth it. This is not the case for sub- and sub-sub-dependencies, and there are so many. If the tree were flatter, due diligence would be much easier.
Library authors: please be extremely conservative about when to pull in dependencies. I will prioritize this higher for projects that I maintain.
This reasoning has a problem. We should always be concerned with dependencies and always keep them to the minimum we can correctly get by with.
The problem with thinking "these X packages are OK for this library" is how we got where we are. Yes, we could just avoid using trivial libraries, but even for the other ones, the decision to add a dependency to your lib has to be made in the context of the app, or lib, or lib's lib, using your library. If you only imagine your lib being used directly, your scale and judgments could be way off the actual cost/benefit.
As someone else alluded to, I think the problem stems more from the fact that major official libraries are glued together using a lot of dependencies that aren’t necessarily trustworthy. This doesn’t happen nearly as much in the other ecosystems (at least none that I have seen). In Android dev, for instance, you often will see the official Google support binaries as dependencies but nothing else. In .NET you’ll often see Newtonsoft’s JSON serializer but very few non-Microsoft third party dependencies.
It does to a lesser, but still worrisome, extent in Java.
I think it's precisely because it lacks a good committed (commercial) owner like Android and .NET have. I could see it going from bad to worse in the future, as Oracle continues to jettison responsibility for bits of the ecosystem.
Anecdotal, but I find the Node std lib + the language primitives to be good enough for me. I never find myself reaching for external dependencies outside of React or an equivalent on the front end, a testing framework, etc. maybe 5 dependencies total for a moderately complex app. I’m honestly not sure why the JS ecosystem is so dependency happy. It’s not necessary, and the downsides are massive and obvious.
Yes. Huge package ecosystems are hurtful. For the basics, language communities should come with a nice, batteries-included standard library.
For the rest, people should only trust community projects with big following, processes, etc (like Apache stuff, Django, moment.js, postgres, etc), or open source projects supported by companies (e.g. nginx, mongo, React, etc).
The rest, sorry, but you got to write them yourself.
Downloading and using one-person libs for trivial stuff like leftpad and such is madness.
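For perspective, here is roughly everything the infamous left-pad package ever did, next to the built-in that has existed since ES2017 (the function name here is mine):

```javascript
// The whole job of the left-pad package, inline in five lines.
function leftPad(str, len, ch = ' ') {
  str = String(str);
  while (str.length < len) str = ch + str;
  return str;
}

console.log(leftPad(7, 3, '0'));         // "007"
console.log(String(7).padStart(3, '0')); // same result with the built-in
```

A function this size costs less to write and review than the dependency it replaces.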
Part of the problem is that people are taught one thing in school, expected to immediately find work, and then perform something entirely different at work. That, or they are simply Java developers (or whatever, fill in the blank) who get retasked to write JavaScript.
What ends up happening is that people want to do their previous job in the context of their current job, such as writing something that looks like Java in JavaScript. When that becomes the failure you would expect, they blame the technology for not being their previous thing.
Compounding that training failure is that JavaScript has a low barrier to entry. Some of the people writing in this language are less than brilliant. The end result is a cluster fuck, as evidenced by frequently used terms like imposter syndrome, snowflake, Dunning-Kruger, and so forth. The insecurity, fear, and delusion are clearly evident to me as a corporate JavaScript developer.
How do you solve for the obvious emotional failure that comes from the described lack of preparation? You hide under layers and layers of abstractions. You make life easier to the point of hoping code exists that does your job for you. Unfortunately, that easiness is the opposite of slim or simple. It's great, until it isn't.
Npm should delete the top useless libraries and force package maintainers to rewrite their code to be less dependent on them.
Or they should do what they appear to be incapable of which is to provide certification and verification of certain packages. If people really need a lib for inarray then npm should maintain and certify it.
It should not be difficult for npm to raise the funds needed to do that.
Actually, it is. That's why foundations such as Apache, Mozilla, Khronos, etc exist. Transfers of ownership, abandonment, and bad faith are not new. We need to trust not only in the software for today, but for tomorrow as well. Foundations step in because they're able to harness the financial clout to attract maintainers.
"We must make software simpler."
We must make software INTERFACES simpler. And that means opinionated solutions, preferably with a HEAVILY opinionated top layer API for the 80% and a less opinionated lower layer API for the 20%.
This is where "wide open" development systems such as Javascript and Lisp fail: They offer too much freedom. You need a standard, opinionated library that provides most of what people need, just for interoperability sake. You need standard ways of doing the common tasks so that people can harness those patterns into reusable modules that stand a greater chance of working well with others.
Yeah for all the good points in the article the author flubs these two major takeaways. Trust is required for all human interactions, not just software. But at scale you have to build the tools to enable trust into the system. And JavaScript/NPM has a lot more and bigger built-in systemic risks, including a much greater level of trust required and a lot fewer tools to enable it.
As someone with the slightest consciousness for security, this always terrifies me.
The Gradle version in the Ubuntu repos is too old? Just add a PPA from some random guy on the internet. The Gradle wrapper also exists, but you'd need to type "./gradlew" instead, which is apparently enough reason to use the PPA instead.
Need a Gradle plugin? Oh, yeah, just add Maven Central and JCenter as repos. No idea who (/if anyone) audits those, but we'll just trust them anyways.
Need a Docker image? Just go to Docker Hub or Quay and download one that looks good.
Don't quite like Eclipse and the company won't pay for the IntelliJ Ultimate Edition? Just install the Community Edition, with no idea if that Privacy Statement actually means they could phish your SSH keys, phone number etc.
Need a way to transfer files? Google Drive is a great way to do that.
Need an operating system? We have either Windows 10 or Windows 10 for you, which has been shown to transfer encrypted, undisclosed packages to Redmond, even in the Enterprise Edition.
Office suite? MS Office, for which the same is true.
Need to look up some specific issue with security critical software you use? Just type it into Google.
I even once saw someone who typed a password into the Chrome URL bar to show the guy next to them how it's spelled.
With the sheer disregard I frequently encounter for any sensible behaviour, I've really stopped wondering how hacking, industry espionage etc. work. I'm already quite content if it's not just all out in the open.
For any electronic device of your choice, search for its manual and look for this notice:
"This device complies with part 15 of the FCC Rules. Operation is subject to the following two conditions: (1) This device may not cause harmful interference, and (2) this device must accept any interference received, including interference that may cause undesired operation."
Everything is backdoored, absolutely everything: your hardware, your firmware, the compiler used to build your software, your software, libraries used by your software, your network infrastructure, your crypto, your sources of entropy, the machines you communicate with, everything.
Understandability is definitely a nice thing to have, though I believe it's ultimately limited by essential complexity of the problem domain. Also, it's not a solution here, because a malicious actor can make the code very easy to understand in a way in which you gloss over it and not notice subtle bits here and there that add up to an attack.
Still, this is not the main problem highlighted by this (yet another) NPM fiasco. I believe there are two and only two core problems that caused this issue, and that will enable future incidents like this:
Problem 1: some scoundrel violated the commons. The commons have no effective means of tracking them down and punishing them, so that a) they'll deeply regret what they've done, and b) other scoundrels will be deterred from trying. Lack of means of effective policing means various open source communities will keep having such problems.
Problem 2: people don't check their dependencies. Yes, I can already hear all startups screaming, "we can't afford it". Well, sucks to be you, but you'd better hustle and find a way. The licenses of almost all the software you use disclaim any responsibility for anything whatsoever, so if you expose users to that software and that software harms the users, it's your fault. You mishandled them. So find a way to vet your software, or buy some insurance against yet another NPM compromise. Or don't, and accept you're taking a risk.
To be clear, I'm not advocating a general "caveat emptor" attitude to software. We've built a civilization in part on systems and regulations that allow us to not vet everything we interact with, and yet be quite confident in our safety. But FOSS is not there yet (Problem 1). It's built on trust, but most communities have little means of protecting that trust. As for companies, I have little sympathy (Problem 2), like I wouldn't have much for any other company in any other industry that said they can't afford to do their basic job right.
I don't know about the feasibility of understanding all that's going on on your computer with the ongoing stacking of layer upon layer (kernel, userspace, "container", language VM such as node/v8 or the JVM, framework, etc.). If anything, I'd expect younger devs to understand less of it all, since they don't learn from the ground up (e.g. from hardware). But there has always been the option to use interchangeable standard components based on POSIX, SQL, etc. as a remedy against becoming critically dependent and vulnerable. In fact, node.js started as just one of many implementations of the CommonJS module loading mechanism and base API, in addition to being based on one of the most portable languages of all time.
But I guess no amount of education will make the kind of developers go away who think 400+ package dependencies for a trivial app is a good thing, or that you absolutely have to write a proprietary not-quite-JavaScript language compiled to JavaScript via babel, or that node.js in itself is a goal, rather than merely an implementation.
Not knowing much about node.js, wouldn't the most logical step now be to make a standard node.js library that implements the majority of the functions of those ~400 packages? Vetting one package is easier than vetting 400. I guess that's what Rails did in the very beginning; I keep on discovering built-in magic.
Node does have a standard library. It's not massive, but there's a lot more available out of the box than is available in a browser.
The tricky part is that lots of npm packages end up bundled into an app that runs in the browser. It's just a guess, but I'd guess that significantly more npm imports end up running in the browser than in Node. So even giving Node a standard library as extensive as Ruby's wouldn't help as much as one would hope.
New to web dev. New to node. Looks like a complete mess to me, deserves criticism, as does much of the web dev ecosystem.
The package maintainer should indeed have found someone to pass it on to (see The Cathedral and the Bazaar). And that doesn't mean handing it to the first person he's never heard of who steps up.
BUT this applies to all package managers, maintainers, and OSS at some level.
The idea that say a startup has time to audit every line of every dependency is absurd. Even a big business can’t do that. The idea that you “don’t have to trust” the authors is untrue, in the current workflow. FOSS relies entirely on trust.
I’m not convinced FOSS is even a good idea at this point, but with the advent of widespread cyberwarfare we need to either introduce a sophisticated accompanying trust model, or exclude FOSS when working commercially.
This is a business opportunity. Audit FOSS and sell your audit guarantees in a contract. Offer services to audit more recent versions on the proviso that you can sell that audit elsewhere.
This will have the incidental benefit of encouraging clean software to be written in languages that minimise audit costs, as those projects will get used more.
Some commercial arbitration of FOSS now looks inevitable.
> The idea that say a startup has time to audit every line of every dependency is absurd. Even a big business can’t do that.
This may sound strange, but once tech companies actually had to either write or buy all of their software, and if they didn't have a contract in place that made someone else responsible for its quality, they were. So, basically, the world was absurd.
The way FOSS works is if users agree that a common good is important enough to invest in, and then they all benefit more than they would if they invested alone; it's anti-competitive. Free (and Open Source) software should be thought of as "free as in beer" i.e., the next round is on you. If you can't audit every line, audit some of them, or pay someone else to do it. Coordinate to get code coverage. If you use a project that is inadequately covered, you're responsible for everything that goes wrong.
If you don't even know what libraries your project depends on, how could that ever be thought of as anyone's fault but your own?
This may be a central part of the issue. It's a coordination problem and creating common knowledge.
Each corporation might be willing to pay to have some bits of their dependencies audited as long as others cover other pieces. But to do that they need to be able to announce the audit result and scan their dependency trees for pieces that are not audited and pick from those. You'd also need some common standards what constitutes an audit and the lawyers would probably want some limits on liability so results should be considered as best-effort or whatever.
There are no conventions and social protocols in place to support this.
I might be missing your point, or simply have a hard time understanding this anti-FOSS rhetoric.
I'm reading that you'd pay a third party so you can trust open source code and think that FOSS somehow exposes commercial code to more risk in some kind of cyber warfare? How is that not complete FUD? You already have the option to pay vendors like redhat for many open source software components if liability is your only concern, the same is true for many of the more complex libraries out there.
Closed source on the other hand would mean buying every single piece of code or paying in-house devs to write that code. I get the quality concerns raised here up to a point, but just because a company paid somebody to write something doesn't mean it's not effectively written by a solo dev under heavy time constraints. Except with FOSS you at least have the chance to go in and inspect/fix the thing yourself if need be.
1) The PC software world did run for quite a few years on the model of predominantly commercial/proprietary software, most of it being closed-source, so it's not like it is some far-fetched idea that doesn't work in economic terms.
Personally, I prefer the commercial license/source-included model, with the emphasis on the author/company getting paid to ensure that the situations like the one described here are avoided. You can then have additional educational licenses for ensuring access to developer tools for educational purposes, but that's up to the author/company.
2) If you directly pay someone to write software, I would expect any such arrangement to include the source code as part of the work product, regardless of the ultimate visibility of the source code to outside parties.
TANSTAAFL. You're literally getting the source code for free. I agree that it absolutely makes sense to pay a third party to audit your dependencies if you can't do so yourself. The alternative is to restrict yourself to code from a source you trust implicitly.
> The idea that say a startup has time to audit every line of every dependency is absurd. Even a big business can’t do that.
Big business absolutely do that. Code quality review, security review, legal review. Every line of every 3rd party dependency.
Of course, for the most part big business doesn't take 3rd party dependencies. If you have a big enough software org, you write everything above the std library in-house. Why do you think so many of the big open-source frameworks are vended by big 10 tech firms?
Don't know why you're being downvoted, because Red Hat does indeed audit all of the code we ship in RHEL (and believe me it's pretty tedious). We don't ship much in the way of Javascript libs though.
> The idea that say a startup has time to audit every line of every dependency is absurd.
I kind of agree, but remember you are getting free software. Not a little, but a ton of free software, and yet you feel like somebody should guarantee it all works fine.
Choices:
- See what is going on in all your deps and waste a lot of time
- Risk it and use the software without knowing what it is doing
- Pay somebody to guarantee that the software is not malicious
> Package maintainer should indeed have found someone to pass it onto (see Cathedral & the Bazaar). And that doesn’t include the first person he’s never heard of stepping up.
WITH NO WARRANTY.
Do you not understand what this part of the license means?
The maintainer doesn't have to do shit. That is the point.
You want to start putting arbitrary ethics and morals on these developers? The "Fuck you. Pay me." talk comes to mind.
FOSS works fine like this and has been for a long time.
We're seeing issues now because of Node's lack of a standard library, not because of trust issues, and certainly not because of inherent issues with free and open source software.
It sounds like you haven’t read the Cathedral and the Bazaar, which really is one of the founding documents of OSS, and it does indeed encode ethics in its motivation. Recommended reading.
Lack of a standard library only makes the trust issue manifest sooner, with more hilariously basic libraries, instead of more slowly with larger ones. We've still seen this in Python, the poster child for batteries-included.
> I’m not convinced FOSS is even a good idea at this point, but with the advent of widespread cyberwarfare we need to either introduce a sophisticated accompanying trust model, or exclude FOSS when working commercially.
So, you are saying that we should prefer code where it is impossible to have a look at the source because that solves the problem of having to trust the developers of that code?
No, the point is that in practice FOSS code is not any more open than closed source.
That's the point of this piece. For any non-trivial edit on a real project with real deadlines the source code is effectively useless, because no one has the time, the resources, or possibly even the inclination to fix bugs, do full-coverage testing, or make custom modifications.
So you have to take the internals on trust. Which is a ridiculous situation when so many packages are created as hobby projects with - literally - no guarantee of performance or reliability.
I realise it's hard for FOSS advocates to understand this, because it's a fundamental flaw with the FOSS philosophy. The benefits are "obvious" to crusaders, but the objective reality is that large swathes of FOSS are full of casual or hobby code that barely works, has gaping security vulns, and/or is nowhere close to being robust enough for production.
"Make software simpler" is a good goal, but hard to do. Other solutions are also possible. They're hard too. So it goes.
But there will be no solutions at all until the FOSS community starts dealing with professional reality instead of relying on free-to-tinker-without-consequences rhetoric - and understands that there are real problems that need real answers, and not just more "Clap Louder" and "At least we're not Microsoft".
Software must be made understandable. The essence of FOSS for me can be reduced to one fundamental computing right:
the right to refuse to run, on my machines, code that I do not have the option to understand. That is it.
To me, those three sentences don't even fit together.
You've always had "the right to refuse to run, on my machines, code that I do not have the option to understand". Nobody is forcing you to run any random piece of code you found on-line. You do that of your own accord. And if you screw this up, and that screwup affects other people, it's your fault. Simple as that.
Not only have we always had this right, the rest of the article argues, correctly, that the option to understand the code you run does not solve the problem. It's like having the option to study before an exam.
The call for a massive reduction in complexity reminds me of the VPRI "STEPS" project (Alan Kay et al) to build a complete end-user operating system in under 20,000 lines of code. Sadly, it looks like the project is dead or over, and the links I've found pointing to it on vpri.org are dead today.
Until there's some external stimulus, I don't think the industry is going to change. It's a lot cheaper to add new flashy things if you don't care about complexity (or the consequences of it, like bugs, and security). Getting a consumer to care about the complexity of the software in their computer or phone is like asking a Ruby programmer to care about the microcode in their CPU. It's not that we can't understand the problem but it's not a concern until it gets so bad it impacts my level of abstraction.
I'd love to see programs start putting little badges on their webpages that brag about how few lines of code they have, how low their cyclomatic complexity is, or how short their dependency tree is. I'm terrible at marketing but surely there's a way to make this sound appealing.
Honestly, I think JavaScript just has a lot of new developers who have never been exposed to concepts of software development such as dependency management, cultivating trust, and how (free and) open source works. I think the kind of developer you're describing is actually a minority.
Edit: a culture-difference example.
A dev wanted to get the video length of an mp4, so he installed an npm package, but it did not work. The issue was that ffmpeg was not set up in the PATH, so he asked me to solve it. I suggested using the absolute path of the ffmpeg binary, but that was not an option in the package.
In the end I checked the package he had found and showed him that it just called ffmpeg from the command line; if we do that ourselves, we can pass the exact options we want and not have to install a third-party package.
As a PHP dev, when I have to perform some task, my first step is to find out whether there is a Linux CLI app that does it, install the app on the server, and call it (like ffmpeg, wkhtml2pdf, an epub-to-mobi app, etc.). I trust a CLI app that is packaged in a Linux distribution more than a random package.
PHP has Composer, and we use that for third-party integrations; Dropbox, Facebook, and Amazon have official packages that we can grab and use.
No, both are terrible, garbage languages, languages that have been whipped into reasonable shape over a decade or two by good programmers forced to use them due to awful monocultures that arose through successions of largely arbitrary events. This reasonable shape means that the most recent versions can be used with relative pleasure if you ignore half of their syntax, clamp massive libraries to them to replace the other half, and assume that your end product will be as fragile as glass.
edit: sorry about the trolly comment (which I make in response to a good comment), but if PHP and javascript aren't bad languages, I wouldn't know how to make a case that any language is a bad language.
but sure, if one wants to classify incompetent developers in the PHP bucket, then everything is "the new PHP"
I think around 2012 frameworks got popular that simplified the task of building JS apps that are as complex as apps in C++ or Java. Actually using these frameworks wasn't trivial when they came out. Now that's pretty easy, but the deps are crazy. I wonder what happens next; maybe some super-advanced package manager will come along...
But in all other aspects I think JS is the new PHP.
Whereas with Javascript if you find a bug or something you want added to the core library then you need to pay millions of dollars to join ECMA. How can you find competent developers to work in an ecosystem with that kind of governance? You can't, which is why we are where we are now.
Even with respect to the implementations: if you file a bug with any of the major browser vendors, you'll probably die of old age before the ticket is even triaged, let alone fixed.
https://github.com/tc39/ecma262/blob/master/CONTRIBUTING.md
To me this illustrates a fundamental problem that JS has to deal with that is virtually unique in the programming language space: multiple browsers across multiple browser versions. It is incredibly difficult to get all browsers to adopt a "standard library" and even then it takes years for all users to adopt those browsers that support the standard library. Even on top of that, not all browsers implement the standard library properly. It really is a nightmare grown out of competing browsers with users that do not update them enough.
> Maybe because js projects have an order of magnitude more dependencies than a Python/Java/Go, etc project.
This is because JS file sizes matter a lot. We have huge libraries like `lodash` which are like a standard library, but nobody wants to use them because they dramatically increase the filesize of the JS bundle. I would rarely want to bring in lodash for a couple utilities, even with treeshaking and the like because it still dramatically increases bundle size. We have pretty excellent datetime libraries that most people hesitate to use -- like moment.js -- because they are huge. So what's the result? A ton of dependencies with very limited scopes because developers do not want to bring in massive libraries that do everything.
Let's flip to Python. Let's say magically you can run python inside a browser starting tomorrow. The second you bring in a library like `numpy` you're looking at a bundle size of 40 MB, and that's just one dependency. In the JS world that is utterly unacceptable. All the languages you mentioned take advantage of the fact that they can download those libraries to the filesystem and forget about it. JS has to download libraries over the wire, it's a completely different game.
What I'm trying to say is that the JS ecosystem didn't invent a bunch of problems to solve or that the people running the ecosystem are script kiddies. There are very unique problems that need to be solved in this ecosystem that make it different, especially when referencing the three languages you mentioned in your post.
With all of these "JS sucks" arguments I see a severe lack in empathy or even remotely trying to understand why JS has the problems it does.
You don't need to. The standard library could just be a community curated project (with the help of major browser vendors) that ships as extra code. It could even be on npm.
If the browser has it included, even better, if not, it's referenced the usual way third party dependencies are.
The problem is having a package.json with a hundred tiny, single-purpose dependencies, not having a package.json with a handful of entries where e.g. Y is lodash, or react, and such.

> This is because JS file sizes matter a lot. We have huge libraries like `lodash` which are like a standard library, but nobody wants to use them because they dramatically increase the filesize of the JS bundle. I would rarely want to bring in lodash for a couple utilities, even with treeshaking and the like because it still dramatically increases bundle size. We have pretty excellent datetime libraries that most people hesitate to use -- like moment.js -- because they are huge. So what's the result? A ton of dependencies with very limited scopes because developers do not want to bring in massive libraries that do everything.
That doesn't seem to be the case either. In major web pages, even from big companies, there are multiple versions of dependencies, even full deps like lodash and co. And people use all kinds of gigantic (web-wise) frameworks and third-party libs like moment.js with wild abandon.
Besides, even if that was the problem, there's nothing stopping you from having a modular set of libraries (like lodash), that you can cherry pick from the functionality you need and only load that.
The problem the parent mentions is not "JS needs to include big libraries and stop using small dependencies" but JS needs to stop using random small dependencies from here and there.
Using 100 dependencies from all kinds of crappy upstream places (e.g. some crappy leftpad implementation), is different than having a curated set of libraries and loading 100 small dependencies from that.
If tree-shaking isn't good enough, it needs to get better. Instead of having lots of individuals creating tiny packages with one function in them, there need to be fewer, broader libraries that are closely watched by larger teams.
the issue is NPM. NPM is a bad package manager. A package manager that allows duplicate versions of the same dependency is broken to begin with. People complain that fetching react-native results in hundreds of packages installed on their computers. How many freaking dupes? Of course nobody is going to audit all that crap. Conflicts should be resolved upstream, which would lead to more stable packages to begin with.
NPM was developed by people who were clueless about package management and are now profiting directly from that shit-show, and even Node.js's creator went on record saying that tying it to NPM was a mistake.
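The duplicate-versions complaint above can be pictured with a hypothetical node_modules layout (package names and versions are purely illustrative):

```
node_modules/
├── package-a/
│   └── node_modules/
│       └── lodash/        (e.g. 3.10.1, required by package-a)
└── package-b/
    └── node_modules/
        └── lodash/        (e.g. 4.17.11, required by package-b)
```

Both copies get installed and shipped; npm resolves the version conflict by duplicating the dependency rather than forcing it to be resolved upstream.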
Npm allowing it is a good thing about npm.
Library authors. Please be extremely conservative in when to pull in dependencies. I will prioritize this higher for projects that I maintain.
The problem with thinking "these X packages are OK for this library" is how we got where we are. Yes, we could just avoid using trivial libraries, but even for the other ones, the decision to add a dependency has to be made in the context of the app, or lib, or lib's lib, using your library. If you only imagine your lib being used directly, your scales and judgements could be way off the actual cost/benefit.
I think it's precisely because it lacks a good committed (commercial) owner like Android and .NET have. I could see it going from bad to worse in the future, as Oracle continues to jettison responsibility for bits of the ecosystem.
Yes. Huge package ecosystems are hurtful. For the basics, language communities should come with a nice, batteries-included standard library.
For the rest, people should only trust community projects with big following, processes, etc (like Apache stuff, Django, moment.js, postgres, etc), or open source projects supported by companies (e.g. nginx, mongo, React, etc).
The rest, sorry, but you got to write them yourself.
Downloading and using one-person libs for trivial stuff like leftpad and such is madness.
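For scale, here is roughly everything a left-pad style dependency does, sketched against the standard library (String.prototype.padStart, available since ES2017):

```javascript
// The entire useful surface of a left-pad style package is a one-liner
// over the standard String.prototype.padStart (ES2017).
function leftPad(str, len, ch = ' ') {
  return String(str).padStart(len, ch);
}

console.log(leftPad('5', 3, '0')); // "005"
```

Whether this belongs in your own utils file or in a dependency pulled from a stranger on the internet is exactly the trade-off under discussion.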
Let's safely call it immature. The principal objectives here:
Part of the problem is that people are taught one thing in school, expected to immediately find work, and then expected to perform something entirely different at work. That, or they are simply Java developers (or whatever, fill in the blank) who get retasked to write JavaScript. What ends up happening is that people want to do their previous job in the context of their current job, such as writing something that looks like Java in JavaScript. When that becomes the failure you would expect, they blame the technology for not being their previous thing.
Compounding that training failure is that JavaScript has a low barrier to entry. Some of the people writing in this language are less than brilliant. The end result is a cluster fuck, as evidenced by frequently used terms like imposter syndrome, snowflake, Dunning-Kruger, and so forth. The insecurity, fear, and delusion are clearly evident to me as a corporate JavaScript developer.
How do you solve for the obvious emotional failure that comes from the described lack of preparation? You hide under layers and layers of abstractions. You make life easier to the point of hoping code exists that does your job for you. Unfortunately, that easiness is the opposite of slim or simple. It's great until it isn't, if you aren't in a hurry.
Or they should do what they appear to be incapable of which is to provide certification and verification of certain packages. If people really need a lib for inarray then npm should maintain and certify it.
It should not be difficult for npm to raise the funds needed to do that.
Actually, it is. That's why foundations such as Apache, Mozilla, Khronos, etc exist. Transfers of ownership, abandonment, and bad faith are not new. We need to trust not only in the software for today, but for tomorrow as well. Foundations step in because they're able to harness the financial clout to attract maintainers.
"We must make software simpler."
We must make software INTERFACES simpler. And that means opinionated solutions, preferably with a HEAVILY opinionated top layer API for the 80% and a less opinionated lower layer API for the 20%.
This is where "wide open" development systems such as Javascript and Lisp fail: They offer too much freedom. You need a standard, opinionated library that provides most of what people need, just for interoperability sake. You need standard ways of doing the common tasks so that people can harness those patterns into reusable modules that stand a greater chance of working well with others.
The Gradle version in the Ubuntu repos is too old? Just add a PPA from some random guy on the internet. The Gradle wrapper also exists, but you'd need to type "./gradlew" instead, which is apparently enough reason to use the PPA instead.
Need a Gradle plugin? Oh, yeah, just add Maven Central and JCenter as repos. No idea who (/if anyone) audits those, but we'll just trust them anyways.
Need a Docker image? Just go to Docker Hub or Quay and download one that looks good.
Don't quite like Eclipse and the company won't pay for the IntelliJ Ultimate Edition? Just install the Community Edition, with no idea if that Privacy Statement actually means they could phish your SSH keys, phone number etc.
Need a way to transfer files? Google Drive is a great way to do that.
Need an operating system? We have either Windows 10 or Windows 10 for you, which has been shown to transfer encrypted, undisclosed packages to Redmond, even in the Enterprise Edition.
Office suite? MS Office, for which the same is true.
Need to look up some specific issue with security critical software you use? Just type it into Google.
I even once saw someone who typed a password into the Chrome URL bar to show the guy next to them how it's spelled.
With the sheer disregard I frequently encounter for any sensible behaviour, I've really stopped wondering how hacking, industry espionage etc. work. I'm already quite content if it's not just all out in the open.
"This device complies with part 15 of the FCC Rules. Operation is subject to the following two conditions: (1) This device may not cause harmful interference, and (2) this device must accept any interference received, including interference that may cause undesired operation."
Everything is backdoored, absolutely everything: your hardware, your firmware, the compiler used to build your software, your software, libraries used by your software, your network infrastructure, your crypto, your sources of entropy, the machines you communicate with, everything.
Still, this is not the main problem highlighted by this (yet another) NPM fiasco. I believe there are two and only two core problems that caused this issue, and that will enable future incidents like this:
Problem 1: some scoundrel violated the commons. The commons have no effective means of tracking them down and punishing them, so that a) they'll deeply regret what they've done, and b) other scoundrels will be deterred from trying. Lack of means of effective policing means various open source communities will keep having such problems.
Problem 2: people don't check their dependencies. Yes, I can already hear all startups screaming, "we can't afford it". Well, sucks to be you, but you'd better hustle and find a way. The licenses of almost all the software you use disclaim any responsibility for anything whatsoever, so if you expose users to that software and that software harms the users, it's your fault. You mishandled them. So find a way to vet your software, or buy some insurance against yet another NPM compromise. Or don't, and accept you're taking a risk.
To be clear, I'm not advocating a general "caveat emptor" attitude to software. We've built a civilization in part on systems and regulations that allow us to not vet everything we interact with, and yet be quite confident in our safety. But FOSS is not there yet (Problem 1). It's built on trust, but most communities have little means of protecting that trust. As for companies, I have little sympathy (Problem 2), like I wouldn't have much for any other company in any other industry that said they can't afford to do their basic job right.
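One cheap, partial step toward the "check your dependencies" discipline described above is pinning exact versions in package.json instead of semver ranges, so a freshly compromised release is not pulled in automatically on the next install. The package name and version here are illustrative:

```json
{
  "dependencies": {
    "some-utility": "1.3.0"
  }
}
```

A range like `"^1.3.0"` would silently accept any later 1.x release; an exact pin, combined with a committed lockfile, at least makes upgrading to new code a deliberate choice you can review.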
But I guess no amount of education will make the kind of developers go away who think 400+ package dependencies for a trivial app is a good thing, or that you absolutely have to write a proprietary not-quite-JavaScript language compiled to JavaScript via babel, or that node.js in itself is a goal, rather than merely an implementation.
The tricky part is that lots of npm packages end up bundled into an app that runs in the browser. It's just a guess, but I'd guess that significantly more npm imports end up running in the browser than in Node. So even giving Node a standard library as extensive as Ruby's wouldn't help as much as one would hope.
Package maintainer should indeed have found someone to pass it onto (see Cathedral & the Bazaar). And that doesn’t include the first person he’s never heard of stepping up.
BUT this applies to all package managers, maintainers, and OSS at some level.
The idea that say a startup has time to audit every line of every dependency is absurd. Even a big business can’t do that. The idea that you “don’t have to trust” the authors is untrue, in the current workflow. FOSS relies entirely on trust.
I’m not convinced FOSS is even a good idea at this point, but with the advent of widespread cyberwarfare we need to either introduce a sophisticated accompanying trust model, or exclude FOSS when working commercially.
This is a business opportunity. Audit FOSS and sell your audit guarantees in a contract. Offer services to audit more recent versions on the proviso that you can sell that audit elsewhere.
This will have the incidental benefit of encouraging clean software to be written in languages that minimise audit costs, as those projects will get used more.
Some commercial arbitration of FOSS now looks inevitable.
This may sound strange, but once upon a time tech companies actually had to either write or buy all of their software, and if they didn't have a contract in place that made someone else responsible for its quality, they were responsible themselves. So, basically, the world was absurd.
The way FOSS works is if users agree that a common good is important enough to invest in, and then they all benefit more than they would if they invested alone; it's anti-competitive. Free (and Open Source) software should be thought of as "free as in beer" i.e., the next round is on you. If you can't audit every line, audit some of them, or pay someone else to do it. Coordinate to get code coverage. If you use a project that is inadequately covered, you're responsible for everything that goes wrong.
If you don't even know what libraries your project depends on, how could that ever be thought of as anyone's fault but your own?
This may be a central part of the issue. It's a coordination problem and creating common knowledge.
Each corporation might be willing to pay to have some bits of their dependencies audited as long as others cover other pieces. But to do that they need to be able to announce the audit result and scan their dependency trees for pieces that are not audited and pick from those. You'd also need some common standards what constitutes an audit and the lawyers would probably want some limits on liability so results should be considered as best-effort or whatever.
There are no conventions and social protocols in place to support this.
I'm reading that you'd pay a third party so you can trust open source code and think that FOSS somehow exposes commercial code to more risk in some kind of cyber warfare? How is that not complete FUD? You already have the option to pay vendors like redhat for many open source software components if liability is your only concern, the same is true for many of the more complex libraries out there.
Closed source on the other hand would mean buying every single piece of code or paying in house devs to write that code. I get the quality concerns raised here up to a point but just because a company paid somebody to write something doesn't mean it's not effectively written by a solo dev under heavy time constraints. Except with FOSS you at least have the chance to go in and inspect/fix the thing yourself if needs be.
1) The PC software world did run for quite a few years on the model of predominantly commercial/proprietary software, most of it being closed-source, so it's not like it is some far-fetched idea that doesn't work in economic terms.
Personally, I prefer the commercial license/source-included model, with the emphasis on the author/company getting paid to ensure that the situations like the one described here are avoided. You can then have additional educational licenses for ensuring access to developer tools for educational purposes, but that's up to the author/company.
2) If you directly pay someone to write software, I would expect any such arrangement to include the source code as part of the work product, regardless of the ultimate visibility of the source code to outside parties.
Big business absolutely do that. Code quality review, security review, legal review. Every line of every 3rd party dependency.
Of course, for the most part big business doesn't take 3rd party dependencies. If you have a big enough software org, you write everything above the std library in-house. Why do you think so many of the big open-source frameworks are vended by big 10 tech firms?
If that were true, somebody would have noticed this hack before. It was online for 2 months, and they only found out because of a deprecation message.
> Of course, for the most part big business doesn't take 3rd party dependencies.
I have worked for many big companies, and they definitely use 3rd party deps.
> std library in-house
And that contains 3rd party deps.
Big companies have third parties check the libraries, but it looks like they are not good enough, because they didn't catch this one.
Pay Redhat enough and they will do that. Although you will be limited in what you can use.
I kind of agree, but remember you are getting free software. Not a little, but a ton of free software, and you feel like somebody should guarantee it all works fine.
Choices:
- See what is going on in all your deps and waste a lot of time
- Risk it and use the software without knowing what it is doing
- Pay somebody to guarantee that the software is not malicious
Discussion on HN (2016): https://news.ycombinator.com/item?id=11686325
I think Bret Victor is one of the people driving the effort behind that project: http://worrydream.com/