> ... Deno is (and always will be) a single executable file. Like a web browser, it knows how to fetch external code. In Deno, a single file can define arbitrarily complex behavior without any other tooling.
> ...
> Also like browsers, code is executed in a secure sandbox by default. Scripts cannot access the hard drive, open network connections, or make any other potentially malicious actions without permission. The browser provides APIs for accessing cameras and microphones, but users must first give permission. Deno provides analogous behaviour in the terminal. The above example will fail unless the --allow-net command-line flag is provided.
The Deno feature that seems to draw the most fire is dependency management. Some skeptics may be latching onto the first point without deeply considering the second.
Deno is just doing the same thing a browser does. In principle, there's nothing that JavaScript running in sandboxed Deno can do that it couldn't already do in a browser. So the security concerns seem a little over the top.
The one caveat is that once you open the sandbox on Deno, it appears you open it for all modules. But then again, that's what NPM users do all the time - by default.
As far as criticisms around module orchestration, ES modules take care of that as well. The dependency graph forms from local information without any extra file calling the shots.
This seems like an experiment worth trying at least.
The thing about the sandbox is that it's only going to be effective for very simple programs.
If you're building a real world application, especially a server application like in the example, you're probably going to want to listen on the network, do some db access and write logs.
For that you'd have to open up network and file access pretty much right off the bat. That combined with the 'download random code from any url and run it immediately', means it's going to be much less secure than the already not-that-secure NPM ecosystem.
> That combined with the 'download random code from any url
What protection does NPM actually give you?
Sure, they'll remove malware as they find it, but it is so trivially easy to publish packages and updates to NPM, there effectively is no security difference between an NPM module and a random URL. If you wouldn't feel comfortable cloning and executing random Github projects, then you shouldn't feel comfortable installing random NPM modules.
> and run it immediately
NPM packages also do this -- they can have install scripts that run as the current user, with network access that allows them to fetch, compile, and execute random binaries off the Internet.
From a security point of view, Deno is just making it clear up-front that you are downloading random code snippets, so that programmers are less likely to make the mistake of trusting a largely unmoderated package repository to protect themselves from malware.
I lean towards calling that a reasonably big security win on its own, even without the other sandboxing features.
> That combined with the 'download random code from any url and run it immediately', means it's going to be much less secure than the already not-that-secure NPM ecosystem.
What deno does is move package management away from the framework distribution. This is great - one thing I hate about node is that npm is default and you get only as much security as npm gives you. (You can switch the npm repo, but it's still the overwhelming favourite because it's officially bundled.)
Deno can eventually give you:
import lib from 'verified-secure-packages.com'
import lib from 'packages.cloudflare.com'
So you'll be able to pick a snippet repository based on your risk appetite.
Both the network and disk access permissions are granular, which means you can allow-write only to your logs folder, and allow net access only to your DB's address.
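The granularity is easy to picture. Here is a minimal sketch of how allowlist-style checks behave, in the spirit of flags like `--allow-net=db.internal:5432` and `--allow-write=/var/log/myapp` (the hostnames and paths are made up, and this is an illustration of the model, not Deno's implementation):

```typescript
// Toy model of allowlist-style permission checks. NOT Deno's actual code --
// just a sketch of what "granular" means here.

function netAllowed(allowlist: string[], host: string, port: number): boolean {
  // Each entry is "host" (any port) or "host:port" (that port only).
  return allowlist.some((entry) => {
    const [h, p] = entry.split(":");
    return h === host && (p === undefined || Number(p) === port);
  });
}

function writeAllowed(allowlist: string[], path: string): boolean {
  // An entry permits writes to that path and anything under it.
  return allowlist.some((dir) => path === dir || path.startsWith(dir + "/"));
}

console.log(netAllowed(["db.internal:5432"], "db.internal", 5432)); // true
console.log(netAllowed(["db.internal:5432"], "evil.example", 443)); // false
console.log(writeAllowed(["/var/log/myapp"], "/var/log/myapp/app.log")); // true
console.log(writeAllowed(["/var/log/myapp"], "/etc/passwd")); // false
```

Under a model like this, a compromised dependency that tries to phone home to an unlisted host simply gets a permission error.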
> For that you'd have to open up network and file access pretty much right off the bat.
For the network access I have an issue[0] open that asks to separate permissions to listen on the network from permission to dial. Also, along the way I want to have the ability to let the OS pick the port as an option.
Permissions are meant to work by whitelisting. So you wouldn't open access to the whole system just to talk to your DB, or to operate on some files.
Maybe this will develop into a standard of multi-process servers (real micro services you could say), where the permissions are only given to a slice of the application.
[0] https://github.com/denoland/deno/issues/2705
For the use-case you describe, you're just going to need network access: no file access and no process-forking needed. This is a big attack-surface reduction.
Moreover, I don't know how granular the network permission is, but if its implementation is smart, you could block almost all outbound network access except connections to your DB and the few APIs you may need to contact.
> For that you'd have to open up network and file access pretty much right off the bat.
I think that overall you're right, but it's worth noting that deno can restrict file system access to specific folders and can restrict read and write separately. It's plausible to me that you could have a web server that can only access specific folders.
I don't think running a public web server application is one of the envisioned use cases here. It looks like a tool for quickly and dirtily getting some job done. But I agree that to get something useful done, you probably need to open up a bunch of permissions, so you're still running arbitrary code on your machine.
It's always a good idea to run in a container, which limits the ports you can listen on, directories allowed for writing and reading, and can have its own firewall to limit outgoing connections.
If you don't need the firewall, you can just run in a chroot under a low-privilege user.
I mean, if you do otherwise, you are not following best practices and the voice of reason.
The manual is still pretty sparse, but it seems you can limit file access to certain files or directories, and that could be used to give it access to just the database and log files.
Simple solution to the dependency management (spitballing): a directory of files where the filename is the module name. Each file is simply:
<url><newline><size in bytes><newline><hash>
And then in an application:
import { serve } from deno.http.server;
If you want nested levels so deno.X.X wouldn't be a ton of files you could possibly just do nested directories so deno/http/server would equate to deno.http.server.
Most people would want the option to do dependency management on a per-project basis as well. Simply allow a command-line parameter to provide one or more other directories to source from first (besides presumably the global one for your installation).
If we wanted the file to be generated automatically, maybe something like this:
import { serve } from deno.http.server at "https://deno.land/std@0.50.0/http/server.ts";
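To make the spitballing concrete, here's a sketch of reading one of those pin files and mapping dotted names to nested paths. All names and the file format are hypothetical, exactly as proposed above:

```typescript
// Sketch of the proposed manifest format: one file per module whose content
// is "<url>\n<size in bytes>\n<hash>". Names like "deno.http.server" map to
// nested paths like "deno/http/server". Purely illustrative.

interface ModulePin {
  url: string;
  size: number; // expected size in bytes
  hash: string; // expected content hash
}

function parsePin(fileContents: string): ModulePin {
  const [url, size, hash] = fileContents.trim().split("\n");
  if (!url || !size || !hash) throw new Error("malformed pin file");
  return { url, size: Number(size), hash };
}

function moduleNameToPath(name: string): string {
  return name.split(".").join("/"); // deno.http.server -> deno/http/server
}

const pin = parsePin(
  "https://deno.land/std@0.50.0/http/server.ts\n24571\nabc123",
);
console.log(pin.url); // "https://deno.land/std@0.50.0/http/server.ts"
console.log(moduleNameToPath("deno.http.server")); // "deno/http/server"
```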
I guess I'm wondering why Deno is targeting V8 instead of Servo? Maybe I'm mistaken, but Servo [0] and Stylo [1] are both production-ready browser scripting and styling engines implemented in Rust.
[0] https://servo.org/
[1] https://wiki.mozilla.org/Quantum/Stylo
>Servo [0] and Stylo [1] are both production-ready browser scripting and styling engines implemented in Rust.
Servo is absolutely not production-ready. A couple of particular pieces of Servo, such as Stylo and WebRender, can be considered production-ready, but not so much the project as a whole.
If you're getting into Deno and want to keep up with new stuff from the ecosystem on a regular basis, we're now publishing https://denoweekly.com/ .. issue 2 just went out minutes after the 1.0 release. I've been doing JavaScript Weekly for 487 issues now, so this is not a flash in the pan or anything :-D
I'm curious how parallelism could be handled in the runtime. Besides exposing WebWorkers, would shared memory be a possibility? V8 looks like it's heading toward a portable WebAssembly SIMD accelerator.
>>> Promises all the way down
Async/await is a great pattern for render loops, chaining continuous window.requestAnimationFrame calls through resolved promises. Here is a nifty example computing a Buddhabrot and updating new data quite smoothly:
http://www.albertlobo.com/fractals/async-await-requestanimat...
Hmm, interesting! Thanks for the report. I just ran it through Qualys SSL Labs and everything passed. (We got a B though because we still support TLS 1.1.)
It's a multi-domain Let's Encrypt X3 certificate and I believe most LE users will be in a similar boat now.
> TSC must be ported to Rust. If you're interested in collaborating on this problem, please get in touch.
This is a massive undertaking. TSC is a moving target. I occasionally contribute to it. It’s a fairly complex project. Even the checker + binder (which is the core of TS) is pretty complex.
One idea that comes to mind is to work with the TypeScript team so that tsc only uses a subset of JS, such that it can be compiled down to WebAssembly and LLVM can spit out a highly optimized binary. This would not only benefit Deno, but the rest of the internet.
TSC has made some great architectural changes in the past, like using mostly functional code rather than lots of classes.
The target we should be aiming for is a powerful typed language like TypeScript that compiles very quickly to WebAssembly and can run in guaranteed sandboxed environments.
There already exists an experimental compiler that takes a subset of TypeScript and compiles it to native[1]. It might be able to target wasm instead of asm.
Also: If I'm not entirely mistaken Microsoft initially planned to have a TypeScript-specific interpreter in Explorer. This also might indicate that something like that could be possible.
1: https://www.microsoft.com/en-us/research/publication/static-...
This does seem like a dangerous side-path unrelated to the Deno project's needs.
From the description, it doesn't sound like Deno needs the type information for V8 optimizations (I thought they had explored that, but I don't recall, and the description here is unclear), so maybe switching to more of a two pass system of a simple as possible "type stripper" (like Babel's perhaps?) and leave tsc compilation for type checking as a separate background process. Maybe behind some sort of "production mode" flag so that type errors stop debug runs but in production assume you can strip types without waiting for a full compile?
Maybe even writing a type stripper in Rust isn't a bad idea, but definitely trying to capture all of tsc's functionality in Rust seems like a fool's errand.
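To illustrate the gap: a deliberately naive type stripper is nearly a one-liner, which is exactly why separating stripping from full tsc checking is tempting. This toy version handles only `x: number`-style annotations and would mangle generics, object types, and strings containing colons -- which in turn shows why capturing all of tsc in Rust is such a big job:

```typescript
// A deliberately naive "type stripper". It removes ": Identifier"
// (optionally with "[]") after parameters and return positions. It is a
// toy: real TypeScript syntax needs a real parser.

function stripSimpleAnnotations(src: string): string {
  return src.replace(/:\s*[A-Za-z_$][\w$]*(\[\])?/g, "");
}

const input = "function add(a: number, b: number): number { return a + b; }";
console.log(stripSimpleAnnotations(input));
// "function add(a, b) { return a + b; }"
```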
v8 has the ability to snapshot a program just after it loads, but before it executes. If you snapshot after doing some kind of warmup, to trigger the right optimisations, you get something that should fire up ready to go. That addresses what is probably the main problem: the compiler being repeatedly re-parsed from JavaScript and re-compiled on the fly every time it's invoked.
One problem with taking the v8 snapshot and using it as a binary executable is that it will probably be much slower than running v8 live. Although the startup time will be 10x faster, the runtime will be 10x slower.
The dependency management is highly questionable for me. Apart from the security concerns raised by others, I have huge concerns about availability.
In its current form, I'd never run Deno in production, because dependencies have to be loaded remotely. I understand they are fetched once and cached, but that will not help me if I'm spinning up additional servers on demand. What if the website of one of the packages I depend on goes down, just as I have a huge spike in traffic?
Say what you want about Node's dependency management, but at least I'm guaranteed reproducible builds, and the only SPOF is the NPM repositories, which I can easily get around by using one of the proxies.
Why can't you download all the packages you use actually with your source code? That's how software has been built for decades...
I'm a desktop developer so I understand I'm the dinosaur in the room but I've never understood why you would not cache all the component packages next to your own source code.
Since this is straightforward to do, I presume there is some tradeoff I've not thought about. Is it security? Do you want to get the latest packages automatically? But isn't that a security risk as well, since not all changes are improvements?
For Node, the main tradeoff is number and size of files. Usually the distribution of a node module (that which is downloaded into node_modules) contains the source, documentation, distribution, tests, etc. In my current project, it adds up to 500MB already.
They would do well to have an option to optimize dependencies for vendoring.
Hi!
The response to your fears is in the announcement.
"If you want to download dependencies alongside project code instead of using a global cache, use the $DENO_DIR env variable."
Then, it will work like node_modules.
Ah, in this case, I would then have to commit my dependencies into my VCS to maintain reproducible builds. I'm not sure I like that solution very much either. I've seen node_modules in multiple GBs, and I'm sure Deno's dependency sizes are going to be similar.
I think the primary way to manage dependencies should be in a local DIR and optionally, a URL can be specified.
The default in Deno is a questionable choice. Just don't fuck with what works. The default should be the safest behavior, with developers optionally enabling less safe behaviors.
That might work for some projects, but can quickly blow up the size of the repo.
I don't think it is an unsolvable problem. For example, other solutions could be using a mirror proxy to get packages instead of fetching directly from the source, or pre-populating the Deno dir from an artifact store. It would be nice to have documentation on how to do those, though.
It's just a URL right? So could you not mirror the packages to your own server if you're so concerned, or better yet import from a local file? Nothing here seems to suggest that packages must be loaded from an external URL.
Several responses to your concern but all of them seem to say "you can cache dependencies". How does Deno find dependencies ahead of runtime? Does Deno not offer dynamic imports at all?
If I have an application that uses, say, moment.js and want to import a specific locale, typically this is done using a dynamic import.
You’d deploy builds that include all the dependencies. This isn’t Node where you have to download all deps on your production server, they are built into your deployed files.
Go supports "vendor" folders for storing dependencies locally after the initial download. That combined with Go Modules means you can handle everything locally and (I believe) reproducibly.
I like what Deno is selling. URL like import path is great, I don't know why people are dismissing it. It is easy to get up-and-running quickly.
Looks like my personal law/rule is in effect again: The harsher HN critics are, the more successful the product will be. I have no doubt Deno will be successful.
The GoLang-like URL import and dependency management are indeed an innovation in simplicity while simultaneously offering better compatibility with browser JavaScript.
Perhaps the HN-hate is not about simplified greenfield tech as much as it is about breaking established brownfield processes and modules.
Almost all successful, mainstream techs are like that. From a purely technical perspective, they are awful (or were awful at launch); they were adopted because they were easy to use. When I say awful, I mean for professional use in high-impact environments: financial, healthcare, automotive, etc.
Few people would argue that except for ease of use and ubiquity, any of these techs were superior to their competitors at launch or even a few years after.
After a while what happens is that these techs become entrenched and more serious devs have to dig in and they generally make these techs bearable. See Javascript before V8 and after, as an example.
A big chunk of the HN crowd is high powered professional developers, people working for FAANGs and startups with interesting domains. It's only normal they criticize what they consider half-baked tech.
Forget the (reasonable) security and reliability concerns people have already brought up with regard to importing bare URLs. How about just the basic features of dealing with other people's code: how am I supposed to update packages? Do we write some separate tool (but not a package management tool!) that parses out import URLs, increments the semver, and... cURLs to see if a new version exists? Like if I am currently importing "https://whatever.com/blah@1.0.1", do I just poll to see if "1.0.2" exists? Maybe check for "2.0.0" too just in case? Is the expectation that I should be checking the blogs of all these packages myself for minor updates? Then, if you get past that, you have a huge N-line change where N is every file that references that package, and thus inlines the versioning information, instead of a simple and readable one-line diff that shows "- blah: 1.0.1, + blah: 1.0.2".
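To make the polling absurdity concrete, here's roughly what a "check for updates by guessing" helper would have to enumerate for every pinned URL -- and even this only covers the three obvious bumps (illustrative only):

```typescript
// Enumerate the candidate next versions you'd have to probe for, given a
// pinned semver string. A package manager's metadata endpoint makes this
// unnecessary; with bare URLs you're reduced to guessing.

function candidateBumps(version: string): string[] {
  const [major, minor, patch] = version.split(".").map(Number);
  return [
    `${major}.${minor}.${patch + 1}`, // next patch
    `${major}.${minor + 1}.0`,        // next minor
    `${major + 1}.0.0`,               // next major
  ];
}

console.log(candidateBumps("1.0.1")); // [ "1.0.2", "1.1.0", "2.0.0" ]
```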
Another thing is that with package.json every dependency can say which versions of its dependencies it works with. This lets you update a dependency that is used by other dependencies and only have a single version (most up to date) of it. Some package managers also let you completely override a version that one of your dependencies uses, allowing you to force the use of a newer version.
With Deno, both of these use cases seem way harder to satisfy. None of your dependencies can say "hey, I work with versions 1.1 to 1.3 of this dependency", instead they link to a hardcoded URL. The best chance of overriding or updating anything is if Deno supports a way to shim dependencies, but even then you might need to manually analyze your dependency tree and shim a whole bunch of URLs of the same library. On top of that, as soon as you update anything, your shims might be out of date and you need to go through the whole process again. To make the whole process easier, Deno could compare checksums of the dependencies it downloads and through that it could show you a list of URLs that all return the same library, but this would be like the reverse of package.json: instead of centrally managing your dependencies, you look at your dependencies after they have been imported and then try to make sense of it.
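The "reverse package.json" idea at the end could look roughly like this sketch, which groups imported URLs by a hash of the bytes they returned. The URLs and contents are faked in memory here; a real tool would read Deno's download cache:

```typescript
import { createHash } from "crypto";

// Group fetched module URLs by the sha256 digest of their contents, so you
// can at least see which URLs served the same library after the fact.

function groupByDigest(
  fetched: { url: string; body: string }[],
): Record<string, string[]> {
  const groups: Record<string, string[]> = {};
  for (const item of fetched) {
    const digest = createHash("sha256").update(item.body).digest("hex");
    if (!groups[digest]) groups[digest] = [];
    groups[digest].push(item.url);
  }
  return groups;
}

const fetched = [
  { url: "https://a.example/lodash@4.17.15.js", body: "module lodash v4.17.15" },
  { url: "https://b.example/lodash-4.17.15.js", body: "module lodash v4.17.15" },
  { url: "https://c.example/left-pad.js", body: "module left-pad" },
];

const groups = groupByDigest(fetched);
// The first two URLs served identical bytes, so there are two groups.
console.log(Object.keys(groups).length); // 2
```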
I prefer a unix approach where each tool does a single thing.
The runtime does runtime things and someone can build a package manager to do package manager things.
The benefit of having runtime + package management bundled into one is you have an opinionated and standard way of doing things - the downside is if the package manager stinks then the whole ecosystem stinks.
Yes, that is of course the point. There’s an infinite number of possible later versions to check for. The suggestion to poll for new versions using cURL was sarcastic. These “conventions” you speak of actually get handled by... package managers! If not everyone who hosts a package needs to “know” about magic files and make sure they make it following a spec that isn’t enforced by anyone and doesn’t break immediately but rather much later when someone tries to update. It’s like everyone managing their own database row with an informally agreed upon database schema.
Maybe a convention will arise, that you do all the imports in one file (basically like a `package.json` file) and import that from the rest of your code? It seems hackish to me but could work.
You have to stop thinking in terms of NPM where it takes 1000000000 packages to do anything. A Deno application is designed to be distributed as a single file. You can override the default behavior to have NPM like stupidity, but if that is really your goal why bother moving to Deno in the first place?
Forget 10000000 packages. Many languages often make use of 10s of packages. If I have several projects, each with around 10 packages, and no automated way to just check if all my projects’ respective dependencies have security updates that could be applied, it seems to go against the stated security goal.
Separately I’m not sure what is enforcing this “small dependency graph” aside from making it hard to import things I guess. I wouldn’t be surprised if you end up with the normal behavior of people coming up with cool things and other people importing them.
Congratulations on the 1.0 release! I've been using Deno as my primary "hacking" runtime for several months now, I appreciate how quickly I can throw together a simple script and get something working. (It's even easier than ts-node, which I primarily used previously.)
I would love to see more focus in the future on the REPL in Deno. I still find myself trying things in the Node.js REPL for the autocomplete support. I'm excited to see how Deno can take advantage of native TypeScript support to make a REPL more productive: subtle type hinting, integrated tsdocs, and type-aware autocomplete (especially for a future pipeline operator).
There are both ocaml and haskell repls, so it can be done with languages whose type systems are the focus. Not sure if there's anything specific about typescript that would make it hard, though.
I thought a lot about it, and it seems as secure as node_modules, because anybody can publish to npm anyway. You can even depend on non-npm repos (GitHub, URLs...) from an npm-based package.
If you want to "feel" as safe, you have import maps in Deno, which work like package.json.
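For reference, an import map is just a small JSON file mapping bare or prefix specifiers to URLs, so version choices live in one place instead of being inlined at every import site (the std URL here mirrors the one used elsewhere in this thread; note that import maps were still an unstable feature at Deno 1.0):

```json
{
  "imports": {
    "http/": "https://deno.land/std@0.50.0/http/"
  }
}
```

With that map, `import { serve } from "http/server.ts";` resolves to the pinned std URL, and bumping the version becomes a one-line diff again.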
Overall, I think Deno is more secure because it cuts the man in the middle (npm) and you can make a npm mirror with low effort, a simple fork will do. Which means you can not only precisely pin which code you want, but also make sure nobody knows you use those packages either.
Take it with an open mind, a new "JSX" or async programming moment. People will hate it, then will start to see the value of this design down the road.
> I thought a lot about it, and it seems as secure as node_modules, because anybody can publish to npm anyway
npm installs aren't the same as installing from a random URL, because:
* NPM (the org) guarantees that published versions of packages are immutable, and will never change in future. This is definitely not true for a random URL.
* NPM (the tool) stores a hash of the package in your package-lock.json, and installing via `npm ci` (which enforces the lockfile and never updates it in any case) guarantees that the package you get matches that hash.
Downloading from a random URL can return anything, at the whims of the owner, or anybody else who can successfully mitm your traffic. Installing a package via npm is the same only the very first time you ever install it. Once you've done that, and you're happy that the version you're using is safe, you have clear guarantees on future behaviour.
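The lockfile guarantee boils down to a few lines. A sketch of the mechanism, with the field shape inspired by package-lock.json's `integrity` entries and the package bytes faked as strings:

```typescript
import { createHash } from "crypto";

// The lockfile pins an integrity hash; a clean install refuses anything
// that doesn't match, so a registry (or URL) silently serving different
// bytes fails loudly instead of running.

function integrityOf(contents: string): string {
  return "sha512-" + createHash("sha512").update(contents).digest("base64");
}

function verify(contents: string, lockfileIntegrity: string): void {
  if (integrityOf(contents) !== lockfileIntegrity) {
    throw new Error("integrity checksum failed -- refusing to install");
  }
}

const published = "package contents at publish time";
const lockEntry = integrityOf(published); // recorded on first install

verify(published, lockEntry); // fine: same bytes, same hash

let rejected = false;
try {
  verify("tampered contents", lockEntry); // the URL now serves other bytes
} catch {
  rejected = true;
}
console.log(rejected); // true
```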
One advantage of having a centralized repository is that the maintainers of that repository have the ability to remove genuinely malicious changes (even if it's at the expense of breaking builds). Eliminating the middle man isn't always a great thing when one of the people on the end is acting maliciously.
> It's basically the same "exposure" as importing a random npm, but it has the benefit of being explicit when you do it.
This is definitely false. For all the problems with the NPM registry and the Node dependency situation, an NPM package at a specific version is not just at the whims of whatever happens to be at the other end of a URL at any given moment it's requested. This is a huge vulnerability that the Node/NPM world does not currently have.
> It's basically the same "exposure" as importing a random npm, but it has the benefit of being explicit when you do it.
I'm not sure how this works in detail here, but at least in NPM you got a chance to download packages, inspect them and fix the versions if so desired. Importantly, this gave you control over your transitive dependencies as well.
This seems more like the curl | bash school of package management.
It's not just about the integrity. The url may very well provide what they claim to provide, so checksums would match, but it's the direct downloading and running of remote code that is terrifying.
This is pretty much like all the bash one-liners piping and executing a curl/wget download. I understand there are sandbox restrictions, but are the restrictions on a per dependency level, or on a program level?
If they are on a program level, they are essentially useless, since the first thing I'm going to do is break out of the sandbox to let my program do whatever it needs to do (read fs/network etc.). If it is on a per dependency level, then am I really expected to manage sandbox permissions for all of my projects dependencies?
Does Deno have some built in way to vendor / download the imports pre-execution? I don't want my production service to fail to launch because some random repo is offline.
You can also use the built in bundle command to bundle all of your dependencies and your code into a single, easily deployable file. https://deno.land/manual/tools/bundler.
Deno caches local copies and offers control over when to reload them. In terms of vendoring, you can simply download everything yourself and use local paths for imports.
Deno requires that you explicitly give the process the permissions it has. I think that's much better than praying that a package hasn't gone rogue, like with node. If you don't trust the remote script, run it without any permissions and capture the output. Using multiple processes with explicit permissions is much safer.
I have only the bare minimum of experience with Node.js. Would you mind fleshing out why that is so?
Not saying that makes it a bad idea, but importing/downloading trusted code over http(s) is not simple even if the protocol sorta is.
Yup! I am really excited about Deno and curious about how popular it will be in a few years.
Of course, Deno has an official Twitter account as well at https://twitter.com/deno_land :-)
https://github.com/swc-project/swc
It's still not feature-complete, but there aren't any alternatives written in Rust that I know of.
From the description, it doesn't sound like Deno needs the type information for V8 optimizations (I thought they had explored that, but I don't recall, and the description here is unclear). So maybe switch to a two-pass system: a simple-as-possible "type stripper" (like Babel's, perhaps) for execution, leaving tsc compilation for type checking as a separate background process. Maybe put it behind some sort of "production mode" flag, so that type errors stop debug runs, but in production you assume you can strip types without waiting for a full compile.
Maybe even writing a type stripper in Rust isn't a bad idea, but definitely trying to capture all of tsc's functionality in Rust seems like a fool's errand.
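To make the "type stripper vs. type checker" distinction concrete, here is a deliberately toy illustration. It handles only simple `: Type` annotations on variable declarations; a real stripper (swc, Babel's TypeScript preset, or tsc's transpile-only mode) must actually parse the syntax, since regexes cannot handle generics, arrow functions, and so on:

```typescript
// Toy only: strip simple `: Type` annotations from let/const/var declarations.
// Turns `const n: number = 42` into `const n = 42` without any type checking.
function stripSimpleAnnotations(src: string): string {
  return src.replace(/\b(let|const|var)\s+(\w+)\s*:\s*[\w.<>\[\]]+/g, "$1 $2");
}

console.log(stripSimpleAnnotations("const n: number = 42;")); // "const n = 42;"
```

The point is that erasing types is near-trivial compared to checking them, which is why splitting the two passes is attractive for startup time.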
I use it with ts-node all the time for my docker images that require fast startup.
node -r ts-node/register/transpile-only xyz.ts
In its current form, I'd never run Deno in production, because dependencies have to be loaded remotely. I understand they are fetched once and cached, but that will not help me if I'm spinning up additional servers on demand. What if the website of one of the packages I depend on goes down, just as I have a huge spike in traffic?
Say what you want about Node's dependency management, but at least I'm guaranteed reproducible builds, and the only SPOF is the NPM repositories, which I can easily get around by using one of the proxies.
I'm a desktop developer so I understand I'm the dinosaur in the room but I've never understood why you would not cache all the component packages next to your own source code.
Since this is straightforward to do, I presume there is some tradeoff I've not thought about. Is it security? Do you want to get the latest packages automatically? But isn't that a security risk as well, as not all changes are improvements?
They would do well to have an option to optimize dependencies for vendoring.
The default in Deno is a questionable choice. Just don't fuck with what works. The default should be the safest behavior, with developers optionally enabling less safe behaviors.
I don't think it is an unsolvable problem. For example, other solutions could be using a mirror proxy to get packages instead of fetching directly from the source, or pre-populating the deno dir from an artifact store. It would be nice to have documentation on how to do those, though.
And this is different from NPM how? Except that I've now lost all the tooling around NPM/Yarn.
https://deno.land/manual/tools/bundler
If I have an application that uses, say, moment.js and want to import a specific locale, typically this is done using a dynamic import.
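A sketch of the dynamic-import case. With moment.js on npm you'd write something like `await import(\`moment/locale/${name}\`)`; with URL imports the specifier would instead be a full, versioned URL. A `data:` URL module stands in here only so the sketch is self-contained (Deno and modern Node both accept `data:` module specifiers):

```typescript
// Hypothetical loadLocale: build a module specifier at runtime and import it.
// The data: URL is a stand-in for e.g. a CDN URL like
// https://cdn.example.com/moment@2.24.0/locale/es.js (illustrative, not real).
async function loadLocale(name: string): Promise<{ locale: string }> {
  const source = `export const locale = ${JSON.stringify(name)};`;
  return await import(`data:text/javascript,${encodeURIComponent(source)}`);
}

const mod = await loadLocale("es");
console.log(mod.locale); // "es"
```

The awkward part with URL imports is that the base URL ends up hardcoded in application code rather than resolved by a package manager.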
https://github.com/ChALkeR/notes/blob/master/Gathering-weak-...
- node ships with npm
- npm has a high number of dependencies
- npm does not implement good practices around authentication.
Can someone compromise npm itself? Probably, according to that article.
Looks like my personal law/rule is in effect again: The harsher HN critics are, the more successful the product will be. I have no doubt Deno will be successful.
Perhaps the HN-hate is not about simplified greenfield tech as much as it is about breaking established brownfield processes and modules.
Or how about attack vectors like DNS poisoning, or government-based firewalls?
I know there's this[1], but somehow I still feel uneasy because the web is so fragile and ephemeral...
At the very least I would like to have the standard library offline...
[1] https://github.com/denoland/deno/blob/master/docs/linking_to...
What is anyone going to do about it? Anything has a chance of getting hacked or goes down just when you need it, be it GitHub, npmjs.org...
Blaming the tool for not having a protection against DNS poisoning is a bit far fetched.
Almost all successful, mainstream techs are like that. From a purely technical perspective, they are awful (or were awful at launch); they were just adopted because they were easy to use. When I say awful, I mean for professional use in high impact environments: financial, healthcare, automotive, etc.
Examples: VB/VBA, Javascript, PHP, MySQL, Mongo, Docker, Node.
Few people would argue that except for ease of use and ubiquity, any of these techs were superior to their competitors at launch or even a few years after.
After a while what happens is that these techs become entrenched and more serious devs have to dig in and they generally make these techs bearable. See Javascript before V8 and after, as an example.
A big chunk of the HN crowd is high powered professional developers, people working for FAANGs and startups with interesting domains. It's only normal they criticize what they consider half-baked tech.
With Deno, both of these use cases seem way harder to satisfy. None of your dependencies can say "hey, I work with versions 1.1 to 1.3 of this dependency"; instead they link to a hardcoded URL.

The best chance of overriding or updating anything is if Deno supports a way to shim dependencies, but even then you might need to manually analyze your dependency tree and shim a whole bunch of URLs of the same library. On top of that, as soon as you update anything, your shims might be out of date and you need to go through the whole process again.

To make the whole process easier, Deno could compare checksums of the dependencies it downloads and through that show you a list of URLs that all return the same library, but this would be like the reverse of package.json: instead of centrally managing your dependencies, you look at your dependencies after they have been imported and then try to make sense of it.
That's a real problem that needs to be solved.
Also, what happens when lib A and lib B depend on different versions of lib C? Does each get its own scoped instance of C?
* A deps.ts that handles all external dependencies and re-exports them
* Import maps
Neither of these really give you a way to do the equivalent of "npm update". But I almost never want to update all my packages at once.
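For concreteness, the deps.ts convention looks something like this (URLs and versions illustrative, not fetched here):

```typescript
// deps.ts — one central file that pins every external URL and re-exports it.
export { serve } from "https://deno.land/std@0.50.0/http/server.ts";
export { assertEquals } from "https://deno.land/std@0.50.0/testing/asserts.ts";

// Elsewhere in the project:
//   import { serve } from "./deps.ts";
```

A version bump means editing version strings in this one file, which is roughly a manual equivalent of editing package.json; but nothing like `npm update` exists to do it for you.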
The runtime does runtime things and someone can build a package manager to do package manager things.
The benefit of having runtime + package management bundled into one is you have an opinionated and standard way of doing things - the downside is if the package manager stinks then the whole ecosystem stinks.
[1] https://deno.land/manual/linking_to_external_code#it-seems-u...
Separately I’m not sure what is enforcing this “small dependency graph” aside from making it hard to import things I guess. I wouldn’t be surprised if you end up with the normal behavior of people coming up with cool things and other people importing them.
I would love to see more focus in the future on the REPL in Deno. I still find myself trying things in the Node.js REPL for the autocomplete support. I'm excited to see how Deno can take advantage of native TypeScript support to make a REPL more productive: subtle type hinting, integrated tsdocs, and type-aware autocomplete (especially for a future pipeline operator).
> fish
I see what you did there, and I approve.
I think stepping outside the npm ecosystem is going to be a bigger issue than people think.
I'm sure I'm missing something obvious in that example, but that capability terrifies me.
If you want to "feel" as safe, you have import maps in deno, which works like package.json.
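An import map is just a small JSON file mapping bare specifiers to URLs (sketch; URLs illustrative, and in early Deno versions this is loaded via a command-line flag as an unstable feature):

```json
{
  "imports": {
    "http/": "https://deno.land/std@0.50.0/http/",
    "moment": "https://cdn.example.com/moment@2.24.0/moment.js"
  }
}
```

Code can then write `import { serve } from "http/server.ts";`, and you re-pin versions in one file, much like package.json.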
Overall, I think Deno is more secure because it cuts out the man in the middle (npm), and you can make an npm mirror with low effort; a simple fork will do. Which means you can not only precisely pin which code you want, but also make sure nobody knows which packages you use.
Take it with an open mind; this is a new "JSX" or async-programming moment. People will hate it, then start to see the value of this design down the road.
npm installs aren't the same as installing from a random URL, because:
* NPM (the org) guarantees that published versions of packages are immutable, and will never change in future. This is definitely not true for a random URL.
* NPM (the tool) stores a hash of the package in your package-lock.json, and installing via `npm ci` (which enforces the lockfile and never updates it in any case) guarantees that the package you get matches that hash.
Downloading from a random URL can return anything, at the whims of the owner, or anybody else who can successfully mitm your traffic. Installing a package via npm is the same only the very first time you ever install it. Once you've done that, and you're happy that the version you're using is safe, you have clear guarantees on future behaviour.
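For reference, a package-lock.json entry records both the resolved URL and the content hash (hash value elided here; the package name is just an example):

```json
{
  "dependencies": {
    "left-pad": {
      "version": "1.3.0",
      "resolved": "https://registry.npmjs.org/left-pad/-/left-pad-1.3.0.tgz",
      "integrity": "sha512-<base64 digest of the tarball>"
    }
  }
}
```

`npm ci` fails the install if the fetched tarball's digest doesn't match the `integrity` field, which is what gives you the guarantee after the first install.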
It's also exactly what the websites you visit do. ;)
This is definitely false. For all the problems with the NPM registry and the Node dependency situation, an NPM package at a specific version is not just at the whims of whatever happens to be at the other end of a URL at any given moment it's requested. This is a huge vulnerability that the Node/NPM world does not currently have.
I'm not sure how this works in detail here, but at least in NPM you got a chance to download packages, inspect them and fix the versions if so desired. Importantly, this gave you control over your transitive dependencies as well.
This seems more like the curl | bash school of package management.
Edit: This is explained in more detail at https://deno.land/manual/linking_to_external_code and indeed seems a lot more sane.
> It's also exactly what the websites you visit do. ;)
Well yes, and it causes huge problems there already - see the whole mess we have with trackers and page bloat.
You can add an “integrity” attribute to script tags in the browser.
https://developer.mozilla.org/en-US/docs/Web/Security/Subres...
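An SRI-protected script tag looks like this (hash value elided; URL illustrative). The browser refuses to execute the file if its bytes don't match the hash:

```html
<script
  src="https://cdn.example.com/lib@1.2.3/lib.min.js"
  integrity="sha384-<base64 digest>"
  crossorigin="anonymous"></script>
```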
This is pretty much like all the bash one-liners piping and executing a curl/wget download. I understand there are sandbox restrictions, but are the restrictions on a per dependency level, or on a program level?
If they are on a program level, they are essentially useless, since the first thing I'm going to do is break out of the sandbox to let my program do whatever it needs to do (read fs/network etc.). If it is on a per dependency level, then am I really expected to manage sandbox permissions for all of my projects dependencies?
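As far as I can tell, the permissions are per-process: the flags you pass apply to the whole program, dependencies included (sketch; `server.ts` and the paths are hypothetical):

```shell
# Grants apply to everything the process runs, including every imported module.
deno run --allow-net --allow-read=/var/log server.ts
```

So there is no way to say "this one dependency may not touch the network" while the rest of the program can.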
https://deno.land/manual/linking_to_external_code
Do you trust this thing? If not, you're better off developing it yourself, or working with something you do trust.
I don't find versioned URLs much more difficult to work with than package@<version> though.