Why? Exporting a single class for this kind of lib is preferable. I'd do:
export default IPFSDropzone;
See react-dropzone, which does this [1]. (For that matter, the author's questions about project setup would probably be answered by looking at react-dropzone's, which looks pretty good.)
Also with npm scripts, local deps that expose executables are added to your PATH so you can call them directly, e.g.
Because having a mix of libraries/modules, some of which export defaults and some of which don't, makes it cumbersome to require them here and there: I have to remember which was exported as a default and which had the class inside. It also makes it visually more obvious which variables are modules and which are classes/functions. For my projects I avoid default exports even when a module only has one class (which happens frequently).
Additionally, ES6 default exports don't mix well with CommonJS; sometimes I have to add require('foo').default for no apparent reason.
> Also with npm scripts, local deps that expose executables are added to your PATH
I thought it didn't work reliably on Windows or Cygwin or something, but I just tried and it works everywhere. Good to know! The only gotcha is that ";" and "&&" must have spaces around them.
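As a sketch of what this looks like in practice (package and file names are illustrative): npm prepends node_modules/.bin to PATH while running scripts, so a locally installed tool such as browserify can be invoked bare, without a ./node_modules/.bin/ prefix.

```json
{
  "devDependencies": {
    "browserify": "^16.2.0"
  },
  "scripts": {
    "build": "mkdir -p dist && browserify index.js -o dist/bundle.js"
  }
}
```

Running `npm run build` resolves browserify from the local node_modules/.bin even when it is not installed globally; note the spaces around "&&" as mentioned above.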
I see. Then don't go "full ES6". But I still prefer exporting an object containing the class, for the reasons I explained in a comment above.
But I do see ES6 modules in the wild, and I'm using some of them with other languages. So I guess it's fine to leave it as-is. Or, if you want to ensure maximum compatibility, use Babel as I suggested but add the sources to .npmignore so they're not duplicated in the published package.
This is the #1 thing I see people confused about with IPFS. Basically, there are three ways for content to become available:
1. You add the files to your own node. This is how content gets added, but obviously it only lasts as long as your node is connected to the Internet, just like an ordinary HTTP server.
2. Someone views your content using their node. This causes their node to cache the content temporarily (IIRC for 30 minutes by default) and publish it to other nodes. In theory, if your content got at least one view every half hour it could live on in the users' caches forever.
3. Someone tells their node to pin your content. The node will then keep it permanently (until they unpin it) and serve it whenever it's connected to the Internet. Generally they would do this if they believe it's valuable--either because they want to keep it themselves, or as a public service to make it available to others (for example, pinning the Turkish Wikipedia to help evade censorship).
There are also several pinning services, which you can pay to have them pin your content on their node, in much the same way as you'd pay a hosting provider to serve your content.
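The three cases above map to plain go-ipfs CLI commands. This transcript is illustrative only: it assumes a local IPFS daemon is running, and `<hash>` stands in for the real content hash (CID) printed by `ipfs add`.

```shell
# 1. Add content on your own node (available while your node is online):
ipfs add mypage.html          # prints the content's hash (CID)

# 2. Viewing content caches it temporarily on the viewer's node:
ipfs cat <hash> > /dev/null

# 3. Pin content to keep it until explicitly unpinned:
ipfs pin add <hash>
ipfs pin rm <hash>            # later, to unpin
```

A pinning service effectively runs step 3 on your behalf, on their nodes.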
In this case, it appears the node runs in your browser, right? Does it go away when you close the tab? If so, will your link survive for 30 minutes and then disappear, or is it just gone immediately?
Is there an nginx/Apache equivalent that can be used for hosting one's own content?
From what I've read I could just run the IPFS daemon on my VPS/hosting server, but is there a user-friendly way where I don't have to manually add every file every time? How easy is it to host a web page with it?
How well does IPFS handle NAT? Could I host my own pinning server on a Raspberry Pi on my home network?
This hasn't been launched yet, but IPFS is also planning to introduce their cryptocurrency, Filecoin, to offer cheap hosting. People will then host your site (aka pin it on their IPFS node) for Filecoins, which should be cheaper than traditional file storage services.
There was someone (victorbjelkholm) talking about the zoom-out limits of https://filemap.xyz/ on IRC and I couldn't reply there:
I've arbitrarily imposed these limits because the purpose of the app is not for casual visitors to wander around the world browsing everybody's files, they're supposed to go to specific addresses and browse only their files there.
This measure will not protect anyone absolutely, since an "attacker" can easily read the entire database and figure out where all the files are, but it protects users from 99% of casual visitors.
I have to say that Dropzone is super cool, and this project makes a lot of sense. We also use Dropzone.js to store files and their hash values at VisiFile. A demo is here:
Don't do this:

module.exports = IPFSDropzone

Do this (and change documentation accordingly):

module.exports = {IPFSDropzone}

Or, to go full ES6, add the package babel-env, add a "prepublish" build script to package.json, and change "main" to "build/index.js". Run "npm run prepublish" to test.
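A sketch of what that package.json could look like (the exact script contents, package versions, and the babel-preset-env name are assumptions based on the description above, Babel 6 era):

```json
{
  "main": "build/index.js",
  "devDependencies": {
    "babel-cli": "^6.26.0",
    "babel-preset-env": "^1.7.0"
  },
  "babel": {
    "presets": ["env"]
  },
  "scripts": {
    "prepublish": "babel src --out-dir build"
  }
}
```

With this, `npm run prepublish` compiles src/ into build/, and consumers requiring the package get the compiled CommonJS entry point.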
Edit: Now that I think of it, compiling with babel is probably unnecessary. But it doesn't hurt and can help old setups.
> module.exports = IPFSDropzone
> Do this (and change documentation accordingly)
> module.exports = {IPFSDropzone}
[1] https://github.com/react-dropzone/react-dropzone/blob/master...
* If I go ES6 I'll require all people importing it to also use an ES6 module bundler -- which means Babel only, or Rollup, which never works.
* If I precompile and expect non-Babel people to use the precompiled then these people will have duplicated dependencies everywhere.
Not complaining, but all this compatibility mess seems hard to me. I use Browserify!
Personally I believe IPFS to be far superior. BitTorrent has already shown that P2P works great. We don't need tokens for this.
Try reading their documentation. https://ipfs.io/docs/
http://139.162.228.5/
The code is here: https://github.com/zubairq/visifile/blob/master/public/index...
If someone knows where to plug into Uppy's upload function then it should be really easy.