Interesting. Can someone explain how IPFS works? Is it like Tor? I don't have any interest in running some sort of distributed content farm that might place CP on my computer. Even if the chance of that happening is 0.00001%.
The best one-liner I've heard: it's like one giant git repo that's inside of one giant bittorrent.
My (imperfect) understanding is that it runs like a market: you temporarily store and forward blocks (<1MB) that are considered "valuable" (i.e. popular) in exchange for people forwarding you the blocks that you want. So there's a cache at each node where popular blocks are held - which I'm sure you can keep in RAM if you want. So while it's possible that content you don't want might pass through your IPFS node, it's pretty ephemeral.
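If you want to poke at that cache yourself, the stock CLI will show it to you (go-ipfs commands; the hash below is whatever `ipfs add` prints for your file):

    # adding a file chunks it into blocks and prints its content hash
    $ ipfs add hello.txt
    added <hash> hello.txt

    # fetching by hash pulls blocks from whichever peers have them,
    # caching them locally as a side effect
    $ ipfs cat <hash>

    # list every block currently in your local cache
    $ ipfs refs local

    # and see how much disk it's using
    $ ipfs repo stat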
In general, I don't think IPFS is a great place to do naughty things - it's not big on anonymity, and since blocks drop off the network if they're not being actively requested, to keep something up there you have to store it permanently _somewhere_, which is going to be traceable to the same degree that running a webserver is.
As far as my understanding goes (and I've spent a day discussing it with IPFS authors in person this September), IPFS doesn't store or forward anything you haven't explicitly requested.
So you participate only in sharing things you're aware of (and bad things are rather easy to remove from that cache) — and that's a deliberate design decision.
Things in the local IPFS cache can indeed be "garbage-collected" (and there's a CLI command to trigger GC manually), but the IPFS daemon has a concept of _pinning_: pinned objects won't be collected, and will remain stored (and shared) for as long as the pin is in place.
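For reference, the pin/GC dance from the CLI (standard go-ipfs commands; the hash is a stand-in):

    # pin something so GC will never touch it (child blocks included)
    $ ipfs pin add -r <hash>

    # see what's pinned
    $ ipfs pin ls --type=recursive

    # unpin it, and a manual GC run is then free to drop the blocks
    $ ipfs pin rm -r <hash>
    $ ipfs repo gc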
In fact, IPFS, via the DHT, tells the network about your whole network topology, including any internal addresses you may have, and your VPN endpoints too.
There are still ongoing discussions about how to handle Tor connections, because right now, if you were to use a Tor connection with IPFS, it would tell the whole network your public, private, and .onion addresses, all at once.
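You can see exactly what your node announces by running `ipfs id`: the Addresses list typically contains loopback, LAN, VPN-tunnel, and public multiaddrs all together, and all of it gets gossiped into the DHT (sample output below is illustrative, with made-up addresses):

    $ ipfs id
    {
        "ID": "<your-peer-id>",
        "Addresses": [
            "/ip4/127.0.0.1/tcp/4001/ipfs/<your-peer-id>",
            "/ip4/192.168.1.5/tcp/4001/ipfs/<your-peer-id>",
            "/ip4/10.8.0.2/tcp/4001/ipfs/<your-peer-id>",
            "/ip4/203.0.113.7/tcp/4001/ipfs/<your-peer-id>"
        ]
    }

The 192.168.x.x entry is a LAN interface and the 10.x entry a typical VPN tunnel address.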
Freenet encrypts content, parcels it into chunks and distributes those chunks amongst peers. Does this meet your definition of "distributed content farm that might place CP on my computer"?
I'm curious because I've seen objections to Freenet for that reason yet the content stored is in no way CP. No bad content can be reconstructed from the data in your store. Not just because it's encrypted but because you'd be holding random small chunks of the file.
The vast majority of Freenet content is probably about Freenet itself (Web of trust data, Sone traffic, FMS traffic), not bad content.
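To make the "random small chunks" point concrete, here's a toy sketch in Node (emphatically not Freenet's actual code or formats, and its real CHK scheme differs in the details): encrypt first, then split, so each peer ends up holding hash-addressed ciphertext that is meaningless on its own:

    // toy-chunks.js - illustrative only, NOT Freenet's actual on-disk format
    const crypto = require('crypto');

    const CHUNK_SIZE = 32 * 1024; // Freenet blocks are on the order of 32 KiB

    function encryptAndChunk(plaintext) {
      const key = crypto.randomBytes(32); // the key travels in the URI, not the store
      const iv = crypto.randomBytes(16);
      const cipher = crypto.createCipheriv('aes-256-ctr', key, iv);
      const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);

      const chunks = [];
      for (let off = 0; off < ciphertext.length; off += CHUNK_SIZE) {
        const chunk = ciphertext.slice(off, off + CHUNK_SIZE);
        // each chunk is addressed by the hash of its (encrypted) contents,
        // so the node storing it sees only random-looking bytes
        const id = crypto.createHash('sha256').update(chunk).digest('hex');
        chunks.push({ id, chunk });
      }
      return { key, iv, chunks };
    }

    const { chunks } = encryptAndChunk(Buffer.from('some large file...'));
    console.log(chunks.map((c) => c.id)); // what peers would actually hold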
Those chunks are intended to be reassembled into child porn, and the "you can't tell it's child porn" objection is a weak one for those who don't want to be part of its distribution on moral grounds.
> "As seductive as a blockchain’s other advantages are, neither companies or individuals are particularly keen on publishing all of their information onto a public database that can be arbitrarily read without any restrictions by one’s own government, foreign governments, family members, coworkers and business competitors"
The browser demo does seem to be working, albeit very slowly. Beautiful interface though! One of the best I've seen yet.
What is going on underneath? Are you guys using WebSocket or WebRTC? The reason I ask is because I wrote an interactive coding tutorial for building a distributed chat app ( http://gun.js.org/converse.html ), and it uses WebSockets to communicate with a federated relay peer server. I'm hoping to add WebRTC support but I'm curious what you guys are doing. Like, IPFS doesn't have pub/sub support right? So did you add this?
The version deployed at orbit.libp2p.io is using orbit-db, which is using redis to do pubsub right now. However, pubsub is being worked on and already exists in, for example, go-ipfs#master behind a feature flag. Run `ipfs pubsub --help` after building from source to try it out. It's also being worked on for js-ipfs.
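Roughly, trying it out looks like this (subcommand and flag names as of go-ipfs master right now, so they may change):

    # pubsub is experimental, so the daemon needs a feature flag
    $ ipfs daemon --enable-pubsub-experiment

    # in one shell, subscribe to a topic
    $ ipfs pubsub sub my-topic

    # in another shell, publish to it
    $ ipfs pubsub pub my-topic "hello from pubsub"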
Developer of Orbit here. Great to hear all the feedback, thank you!
Most questions have already been answered, but to clarify:
Orbit indeed uses IPFS pubsub (https://github.com/ipfs/go-ipfs/pull/3202) for real-time message propagation, no servers are involved. In addition, it uses orbit-db (https://github.com/haadcode/orbit-db) - a distributed database on IPFS - for the message history, so the messages are not ephemeral and the channel history can always be retrieved. This is a really nice property and allows Orbit to work in "disconnected" or split networks, as well as offline.
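To give a feel for what that looks like from application code, here's roughly the shape of the orbit-db eventlog API (paraphrased from the orbit-db README from memory; constructor and method signatures are still changing between versions, so treat this as a sketch rather than gospel):

    const IpfsApi = require('ipfs-api')   // assumes a local IPFS daemon on :5001
    const OrbitDB = require('orbit-db')

    const ipfs = IpfsApi('localhost', '5001')
    const orbitdb = new OrbitDB(ipfs)

    // an "eventlog" store: an append-only, replicated log on top of IPFS
    const db = orbitdb.eventlog('orbit.example-channel')

    // append an entry; it becomes a content-addressed IPFS object
    db.add({ ts: Date.now(), content: 'hello, #ipfs' })
      .then(() => {
        // read the latest entries back out of the log
        const messages = db.iterator({ limit: 5 })
          .collect()
          .map((e) => e.payload.value)
        console.log(messages)
      })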
Orbit has been a testbed for IPFS applications and orbit-db came out of that work, enabling various types of distributed, p2p applications and use cases: comment systems, votes/likes/starring systems (with counters), feeds, etc. And now with IPFS pubsub, we're finally at a point of being completely serverless and distributed which is hugely exciting and opens so many doors for future work!
I recently gave a talk at Devcon2 about Orbit and developing distributed real-time applications (https://ethereumfoundation.org/devcon/?session=orbit-distrib...) and while the videos of the talk are not out yet (afaik coming very soon!), there's an uncut video of the talk here http://v.youku.com/v_show/id_XMTc1NjU1NzEyNA==.html?firsttim... if you'd like to learn more. Video of the demo I showed in the talk is here https://ethereumfoundation.org/devcon/wp-content/uploads/201....

I'll be hanging out on #ipfs in Orbit if you want to try it out. Note that the Electron app and the web version at orbit.libp2p.io don't talk to each other atm (we're working on this), so I'd highly recommend trying the Electron app.
While you're at it, try drag & dropping files and folders into a channel, that's one of the coolest features of Orbit atm imo :)
We're actively developing Orbit and making a push in the next few months. If you'd like to take part in the design and development, or want to develop your own apps using the same tech, join us on Github: https://github.com/haadcode/orbit/issues
Thanks for the comments everyone, much appreciated!
My point is that chat messages are meant to be ephemeral, so it seemed like a waste to store them in IPFS, hash them, and make them identifiable by a unique (and useless) hash to the entire world, forever.
But since I posted the comment I realized that this is actually a cool feature for a chat app to have.
Hah, I agree with your resentment about the recent wave of "serverless" :)
By now redis has been replaced with native IPFS pubsub, which is provided by both go-ipfs and js-ipfs. The only remaining server-ish piece is some means of bootstrapping, i.e. entering the network.
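For what it's worth, the bootstrap list is plain config and can point anywhere (stock go-ipfs CLI; the multiaddr below is a made-up example):

    # the default bootstrap peers your node dials to enter the network
    $ ipfs bootstrap list

    # drop them all to run a private, self-bootstrapped network...
    $ ipfs bootstrap rm --all

    # ...and add your own long-lived node instead
    $ ipfs bootstrap add /ip4/203.0.113.7/tcp/4001/ipfs/<peer-id>

    # or just dial a known peer directly
    $ ipfs swarm connect /ip4/203.0.113.7/tcp/4001/ipfs/<peer-id>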
I'm not sure how up-to-date the readme is, but the demo (orbit.libp2p.io) is out-of-date and still uses redis pubsub. I pinged @haadcode, who can go more into detail.
Are you familiar with IPFS pubsub? Would you be able to link some information about the implementation/usage?
I'm quite surprised to hear what you said. I've been following multiple GitHub issues on IPFS pubsub, and none of the ones I followed announced success. I thought it was still in the planning phase.
Because IPFS only does distributed storage. It has no processing power or logic to handle data transformations.
Now, one avenue to handle that is js-ipfs. In order to update things like IPNS records, you need the private key of the node you're trying to change. Interestingly enough, any machine with the pub/priv keypair can submit an IPNS change.
So effectively, you could have a shared repo, like Usenet, where everyone has the pub/priv key and pushes updates via js-ipfs. Although I can easily imagine how that could get super-heavy.
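Concretely, the flow would be something like this with the stock CLI (assumes a recent go-ipfs that has `ipfs key gen` and the `--key` publish option; any machine holding the shared key can run the publish step):

    # generate the keypair that everyone in the group would share
    $ ipfs key gen --type=rsa --size=2048 shared-repo

    # add the new content, then point the IPNS record at it
    $ ipfs add -r ./repo-contents
    $ ipfs name publish --key=shared-repo /ipfs/<hash-from-add>

    # anyone can resolve the mutable name to the latest hash
    $ ipfs name resolve /ipns/<name-hash>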
_________________________
Another idea I had was to build something akin to AWS Lambda, except using Tor Hidden Services and Erlang. It would effectively be a private computation cloud. The reason for the HS is so each machine, regardless of its location, could always talk to the others, using Erlang's built-in networking support. (I am using non-standard applications of Tor Hidden Services; read more about what I'm doing here: https://hackaday.io/project/12985-multisite-homeofficehacker... )
redis was mainly used to get pubsub for the first iteration (demonstrated in June); now (demonstrated in September) orbit uses IPFS pubsub (available in the go-ipfs implementation) for a completely distributed web application.
You share:
1. Files you Pinned (think of it as torrent seeding)
2. Files you have in your IPFS cache
3. The default files that are added to a new IPFS repo (unless you removed them or init'ed with the appropriate option to not include them; see the example below)

To answer the GP's question: as long as you don't pin child porn, and you don't look for child porn, there's a 0% chance in IPFS-land.
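That init option, for reference, plus a way to audit exactly what your node would be seeding (stock go-ipfs CLI):

    # initialize a repo without the default getting-started files
    $ ipfs init --empty-repo

    # what you're seeding on purpose: pinned content...
    $ ipfs pin ls --type=recursive

    # ...plus whatever happens to be in the cache right now
    $ ipfs refs local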
https://github.com/ipfs/?utf8=%E2%9C%93&query=ipfs-api