guntherhermann · 2 years ago
> The “Public IPFS DHT” is henceforth going to be called “Amino”. This follows along with the trend from 2022 in the IPFS ecosystem to use more precise language to create space for alternative options

I'd argue that "Public IPFS DHT", if less catchy, is far more precise than "Amino".

diggan · 2 years ago
As the quoted part mentions, if they'd called it "The Public IPFS DHT", there wouldn't really be any room for someone else to create something that could replace it, because There Could Be Only One.

With a specific name for the specific implementation of a general concept, others could provide alternative implementations implementing the same concept.

somat · 2 years ago
I am probably missing something then, but as far as I can tell most of the value proposition of ipfs is the single universal dht. If you remove or fragment it, all you have is basically a worse bittorrent.

Most of the interesting things I want to do with ipfs involve the dht, any sort of file transfer is usually a secondary concern.

cjrp · 2 years ago
Call it "The Public IPFS DHT v1.0" then ;)
Retr0id · 2 years ago
It's less precise because there are many ways one could implement a "Public IPFS DHT". "Public IPFS DHT" is a concept, Amino is a concrete instantiation of that concept.
omginternets · 2 years ago
Agreed. One frustrating thing about PL is that they seem to make odd decisions that detract or distract from their main value proposition. In particular:

- Filecoin is not interesting. IPFS and libp2p are interesting.

- Renaming IPFS-the-application to Kubo is confusing

- Naming the IPFS DHT "Amino" is confusing. Why does it even need its own name?

I really wish PL would go through the occasional contraction phase where it prunes the bulk of its initiatives and re-focuses on what it does amazingly well. IPFS and libp2p are truly amazing.

mburns · 2 years ago
> Why does it even need its own name?

So that other DHT implementations can exist and potentially replace the existing one.

Same for go-ipfs being renamed. We generally don’t have web browsers named after the protocol they use. And with multiple ipfs clients, one of them being named “ipfs” is itself confusing.

Frankly, both should have probably happened years ago.

bembo · 2 years ago
I think the point is that Amino is just one public IPFS DHT; they renamed it so that other public IPFS DHTs can exist without confusion.
stavros · 2 years ago
Is IPFS working these days? I was very excited about it eight years ago, to the point where I made one of the first IPFS pinning services, but lost all my interest. IPFS is a great idea, but the implementation basically doesn't work, and it certainly doesn't work to the point where people can be running the node locally.

It used to have tons of problems discovering content from other nodes on the network unless it was directly connected to them, and it broke often. It also didn't seem like Protocol Labs worked on any of these problems at all, focusing on launching a cryptocurrency instead.

Has it changed now?

kkielhofner · 2 years ago
I have a similar take with slightly more recent experience.

When it came down to it, the resource requirements for an IPFS node were pretty insane relative to the "value" provided, and by many takes it still basically didn't "work".

I understand it's not the same thing at all, but in the days of running a web server on nearly anything that can handle many thousands of requests/sec, an IPFS node running on the beefiest hardware we could throw at it ate tremendous amounts of system resources and bandwidth for double-digit requests per second. Even then it would frequently time out and/or get into various unrecoverable states, necessitating a service restart. We had to run a cluster of them and watch IPFS nearly melt the hardware...

We tried every IPFS implementation available and ended up having to use the "least worst" while also adding a lot of instrumentation, health checks, etc around it just to keep it up and running in some kind of consistent, usable fashion.

aftbit · 2 years ago
I briefly ran an IPFS node, I believe working towards the same project that you are discussing. It ate my home network: drove my packet loss into the 10% range and somehow convinced my core switch (a Brocade ICX6610) to send all traffic to every port. When I saw every port on my upstairs switch blinking like crazy and tcpdump showed traffic intended for a downstairs server arriving at my upstairs workstation, I pulled the plug and told free he was on his own.
b_fiive · 2 years ago
Depends on how you define working :). I'm a 6+ year vet of the IPFS ecosystem; we work on iroh these days, which I think addresses many core issues with the protocol design: https://iroh.computer

The biggest challenges still unaddressed are twofold imho:

1. The network is very forgetful. Stuff you added 24 hours ago is likely gone unless you've taken specific steps to keep it up. This is hard because all CIDs in IPFS have equal weight, which makes it very hard to cache intelligently.

2. The implicit promise that IPFS will resolve _any_ of the 86-100k new CIDs it sees daily in "normal internet" time (sub-second TTFB). This doesn't work in practice, because mapping content addresses to location-based providers who are under high churn is, well, very hard.

Both of these problems are "content routing" problems, which is the core of the "get me stuff from this hash, I don't care where" interface IPFS offers. It's hard. With iroh we just don't make that promise at all right now.

stavros · 2 years ago
I hope you succeed!
diggan · 2 years ago
Well, I guess it depends on your use-case. As a general and public discovery / providing / downloading network, it's been kind of overloaded for the last few years, and it finally seems like Protocol Labs is putting some effort into solving some of the most biting issues.

In this case, it's about the process of adding content to the network. It was nigh impossible to add large directories/files as your connections got overloaded with provide messages. This seems to batch things up and parallelize better, so it should at least make it easier to add content, and for the peers who want it to subsequently find it.

But, the implementation still works very well when you're running your own networks, which I think is a much better use of the protocol anyways. So when building an application with IPFS, you're using your own network composed only of nodes that are actually relevant to your application, instead of connecting to the public DHT (a rough sketch of the setup follows below).

Unless your scale is really big, it'll work a lot better than using the already huge public DHT.
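
For what it's worth, a private network in Kubo is gated by a pre-shared key: every node carries the same swarm.key file in its repo root, and nodes without it can't join. A minimal sketch of generating one in Python (this assumes the standard go-libp2p PSK file format; double-check against the Kubo docs):

    import os
    import secrets

    # 256-bit pre-shared key in what I believe is the go-libp2p PSK
    # format: "/key/swarm/psk/1.0.0/", "/base16/", then 64 hex chars.
    key_hex = secrets.token_bytes(32).hex()
    swarm_key = "/key/swarm/psk/1.0.0/\n/base16/\n" + key_hex + "\n"

    # Kubo picks this file up from the repo root (default ~/.ipfs).
    repo = os.path.expanduser("~/.ipfs")
    with open(os.path.join(repo, "swarm.key"), "w") as f:
        f.write(swarm_key)

You'd then copy the same file to every node and swap the default bootstrap peers for your own (ipfs bootstrap rm --all, then ipfs bootstrap add with your nodes' addresses).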

stavros · 2 years ago
That's kind of a shame, the appeal of a public, peer-to-peer, content-addressable network was very high to me personally, because of its relative uniqueness. For a personal network, IPFS becomes just another technology I could deploy, out of many options.
londons_explore · 2 years ago
I think it would be better to fix the public network than to split the network into millions of local networks...

The public design today still allows actual file transfers to be local over your local network - it is only metadata that goes over the public internet.

9dev · 2 years ago
I hope the irony of a protocol called "Interplanetary file system" being more suitable for local usage isn't lost on people :)
theK · 2 years ago
I was also very active in the early IPFS days. I think two points really contributed to your experience:

1. Success: IPFS got tons of usage early on, so scaling the software (which back then was mostly a prototype) was challenging, especially with Benet's initial BDFL stance.

2. The need to codify an incentives market, which then led to the creation of Filecoin, took a lot of effort, and setting up a trustworthy org around both it and IPFS got even more challenging.

So yes, working with IPFS was not plain sailing (and still isn't), but it seems that by now the two projects have been set up to start iterating again, and I see a lot of great work happening on both fronts, so it looks like a promising future here.

Source: I have worked and am still working with both IPFS and Filecoin as part of my business

chriswarbo · 2 years ago
> 2. The need to codify an incentives market, which then led to the creation of Filecoin, took a lot of effort, and setting up a trustworthy org around both it and IPFS got even more challenging

I see this as a non-goal: HTTP is doing fine without an "incentives market", and that's the sort of core layer IPFS is suited for. When I switch off my HTTP servers, there's no expectation that the resources they're hosting remain accessible; and the same is true for IPFS. The advantage of IPFS is that it allows resources to remain accessible, e.g. if someone else cares enough to host it too, or if I happen to have copies buried on some old boxen (without having to coordinate some load-balanced shenanigans up-front).

For example, we could avoid "leftpad" fiascos if software companies hosted their own dependencies (as in, contributed to ensuring their canonical URLs resolve; rather than the current practice of re-hosting copies at myriad private URLs, or routing their network through caching proxies).

Good luck to other projects which want to work on such a thing (Filecoin, etc.), but it's mostly orthogonal to IPFS itself.

noman-land · 2 years ago
Can you say a bit about using it for your business? Curious how people are using IPFS in the real world.
yiannisbot · 2 years ago
We've been doing quite extensive measurements over several parts of the architecture, which you can find here: https://probelab.io/

Still lots to optimise, but I wouldn't say it's unusable. In fact, performance is pretty good for a decentralised P2P network.

Borg3 · 2 years ago
Well said. I too have looked at IPFS a few times, but could hardly find a place to use it. I'm a big fan of distributed storage and self-hosting, but IPFS is far too static for anything useful, except maybe archives of important static blobs.
diggan · 2 years ago
What exactly are you unable to build with IPFS that requires it to be even more dynamic than it is? I've found it flexible enough for most use cases I've had in mind, as long as you're flexible on how the architecture should look.
yieldcrv · 2 years ago
It works great for my use case

Since you haven't looked, people are using Protocol Labs' crypto version of IPFS to pin on IPFS

Filecoin+IPFS is far more free than any of the IPFS SaaS pinning services

and it has decent replication too

I serve over CDNs, of which there are many, and they cache well enough.

I use it to stay on Vercel and Netlify’s free tiers for my static assets, so my sites can have huge spikes in traffic but my static assets are not loaded from them.

It's free on free, a big use case for exploratory projects

https://web3.storage does that filecoin+ipfs pinning

omginternets · 2 years ago
I'm quietly using it in a side-project of mine that is intended to provide a cloud-esque environment to a permissioned p2p compute cluster. In my case, it's basically providing S3-like functionality, which works rather nicely in a datacenter environment.
simonw · 2 years ago
I mostly lost interest in it when I learned that it's possible for a file published to IPFS to simply blink out of existence one day in the future (if no one is left pinning it).

At that point I'd rather stick something in an S3 bucket and pay for it myself.

stavros · 2 years ago
I didn't mind that too much, because IPFS is strictly better than S3 in that regard. IPFS isn't meant to make it so that you don't need to host your own files, but rather that I can seamlessly host your files too.

In that regard, it's much more available than any HTTP server.
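
For example, "hosting your files too" is a single pin away. A sketch against a local Kubo daemon's RPC API (the CID is a placeholder for one somebody shared with you):

    import requests

    CID = "bafybeih..."  # hypothetical CID someone else published

    # Pinning fetches the full DAG from whoever currently provides it
    # and keeps a local copy, so your node now serves it as well.
    resp = requests.post(
        "http://127.0.0.1:5001/api/v0/pin/add",
        params={"arg": CID},
        timeout=600,  # fetching a large DAG can take a while
    )
    resp.raise_for_status()
    print(resp.json())  # e.g. {"Pins": [...]}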

detaro · 2 years ago
a file on S3 will simply blink out of existence too one day if you stop paying for one specific someone to "pin" it, so I don't think this point makes that much sense without further context?
evbogue · 2 years ago
I don't mind pinning, but back in the day I was having issues using IPFS to transfer small files between devices. I admit I haven't investigated to see if the problems were ever resolved.

I've been watching https://github.com/ipfs/helia which is going to replace https://github.com/ipfs/js-ipfs and hoping they can get an IPFS node working in the browser.

anacrolix · 2 years ago
IPFS has ruined the public perception of what a content addressable network could be.

Now when you mention CAD, people think IPFS and freak out about bad performance and flakiness. It's a real shame because we already have a fantastically reliable CAD and DHT in BitTorrent, and it's trivial to build on top of that to create excellent experiences.

KirillPanov · 2 years ago
IPFS is propped up by AI companies.

When it becomes clear that their models were trained on library genesis, they are betting that "but our web crawler stumbled into it through cloudflare's gateway" will be a good enough excuse to keep them out of prison.

This is basically the only thing IPFS offers that Bittorrent doesn't.

j_maffe · 2 years ago
> This is basically the only thing IPFS offers that Bittorrent doesn't.

Now that's simply untrue. The main difference from Bittorrent is that it relies on content IDs (CIDs), not torrent links. IPFS is used by many organizations and individuals beyond just AI companies.

j_m_b · 2 years ago
I've been looking into a private IPFS network as a way to share photos. It doesn't seem ready for that. Is there something out there that allows clients to update a mounted drive and keep in sync? Something that is transparent enough that ordinary users aren't intimidated to use it?
ianopolous · 2 years ago
You can do that with Peergos [1] - mount a Peergos folder locally using FUSE. Or log in to the web interface and share easily and privately.

[1] https://github.com/peergos/peergos

h0h0h0h0111 · 2 years ago
I think https://fission.codes/ecosystem/wnfs/ might do what you want (though I don't know about viewing photos in browser etc). Alternatively, IPFS supports UnixFS and a mutable filesystem (MFS) through the desktop client if you are happy to host the photos on your own machine (it acts like a unix dir).

edit: ah sorry, I see you actually asked for a private network. You could possibly look into https://ipfscluster.io/, though it might be a little heavyweight for what you're looking for
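
For what it's worth, that mutable filesystem (MFS) is also scriptable against a local Kubo daemon's RPC API; a rough sketch (the paths and CID are illustrative):

    import requests

    API = "http://127.0.0.1:5001/api/v0"

    def api(cmd, *args):
        r = requests.post(API + "/" + cmd, params=[("arg", a) for a in args])
        r.raise_for_status()
        return r.json() if r.content else None

    api("files/mkdir", "/photos")                      # behaves like mkdir
    api("files/cp", "/ipfs/bafy...", "/photos/a.jpg")  # copy immutable data in
    print(api("files/ls", "/photos"))                  # list like a unix dir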

Karrot_Kream · 2 years ago
There's also Perkeep [1], though it seems like development has slowed down on it in recent years.

[1]: https://perkeep.org/

GTP · 2 years ago
Maybe Syncthing would work for you? [1]

[1] https://syncthing.net/

vorpalhex · 2 years ago
Syncthing with "copycat" as a web UI and Samba access is what I give to my users. People who onboard to syncthing like it but usually need help on initial setup.


richarme · 2 years ago
I'm building something that solves this problem. I'd love to hear more about your use case - is it something you'd like to discuss, or would you like to join a beta down the line?

If so, reach out at marc@ at my username .net

londons_explore · 2 years ago
The concept of an "Interplanetary Filesystem" is a good one.

The actual IPFS implementation doesn't live up to expectations though.

Expectations:

* I want to be able to mount / as IPFS and know that I can boot linux from anywhere.

* I want to have my photo library on IPFS and add to it from anywhere.

* I want to be able to share anything on IPFS, and if someone else has already uploaded it for the upload to be instant.

* I want all the storage on my phone/laptop/whatever permanently full of other people's stuff, earning me credits to store my own data.

* I want my stuff Reed-Solomon encoded with lots of other data, so that in case of a failure of a chunk of the network, my data is still recoverable.

* I want the network to be fast and reliable with excellent sharding of data and minimal hotspotting.

diggan · 2 years ago
Are those expectations coming from reading the landing page at ipfs.tech, or where do they come from?

> * I want to be able to mount / as IPFS and know that I can boot linux from anywhere.

A starting point: https://github.com/magik6k/netboot.ipfs

> * I want to have my photo library on IPFS and add to it from anywhere.

I personally wouldn't keep my private photos on a public network, but everyone is different. You should be able to do this today, maybe you're saying that the client software for doing this is missing? Because the protocol would support it, but I'm not aware of any clients that would help you with this.

> * I want to be able to share anything on IPFS, and if someone else has already uploaded it for the upload to be instant.

You don't really "upload" anything to IPFS, ever, that's not how the protocol works. You "provide" something and then if someone requests it, you upload it directly to them. So in that way, "uploads" are already instant if the content already exists on the other node.
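
Concretely, "adding" is a local operation: the file is chunked and hashed and you get a CID back immediately; no bytes leave your machine until some peer asks for that CID. A sketch against a local Kubo daemon (photo.jpg stands in for any local file):

    import requests

    with open("photo.jpg", "rb") as f:
        resp = requests.post("http://127.0.0.1:5001/api/v0/add",
                             files={"file": f})
    resp.raise_for_status()
    print(resp.json()["Hash"])  # the CID; the bytes stay on your node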

> * I want all the storage on my phone/laptop/whatever permanently full of other peoples stuff, earning me credits to store my own data.

> * I want my stuff reed-solomon encoded with lots of other data, so that in case of a failure of a chunk of the network, my data is still recoverable.

These are both "solved" by Filecoin rather than IPFS, although you can solve the second one yourself with IPFS by just running multiple nodes you own. But the whole incentive part is (rightly) part of Filecoin rather than IPFS.

> * I want the network to be fast and reliable with excellent sharding of data and minimal hotspotting.

You and me both :)

kevincox · 2 years ago
> I personally wouldn't keep my private photos on a public network, but everyone is different.

IPFS needs transparent encryption yesterday. I tried to start a discussion and even made a rough design, but they don't seem interested.

They have added some basic protection where a node won't serve content to another node without knowing the CID, but this isn't the same level of security as E2EE.

I think the encryption key should be transmitted with the CID but separable, so that you can pin data with just the raw CID but share data easily with CID+key.
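
To make that concrete, here's roughly what the CID+key idea looks like if you do the encryption client-side today. A sketch assuming a local Kubo daemon and the cryptography package; AES-GCM with the nonce prefixed is my own choice here, not anything IPFS specifies:

    import os
    import requests
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    # The network only ever sees ciphertext; prefix the nonce so the
    # recipient can decrypt.
    ciphertext = nonce + AESGCM(key).encrypt(nonce, b"secret photo bytes", None)

    resp = requests.post("http://127.0.0.1:5001/api/v0/add",
                         files={"file": ciphertext})
    cid = resp.json()["Hash"]

    # Anyone can pin `cid` (it's opaque bytes to them); only someone
    # you hand cid+key to can decrypt.
    print(cid, key.hex())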

grumbel · 2 years ago
> I personally wouldn't keep my private photos on a public network, but everyone is different.

Well, that's exactly the problem, isn't it? IPFS could be extremely useful for local and private storage, as it provides a network file system with proper directories, an optional HTTP interface, content addresses, and a FUSE implementation to mount it on Linux, along with automatic distribution and caching of the data. Those are all excellent features that I haven't really seen in any other system.

But the actual support for local or private hosting is basically non-existent. On IPFS everything is public all the time. The whole thing is far too focused on being a globally spread protocol, while it neglects the benefits it could provide on the local PC by just being a file format.

What I am missing is something like Git built on top of IPFS hashes. Something that allows me to manage my files on my local PC without any of the networking, but with the content addressing. Something that allows me to quickly publish them to a wider audience if I desire, but doesn't force me to. Or even just something I can use as a way to access my local files via content address instead of filename.
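
A small piece of that last wish exists today: Kubo can compute a CID without storing or announcing anything (ipfs add --only-hash). A sketch over the daemon's RPC API:

    import requests

    def cid_of(path):
        # only-hash=true: chunk and hash locally, store nothing,
        # announce nothing -- pure content addressing.
        with open(path, "rb") as f:
            r = requests.post("http://127.0.0.1:5001/api/v0/add",
                              params={"only-hash": "true"},
                              files={"file": f})
        r.raise_for_status()
        return r.json()["Hash"]

    print(cid_of("notes.txt"))  # same bytes anywhere -> same CID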

londons_explore · 2 years ago
> You don't really "upload" anything to IPFS, ever, that's not how the protocol works. You "provide" something and then if someone requests it, you upload it directly to them.

This model should be changed... I should be able to just send something to the network, have other users store it for me, and come fetch it back later.

The whole idea that I am constantly online 'pinning' files is a bad one. The whole idea that I must store the specific files I want to make available to others is also a bad one. The network protocol should mix file data beyond recognition, and the exact data on my hard drive should have little correlation to the data I specifically am sharing with others.

kosolam · 2 years ago
Works fine for us so far. Discovery of newly added files is immediate. Downloading speed is fast. It's quite easy to get this to work: you need to have a few or more instances with these objects pinned. And make sure the bandwidth and other resources are sufficient and the servers are always online. Or use a reliable pinning service that can do this for you.
eternityforest · 2 years ago
I still think the biggest problem with IPFS is that they put every block of every file in the DHT. It's just insane compared to BitTorrent, which only puts the top level torrent info in the DHT.

Having the option to pin just one file is useful, but they could greatly reduce DHT traffic if they didn't need to allow access to arbitrary resources without starting at some parent block.

BitTorrent requires you to access files via a collection, and only the collections are stored in the DHT, so the bandwidth use when idle is single-digit kB.

I think BitTorrent itself could be extended to cover most IPFS use cases, possibly better than IPFS itself, although IPFS's database-like stuff is pretty unique.
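
To put rough numbers on the DHT overhead (assuming the default 256 KiB chunking and one provider record per block, which was the historical default behaviour):

    GIB = 1024 ** 3
    file_size = 10 * GIB       # a 10 GiB file
    block_size = 256 * 1024    # default IPFS chunk size

    ipfs_records = file_size // block_size  # one DHT record per block
    bt_records = 1                          # one record per info-hash

    print(ipfs_records)  # 40960 provider records to publish (and
                         # periodically re-publish)
    print(bt_records)    # one record for the whole torrent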

anacrolix · 2 years ago
Yes, you are completely correct. And BitTorrent is absolutely usable everywhere that IPFS is being used.

https://github.com/anacrolix/btlink

https://news.ycombinator.com/item?id=37771434

mtillman · 2 years ago
I clicked About and received a 500 Error “Importing a module script failed”.
pierat · 2 years ago
Ah, so you pulled it from IPFS. That's the usual experience.
jl6 · 2 years ago
It would be nice if there were an IPFS implementation with much lower memory requirements. I tried one a while back on something equivalent to a "free tier VM" and it quickly ate all available RAM.