I'm still a little lost. Could someone more knowledgeable than me explain how operations in space would benefit from a public content-addressable filesystem?
This seems like the kind of application that works best when bandwidth and storage are cheap, and I would have assumed that, in space, both are about as far from cheap as you can get.
The internet traditionally uses location-addressing: a DNS name points to an IP address at a specific location, which your computer tries to reach in order to fetch the content. This means that particular node needs to respond to you, and your bandwidth depends on the bandwidth available along the path to that specific node.
IPFS instead uses content-addressing, where the content is hashed and given an ID derived from the content itself. So "foo" could have the ID A, while "bar" could have the ID B. No matter who added the content to the network, if the content is the same, the ID will be the same.
The benefit of this is that you can fetch the data from anywhere, and you're guaranteed to get the data you asked for (assuming the content is available somewhere on the network).
In terms of space, imagine being on the moon and requesting some website. With location-addressing, you have to be able to reach that particular node, and the data has to travel from there to you. With content-addressing, the content can be fetched from anywhere: maybe another node on the moon already has it, so it can be fetched more quickly from there, or from some node between the moon and earth.
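To make the content-addressing idea concrete, here's a minimal sketch using a bare SHA-256 digest as the ID. (Real IPFS CIDs add multihash/multibase framing on top of the digest, so treat this as an illustration of the principle, not the actual CID format.)

```python
import hashlib

def content_id(data: bytes) -> str:
    # The ID is derived purely from the bytes themselves,
    # not from where the data happens to live.
    return hashlib.sha256(data).hexdigest()

# Identical content yields an identical ID, no matter who publishes it.
assert content_id(b"foo") == content_id(b"foo")
# Different content (even by a single byte) yields a completely different ID.
assert content_id(b"foo") != content_id(b"fop")
```

Because the ID commits to the exact bytes, any node can serve the content and the requester can check it got exactly what it asked for.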
OK. But, going back to my comment about cheap. This claim:
> The benefit of this is that you can fetch the data from anywhere
Only works if the data I want is being replicated in many locations. But storage in spacecraft is presumably a tightly constrained resource, due to the need to use technology that's space-hardened and has an extremely good reliability record, and is therefore probably far from being both cutting edge and high density. JWST, for example, only has 68GB of storage despite being an instrument for taking high resolution photographs.
I would guess that that creates a situation that is very far from the one content-addressable storage is trying to solve. The goal isn't on-demand access to arbitrary files that are just sitting around in long term storage, because I'm guessing that, in space, there is effectively no long-term storage of data files in the first place. Instead, what you're looking to do is to get the data off of the spacecraft and down to earth as quickly and efficiently as possible, and then delete it from the spacecraft's storage so that you can make room to do more cool space stuff.
And I also don't want to be using my spacecraft's SSD to cache or replicate data for others if I can possibly avoid it. That will unnecessarily shorten the lifetime of the SSD, and, by extension, the piece of hardware that I just spent millions or perhaps even billions of dollars to build and launch into space.
And I just can't follow you as far as
> imagine being on the moon and requesting some website
because, right here right now, that is such a hypothetical situation that I have absolutely no idea why it needs a real-world demonstration of proof of concept using currently-available technology. Let's wait to see if browsing the Web from the moon leaves the domain of science fiction and becomes science reality first, so that then we can benefit from whatever technology exists in that future when we're solving the problem.
You still have to talk to specific machines to get the data, so having it content addressable will not automatically make it either more or less rapidly available.
Also, even for a peer-to-peer network, you still need some way to discover the peers that actually have the data, so you need some kind of centralized infrastructure that can help coordinate. Ultimately whether you connect to a CDN or to the centralized P2P facilitator is not such a massive difference.
Also, given the way ISPs typically operate, you will probably have much better bandwidth downloading from a centralized server than from a generic other end-user.
This is why BitTorrent only really has two successfully deployed use cases:
1. Windows updates and similar, where machines can rely on broadcasts on a small LAN to coordinate without needing more complex infrastructure (doesn't scale beyond LAN)
2. "Piracy", where the major advantage is that it diffuses the blame for copyright infringement, instead of presenting a single large target that holds all of it.
> The internet traditionally uses location-addressing. A DNS name pointing to an IP which has a specific geographical location
This is complete nonsense. First of all, DNS can give you multiple IP addresses[1] (that's how CDNs work, eh!), and IP addresses are only very loosely coupled to geographical locations…
[1]: https://www.cloudflare.com/learning/dns/what-is-anycast-dns/
It's quite the opposite. We've sketched out a school platform using IPFS that allows very remote villages in third-world countries with little to no internet to still have a viable "google drive"-like experience for school homework. The idea is that a single USB drive containing all the updated files is sufficient for the entire network (which exists in the village, and is decent, but has very low to no bandwidth to the outside world) to have access to the files. Or a single node downloads the homework/movie/article/whatever, and the remainder of the network has access to it - through IPFS, which only asks for the content's hash, and gets it wherever it can.
I think the idea behind content-addressable databases like this is that they are supposed to be permanent. Putting it out into space may be a way of saying "it's never going down".
Most likely an obfuscation technique: they (the IPFS people) are afraid to admit that there is no guarantee of file preservation at all (unless you pay the people hosting your files specifically) and cache eviction may happen to your file at any time, so they shout all across the internet about how reliable and great their tech is to drown out any dissent. Scary and important-looking headlines like "Something In Space" help instill superficial respect in the tech.
Worth noting that content-addressing is not meant to make the data itself permanent, but it does make the identifier of the content permanent.
It doesn't guarantee that the content the ID resolves to is available, but once you have a content-addressable ID, you can be sure you'll be able to get the data as long as the data is available somewhere on the network.
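A sketch of what that guarantee buys you in practice: with a content-addressable ID you can ask any peer, even an untrusted one, and reject anything that doesn't hash back to the ID. (This again uses a bare SHA-256 in place of a real CID, and a plain callable as a stand-in for a peer; both are illustrative assumptions, not the IPFS wire protocol.)

```python
import hashlib

def fetch_verified(cid: str, peers) -> bytes:
    """Try each peer until one returns bytes that actually match the ID."""
    for peer in peers:
        data = peer(cid)  # hypothetical fetch: returns bytes, or None if absent
        if data is not None and hashlib.sha256(data).hexdigest() == cid:
            return data
    # The ID stays valid forever, but availability is never guaranteed.
    raise LookupError("content not available on any reachable peer")

cid = hashlib.sha256(b"hello").hexdigest()
peers = [lambda c: None,         # this peer doesn't have the content
         lambda c: b"tampered",  # this peer lies; the hash check rejects it
         lambda c: b"hello"]     # honest peer
assert fetch_verified(cid, peers) == b"hello"
```

The tamper check is why it doesn't matter *who* serves the data: the ID itself is the authenticity proof.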
You're asking why run this service in space, but this service is a reliable, scalable data store. Ask instead: what kinds of data would you want in space, and why?
The throughput probably won't be great, but mainly if we assume 1:1 connections rather than broadcast! There will be roving access, not continual access, to a given satellite - at least until there are thousands of other satellites also serving IPFS. Costs will be high.
But still, there are wins. So far, satellite reliability is fairly high; there haven't been natural or human-made disasters afflicting many satellites. In some conditions the roving nature of the satellite could be a boon; you can get information out to a lot of people.
Wikipedia, news, important events... those would all benefit from a guaranteed-recurring-availability model, perhaps. This would be an interesting tamper-resistant way to store something like votes, if it could be vetted. If there's satellite-to-satellite communication, the ability to have an expanding archive of crucial orbital information is useful; the swarm can grow and update vital behavior with IPFS in interesting ways.
Rather than assessing this just on whether it will make a good IPFS node as IPFS is typically used (high-bandwidth, high-storage nodes), I think it's worth considering it on the merits of what this technology offers that is distinct. Pulling out a single ruler to measure everything isn't always the best and only way to judge; indeed, I think we miss a lot when we apply only our current set of expectations to new ideas and encounters. I'd encourage a more liberal consideration. Although I agree I'm not sure what the killer use is, I want to see thinking that pioneers what could work, that explores what value is on offer, even if it's not the same old value as the incumbent (fiber optics on the ground).
The kings of marketing and buzzwords at it again, trying to re-spin CDNs as their invention. IPFS doesn't solve persistence of data; doesn't solve churn in p2p systems; doesn't actually 'store' anything. Sounds cool though! Space and shit, something, something 'decentralized' --hand waving-- 'space', umm, can I have money now? I'm doing it.
It’s all about the Filecoin: the pumpers are lying to themselves about how it will change humanity and spreading their preachings on social media. Some actually know it’s BS, but they know plenty of others will buy it up.
If you think IPFS is trying to "re-spin CDNs as their invention", I'm pretty sure you misunderstand what IPFS is. The homepage is a great starting point if you're curious rather than antagonistic: https://ipfs.tech/
> IPFS doesn't solve persistence of data
I don't think it claims to solve this either? What it does claim to solve is the persistence of identifiers of data.
> doesn't solve churn in p2p systems
What P2P system has ever done so or even claimed to have done so?
Cheers, read the paper 10+ years ago. Nothing new or interesting in that time. Very impressive restatements of old ideas introduced by other people in typical pseudo-academic Protocol Labs style. Protocol Labs is famous for reinventing wheels poorly. It's like they look at dated papers and say: 'what can we do to sell this basic shit as our own.' Then they end up with horrible versions of STUN, TURN, hole-punching, DHTs, torrenting, commit-reveal schemes, hash-locks, and other technologies (((that are already well-known and ah... exist?)))
You might be thinking that I'm making this up to troll. But the funny thing is so few people have any idea about this area of technology a company like Protocol Labs can get away with being completely insular, mediocre, unimaginative, impractical, and actually full of shit, and no one will notice. In fact, I'm routinely reminded what a 'visionary' the founder is. Despite nothing of value ever coming out of the company. But what I've learned is if enough people believe the lie it might as well be true.
Now give me money! Space, space, blockchain, space!
Huge achievement. A big, big stepping stone toward interplanetary communication.
But while I love the idea of IPFS, it comes with a bunch of tradeoffs that I think make it very unlikely that it'll ever become mainstream.
What I do think will happen in space is similar to what already happens with PoPs around the world, no IPFS required. As an example: I believe that there will be a YouTube on the moon and a YouTube on Mars, with YouTube servers caching most of the content on both, and it'll all work the same. Cross-planet communication will be high latency, but it won't matter.
> What I do think will happen in space, though, is similar to what already happens with PoPs around the world. As an example: I believe that there will be a YouTube on the moon and YouTube on Mars, and YouTube servers caching most of the content on both, and it'll all work the same. Cross-planet communication will be high latency, but it won't matter.
Importantly, this is enabled by content addressable storage in IPFS, where the address of the data is derived deterministically with cryptographic hashing. Agree it likely won't ever become super popular, but it is a core component of distributed storage systems without central orchestration.
I think you might've misunderstood what I was saying. I absolutely agree with you. I don't think it will use IPFS.
I'm saying in a hypothetical future in which we have colonised other planets, tech companies will just plop caching servers on Mars and the moon, completely obviating the need for IPFS or anything special.
One of the most unusual home pages I've seen in a long time. Not sure if I like it or if I'm just conditioned to expect a certain style, but kudos to them for being different.
I think the typography can be improved a little, but overall I like it quite a bit. It's refreshing to come across something that isn't just the zillionth iteration of your bog standard home/landing page.
*One other thing I noticed is that the "Resources" section doesn't have the gradient border edges on it, though I don't know whether that's intentional.
I really appreciated the interactive video-style homepage on https://filecoin.io/. I thought it was a lot cooler than the standard self-congratulatory graphs / bullet points that I usually skim over.
EDIT: However, it is strange that the foundation has a totally different site.
Based on a skim of the article, they uploaded, ran, and tested their code on a cubesat running "Lockheed Martin’s SmartSat technology". So it wasn't that expensive. My initial thought was that cubesats are cheap anyway, but they don't seem to have even launched one themselves.
Gimmick, sure, but it's an important milestone and costs less than a series of TV ads.
Filecoin is probably one of the few cryptocurrencies with intrinsic value, even if the amount is debatable. IPFS has stood the test of time and seems like a good protocol, and a cryptocurrency that can be used to pay for storage is not valueless.
Has it? I have never seen anybody using IPFS in the wild, even projects for which it should be well suited (archive.org, Linux package distribution, Git, Lemmy, Imgur, CivitAI), don't use it. Worse yet, IPFS still provides no real way to deal with private or local data, which drastically limits its use.
I love the idea about data being addressable by hash, but I don't feel IPFS has actually delivered anything meaningful in that area yet.
And with ipfs-search.com shutdown, there is not even any way left to explore what is actually on the network now.
And technical issues aside, there is also the legal problem that IPFS conflicts with copyright. Redistributing anything is illegal by default unless somebody gives you permission, and IPFS provides no means to track that permission. You can't attach a GPL or an author to a hash.
> Has it? I have never seen anybody using IPFS in the wild
Every time someone downloads a book from a shadow library like Library Genesis, it's through IPFS most of the time, often via an IPFS gateway such as Cloudflare's IPFS gateway, so you don't even notice it's using IPFS. These shadow libraries have millions of users per day, especially academics.
Wouldn't all those examples also work just as easily with torrents/magnet-links? I think in all those cases a central server distribution model has unfortunately been "good enough" for the majority of users (even though their data is mined and ads are injected)
When it comes to FOSS, I personally don't understand why something like a package manager isn't P2P by default. It feels very aligned with hacker culture - a la "A Declaration of the Independence of Cyberspace". Virtually nobody uses it, so the solutions are half-baked, clunky, and not integrated with everyday workflows (e.g. browsers don't support anything P2P; something like libcurl can't pull a torrent from the web).
I was working in the same building as the team from Protocol Labs. I talked with a couple of devs, and it seems they never even considered that most people who would be interested in running a node would like to have control over who gets access to the pinned files. I think I opened a ticket asking for an ACL system, but it got closed.
I really had high hopes for it, but I realized that all I really want is object storage with content-addressable URLs.
> I have never seen anybody using IPFS in the wild, even projects for which it should be well suited (archive.org, Linux package distribution, Git, Lemmy, Imgur, CivitAI), don't use it.
> there is also the legal problem that IPFS conflicts with copyright. Redistributing anything is illegal by default
I think you might misunderstand how IPFS works, or possibly confuse it with Freenet. When you use IPFS, nothing is automatically distributed; no one can even request content you've added locally unless they happen to know its hash. It's not until someone explicitly requests data from you that you start sharing anything.
Running IPFS at scale is horrible. Try downloading a few dozen TBs of small files. Its garbage collection is rubbish (we ended up nuking the ZFS dataset every couple of days instead), it is very CPU- and IOPS-hungry, and it has bad network throttling support.
I would claim it has failed the test of time as it has very little adoption.
I would like to add that IPFS pretty much doesn't run on spinning rust or slow CPUs. A Pi or other low-end box can easily run torrents with an external hard drive; IPFS can't download large files at a reasonable speed on slow hardware.
It raises the question, though: I can pay Storj S3 with STORJ tokens, and I can also pay for renterd space with Sia tokens, and I can also pay for IPFS with Filecoin, and all of them still work with good old credit cards. What "intrinsic" value is there for any of these tokens, if they can only be used on their own internal economies?
And before the "but permissionless, so I can pay with crypto!", why not just use DAI?
I’ve earned Storj for years by renting out excess hard drive space to store other people’s files. Having a native currency lets you innovate in some unique ways.
Microtransactions are not practical with credit cards and other traditional settlement methods.
The reasons not to use something like DAI are fundraising (unfortunately - crypto VCs really like seeing a token), network bloat/decentralization, and leadership risk. Your own currency is less likely to collapse by other people’s bad decisions.
Afaik there was some work to make it possible to pay for Sia storage in the new renterd node with any crypto asset you could make a payment channel with (so, most of them including Dai), but I don't see that in the readme anymore: https://github.com/SiaFoundation/renterd
On paper yes. In practice a lot of these crypto coins are dominated by get rich quick types that treat the bit that actually generates value as an afterthought. I've not seen much evidence that Filecoin is any different.
The benchmark for success here would be participants being able to earn meaningful amounts of Filecoin simply by hosting IPFS data. Is that even remotely profitable at this point?
IPFS as a technology is fine. It seems to work well enough, though there are some resilience and scaling challenges. Your data basically disappears unless you ensure it doesn't, which is where Filecoin and other incentives come in. Basically, it requires you to pay someone to host your content, because others won't unless they happen to have downloaded it, in which case it may linger on their drive for a while.
My guess is that the whole Filecoin thing is scaring away a lot of enterprise users though.
What it boils down to is that S3 and other storage solutions are pretty robust and affordable if you're going to pay for hosting the files anyway, and probably a lot easier to deal with. So most companies might look at it and then just use something like S3. The whole business of having to buy some funny coins on a dodgy website is a bit of a nonstarter in most companies.
IIRC they subsidize hosting by ~8x (i.e., when someone pays $1 in Filecoin to pin content, Filecoin pays out $8 to the people actually running nodes; otherwise the economic incentive to run nodes isn't there - just another VC scheme of selling dollars for a dime to brag about growth).
Is there actually a way to use filecoin to pay for IPFS pinning yet? Last time I checked a couple years ago Filecoin was worth more than Disney and the actual use case was vaporware.
Filecoin, Akash, DVPN, Jackal, Helium are all cryptocurrency projects with real products/use cases outside of the circular defi economy/number go up technology.
The company that lied about partnerships, structured itself as a way for insiders to cash out on tokens early, then failed to attract any business so pivoted to a new tech and a new token to do it all again?
I don't agree that Filecoin does anything to enable piracy. If your goal is to pirate software and movies, your needs were already (and still are) adequately met with BitTorrent and there's no need for a blockchain token. Tokens would just be an expensive distraction to someone like that.
Ignoring the question if it is bullshit or not, what do you personally gain from posting comments like this? Wouldn't it be more interesting for everyone involved (including yourself) if you actually share the arguments against the parent comment, if you have any?
The article says there are 3 advantages of IPFS -- speed, data verification, and data resilience. HTTPS/curl can provide the latter two. Does someone know if they published speed numbers of IPFS vs HTTPS/curl in the satellite environment?
Maybe initiatives like this will finally help the transition to IPv6. We're out of address space here on Earth, and now we're taking TCP/IP to space.
If we aren't wasteful with the IPv6 space, it'll suffice for all inter-planetary and inter-galactic communication. Assuming 1 trillion stars per galaxy and 1 trillion galaxies in the universe, we're safe with more than 340 trillion addresses per star.
That said, IPv6 is nowhere near addressing all the atoms in the universe (roughly 10^80 of them, against about 3.4×10^38 addresses); the usual comparison is that 2^128 is enough for every atom on the surface of the Earth, many times over.
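A quick back-of-the-envelope check of these figures. The star, galaxy, and atom counts are the usual order-of-magnitude estimates, not exact values:

```python
# Rough sanity check of the IPv6 numbers above.
ipv6_addresses = 2 ** 128             # total IPv6 addresses, ~3.4e38
stars = (10 ** 12) * (10 ** 12)       # ~1e12 stars/galaxy * ~1e12 galaxies
per_star = ipv6_addresses // stars
assert per_star > 340 * 10 ** 12      # indeed > 340 trillion addresses per star

atoms_in_universe = 10 ** 80          # common order-of-magnitude estimate
# IPv6 falls short of the atom count by roughly 42 orders of magnitude.
assert ipv6_addresses < atoms_in_universe
```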
More reading about content-addressing: https://en.wikipedia.org/wiki/Content-addressable_storage
Magnet links and git are two fairly common technologies that use content-addressing, with lots of benefits.
(edit: of course, this is a simplification of both how IP, networks, IPFS, DNS and more works, but I hope it provides a fair overview at least)
CAS is a separate issue from a caching hierarchy.
The reality is that the space thing is just part of the usual crypto grift narrative-building.
Or once it's in the network, that's it?
Seems easy to DDoS by changing one character in a file many times, which generates a new ID each time.
The same thing could be applied to space.
Typical "web3" though, solving problems nobody actually has.
Can't it be enabled in all sorts of ways? E.g. edge caching, the way YouTube works today?
(edited original comment a little to clarify)
I don't see a use case for this other than speculation.
I would even go as far as to say that this 'side project' was done to try to pump the price of Filecoin so that people can take profit off it.
The other use cases are hinted to in the name of the project...
That's exactly what cryptomoney has been about for years now, so it fits right in.
I love the idea about data being addressable by hash, but I don't feel IPFS has actually delivered anything meaningful in that area yet.
And with ipfs-search.com shutdown, there is not even any way left to explore what is actually on the network now.
And technical issues aside, there is also the legal problem that IPFS conflicts with copyright. Redistributing anything is illegal by default, unless somebody gives you permission, IPFS provides no means to track that permission. You can't attach a GPL or an author to a hash.
Every time someone downloads a book from a shadow library like Library Genesis, it's through IPFS most of the time, often via an IPFS gateway such as Cloudflare's IPFS gateway, so you don't even notice it's using IPFS. These shadow libraries have millions of users per day, especially academics.
When it comes to FOSS, I personally don't understand why something like a package manager isn't P2P by default. It feels very aligned with the hacker culture, a la "A Declaration of the Independence of Cyberspace". Virtually nobody uses it, so the solutions are half baked, clunky, and not integrated with everyday workflows (e.g. browsers don't support anything P2P). Something like libcurl can't pull a torrent from the web.
I really had high hopes for it, but I realized that all I really want is an object storage with content addressable urls.
FWIW it seems like NixOS at least tried or is trying: https://blog.ipfs.tech/2020-09-08-nix-ipfs-milestone-1/
Netflix uses IPFS. [0]
[0] https://blog.ipfs.tech/2020-02-14-improved-bitswap-for-conta...
I think you might misunderstand how IPFS works, or possibly confuse it with Freenet. When you add content to IPFS, nothing is automatically distributed; others can only fetch it if they happen to know the hash of what you've added locally. It's not until someone explicitly requests data from you that you start sharing anything.
I would claim it has failed the test of time as it has very little adoption.
From my experience it's a "heavy tech"/resource hog that appeals neither to developers nor to end users (it has no killer app).
I think the challenge with torrents is maintaining communities of seeders without getting taken down, but I don't think IPFS really helps with that.
It raises the question, though: I can pay Storj S3 with STORJ tokens, and I can also pay for renterd space with Sia tokens, and I can also pay for IPFS with Filecoin, and all of them still work with good old credit cards. What "intrinsic" value is there for any of these tokens, if they can only be used on their own internal economies?
And before the "but permissionless, so I can pay with crypto!", why not just use DAI?
Microtransactions are not practical with credit cards and other traditional settlement methods.
The reasons not to use something like DAI are fundraising (unfortunately - crypto VCs really like seeing a token), network bloat/decentralization, and leadership risk. Your own currency is less likely to collapse by other people’s bad decisions.
The benchmark for success here would be people participating being able to earn meaningful amounts of file coin simply by hosting ipfs data. Is that even remotely profitable at this point?
IPFS as a technology is fine. It seems to work well enough, though there are some resilience and scaling challenges. Your data basically disappears unless you ensure it doesn't, which is where Filecoin and other incentives come in. Basically it requires you to pay for someone to host your content, because others won't unless they happen to have downloaded your content, in which case it may linger on their drive for a while.
My guess is that the whole Filecoin thing is scaring away a lot of enterprise users though.
What it boils down to is that things like s3 and other storage solutions are pretty robust and affordable as well if you are going to pay for hosting the files anyway. And probably a lot easier to deal with. So, for most companies they might look at it and then use something like S3. The whole business of having to buy some funny coins on a dodgy website is a bit of a nonstarter in most companies.
source was personal communication, sorry
The company that lied about partnerships, structured itself as a way for insiders to cash out on tokens early, then failed to attract any business so pivoted to a new tech and a new token to do it all again?
Helium is a failure and a joke.
https://www.forbes.com/sites/sarahemerson/2022/09/23/helium-...
Price (market cap): ~3B
Annualized revenue based on the last 30 days: 3M.
Intrinsic value based on perpetual DCF assuming 5% interest rate: 60M.
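For anyone checking the arithmetic: the $60M figure follows from the standard perpetuity formula, where the present value of a constant annual cash flow is the cash flow divided by the discount rate. A minimal sketch using the numbers quoted above:

```python
# Perpetuity DCF: PV = annual cash flow / discount rate.
annual_revenue = 3_000_000   # ~$3M annualized (figure from the comment above)
rate = 0.05                  # 5% discount rate

intrinsic_value = annual_revenue / rate
print(intrinsic_value)       # 60000000.0, i.e. the ~$60M quoted
```

Against a ~$3B market cap, that's a roughly 50x gap between price and this crude intrinsic-value estimate.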
2. You're valuing it like a company, which is also flawed
None of these coins have any intrinsic value.
Thousands of people have sat down and thought, "I want to create a ponzi coin"; that doesn't mean they create value.
Crypto is a net negative phenomenon.
Any coin, including USD, has no intrinsic value. Its value is in what people can buy with it.
If we aren't wasteful with IPv6 space, it'll suffice for all inter-planetary and inter-galactic communication. Considering 1 trillion stars per galaxy and 1 trillion galaxies in the universe, we're safe with more than 340 trillion addresses per star.
We would be able to address between 1/3 to 1/2 of all atoms in the universe with IPv6.
Now that's a safety margin!
There are around 10^38 IPv6 addresses and 10^80 atoms in the observable universe. It's not even close.
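The arithmetic is easy to check. The per-star figure above holds up, but the atom comparison is off by about 41 orders of magnitude:

```python
ipv6_addresses = 2 ** 128        # ~3.4e38 addresses in the 128-bit space
atoms = 10 ** 80                 # common order-of-magnitude estimate for the observable universe

stars = 10**12 * 10**12          # 1 trillion galaxies x 1 trillion stars each
per_star = ipv6_addresses // stars
print(per_star)                  # ~3.4e14, i.e. ~340 trillion addresses per star

shortfall = atoms // ipv6_addresses
print(shortfall)                 # ~2.9e41 atoms per address -- not even close
```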