rglullis · 2 years ago
If you really, absolutely want to continue using Mastodon, support this: https://github.com/mastodon/mastodon/issues/28554.

If you just want to run a single-user instance on the cheap: forget Mastodon! Takahe [0] does not require one whole server per user and has substantially lower TCO. A "small instance" for Mastodon cannot realistically be found for less than $10/month today, while I can offer Takahe for $39 per year [1].

But if you want to make the Fediverse really, really cheap to operate, let's rethink the whole server-centric approach and put some effort into developing applications that are primarily based on the client [2].

[0] https://jointakahe.org

[1] https://communick.com/takahe

[2] https://raphael.lullis.net/a-plan-for-social-media-less-fedi...

juped · 2 years ago
The only thing the horrible "Fediverse" ecosystem has going for it is the ability to not be at the mercy of unaccountable admins.
rglullis · 2 years ago
Pretty much like Democracy: it's the worst, except for everything else that has been tried before.
urza · 2 years ago
I am in your comments telling you that federated server is still just someone else's computer.

But there are systems where the user is sovereign, defined by their key, not dependent on any masters. One such system is Nostr:

https://nostr.com/

asmor · 2 years ago
Discoverability, spam and community moderation are already big problems in slightly less centralized systems - they're even more unsolved with these. As with ActivityPub, things work fine until suddenly there's enough adoption that you have an entirely different class of problem.
echelon · 2 years ago
This would all be solved if we treated P2P social media as ephemeral. That's a much more tractable problem.

Imagine that you have 48 hours to converse on a thread and grab its content, then it's gone. Third party software could archive if you wanted, but it wouldn't be an infrastructure requirement.

It'd be so much easier to serve that peer nodes could easily be stood up on residential / commodity hardware and could reasonably be expected to handle serving large portions of the discussion graph.

dewey · 2 years ago
This would be a technical solution that doesn’t really map to what real people would expect from how a service works these days.
graemep · 2 years ago
No, but it would work fine for me if it was a month instead of 48 hours.

My biggest problem with social media is deleting history I do not want retained.

numpad0 · 2 years ago
Some 2000s Japanese imageboards worked a bit like that. Replying to a thread bumps it to the top of a board, the last thread in the list of threads is deleted when the total reply count exceeds a limit, and bumps stop working for older threads. Thread MHT archives can be captured and uploaded to fileshares if needed. A thread's remaining lifetime is controllable by internal parameters, but is also self-regulating by traffic thanks to the bump system, landing somewhere between just a few hours and over a week. A discussion worth continuing can be carried on by starting a new thread.
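That bump-and-prune mechanic can be sketched roughly as follows (a toy model with made-up values for BUMP_LIMIT and BOARD_CAPACITY; real boards tune these as the "internal parameters" mentioned above):

```python
from dataclasses import dataclass

BUMP_LIMIT = 3      # replies after which a thread stops bumping (toy value)
BOARD_CAPACITY = 2  # max live threads; the bottom one is pruned (toy value)


@dataclass
class Thread:
    title: str
    replies: int = 0


class Board:
    """Bump-order board: replying moves a thread to the top until it
    hits the bump limit; new threads push the bottom thread off."""

    def __init__(self):
        self.threads: list[Thread] = []  # index 0 = top of the board

    def post_thread(self, title: str) -> Thread:
        t = Thread(title)
        self.threads.insert(0, t)
        if len(self.threads) > BOARD_CAPACITY:
            self.threads.pop()  # last thread in the list is deleted
        return t

    def reply(self, t: Thread) -> None:
        t.replies += 1
        if t.replies <= BUMP_LIMIT and t in self.threads:
            self.threads.remove(t)
            self.threads.insert(0, t)  # bump to top
```

High traffic pushes quiet threads off the bottom faster, which is the self-regulation by traffic: lifetime is an emergent property, not a fixed timer.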

Many from such communities later migrated to Twitter, despite the board still operating. Wrestling with the virtue-signaling game while earning buzz is easier for those with that background, and I still come across accounts on my timeline with clear influence from there.

The massive upgrade from there to Twitter was the forced individual consistency that comes with requiring an account - absolute anonymity leads people without a working theory of mind to go gung-ho with total release from the multi-agent world modeling that is normally enforced, and that's just malicious, like rejecting the concept of criticism, possession, even nominatives.

amaccuish · 2 years ago
Sounds like Usenet!
Kwpolska · 2 years ago
So you want to introduce even more urgency and FOMO into social media just to save some compute costs?
Zak · 2 years ago
It probably wouldn't be hard to make a fork of Mastodon, Pleroma, etc... that deletes posts after a time limit and refuses to remote-fetch anything exceeding it. Of course, it wouldn't be able to impose that mandate on the rest of the network, and I don't think you'd convince many people to use it.

I find Mastodon has a slower pace than corporate social media, and I like it that way. I want to browse posts when I have time for it. I have a crude patch running on my own server to backfill old posts from remote accounts when I scroll, which is very much the opposite idea.

I hate this suggestion, and I hope nothing like it becomes popular enough that I feel I must interact with it.

krapp · 2 years ago
If you're running your own Mastodon instance you can set it to delete content after a certain amount of time. It's all but mandatory on small instances, especially if you use relays.
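For reference, Mastodon's admin CLI exposes this kind of pruning via `tootctl`; a sketch of the usual cron-style maintenance commands (check the flags against your Mastodon version's docs, as defaults and options vary between releases):

```shell
# Prune locally cached copies of remote media older than a week
RAILS_ENV=production bin/tootctl media remove --days 7

# Remove old remote statuses that no local user has interacted with
# (particularly relevant when relays flood the database)
RAILS_ENV=production bin/tootctl statuses remove --days 30
```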
zelphirkalt · 2 years ago
Currently I support an instance only minimally, but probably enough to pay for the cost incurred by my usage. If every user did that, no instance owner would ever have financial worries. However, I would contribute more if I got some guarantee that media is also hosted on bare metal or similar, away from S3 or Fastly or similar services that put data into the hands of big players.
rglullis · 2 years ago
They are not even cheap. I set up my own MinIO cluster for Communick and it's costing me $150 for 80TB of usable storage, and Hetzner does not charge for the first 20TB of data egress.
asmor · 2 years ago
I always thought Pleroma solved this problem much more elegantly: proxy the media from the origin, and cache a set amount of data.

Also respects other people deleting their servers/media and prevents someone posting CSAM becoming a problem for everyone.

viraptor · 2 years ago
That solves the storage, but not the bandwidth, right? In practice you could assume that at least one person will see each image, so everything will be cached at least once (I think that holds, but I'd love to see it confirmed), and everything created locally will still be downloaded by most peers in that solution.
asmor · 2 years ago
If you set your cache to not evict and use the same proxy for several instances, it also solves bandwidth. There might be a small issue with privately shared media and authentication, though, that could prevent multiple instances from using one media proxy, but it shouldn't be too hard to fix.
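The proxy-and-cache scheme discussed in this subthread can be sketched as a byte-bounded LRU store fronting origin fetches (hypothetical names; a real deployment would sit behind an HTTP server, and Pleroma's actual media proxy works differently in the details):

```python
from collections import OrderedDict
from typing import Callable


class MediaCache:
    """Byte-bounded LRU cache fronting remote media fetches: serve from
    cache when possible, otherwise fetch from the origin instance."""

    def __init__(self, fetch: Callable[[str], bytes], max_bytes: int):
        self.fetch = fetch  # e.g. an HTTP GET against the origin server
        self.max_bytes = max_bytes
        self.items: OrderedDict[str, bytes] = OrderedDict()
        self.size = 0

    def get(self, url: str) -> bytes:
        if url in self.items:
            self.items.move_to_end(url)  # mark as recently used
            return self.items[url]
        data = self.fetch(url)
        self.items[url] = data
        self.size += len(data)
        # Evict least-recently-used entries, but always keep the newest one
        while self.size > self.max_bytes and len(self.items) > 1:
            _, evicted = self.items.popitem(last=False)
            self.size -= len(evicted)
        return data

    def delete(self, url: str) -> None:
        """Honor remote deletions: drop the cached copy."""
        if url in self.items:
            self.size -= len(self.items.pop(url))
```

The `delete` hook is what lets the proxy respect other people deleting their servers or media, and evicting on report is likewise what keeps one instance's CSAM problem from becoming everyone's.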
