What AI told me to tell you:
In today's overwhelming digital landscape, it's easy for individual thoughts and moments to get lost in the noise. I wanted to create a space where everyone has an equal opportunity to have their voice (or at least a short burst of it!) seen by a potentially massive audience, without the need for building a profile or fighting algorithms.
Flamabl.com offers a super simple premise: type in your message, and every 15 seconds, my system randomly selects one message from all currently connected users to display for everyone to see. It's part luck, part shared moment, and a whole lot of "what's next?". When your message "wins," you get a fun burst of fireworks! You can also vote on the messages you see, and each message with votes gets a permanent link with its vote history. The catch? Messages only stay in the pool while you're actively on the site, emphasizing the "now" of the experience.
Who can benefit from it? Anyone who has a fleeting thought they want to share, a quick question for the world, a link they think might be interesting, or even just wants to experience the unpredictable stream of collective consciousness. It's for the curious, the spontaneous, and those who enjoy a little bit of internet randomness. Plus, if you have something you want to promote (a new project, a cause, etc.), it's a free shot at being seen by a potentially huge audience for 15 seconds!
Why it's interesting to me:
It's websockets all the way down, providing the same message on all connected devices every 15 seconds. The server is written in C++, split into several components, so if the number of users starts to grow, I can easily roll out a few more "workernodes" to handle the websocket connections for the end users. Crucially, all the server components also use websockets for internal communication, allowing them to reside on the same server or be distributed across several. The frontend is simple React code. Everything is highly optimized so that potentially millions of users can see the same message simultaneously and vote.
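The real server is C++ and split into worker nodes, but the core mechanic (every 15 seconds, pick one message from the currently connected users and broadcast it to everyone) can be sketched in a few lines of Python with the 'websockets' library. This is only a toy illustration, not the production code, and the JSON shape is simplified:

    # Toy sketch only, not the production C++ code: every 15 seconds pick one
    # message from the currently connected users and broadcast it to everyone.
    # Uses the Python 'websockets' library; the JSON format is made up.
    import asyncio
    import json
    import random
    import websockets

    clients = {}  # connection -> latest message submitted by that user

    async def handle(ws):
        clients[ws] = None
        try:
            async for raw in ws:                  # user submits/updates a message
                clients[ws] = json.loads(raw).get("text")
        finally:
            del clients[ws]                       # message leaves the pool on disconnect

    async def broadcaster():
        while True:
            await asyncio.sleep(15)
            pool = [m for m in clients.values() if m]
            if pool:
                winner = random.choice(pool)      # part luck, part shared moment
                websockets.broadcast(list(clients), json.dumps({"winner": winner}))

    async def main():
        async with websockets.serve(handle, "0.0.0.0", 8765):
            await broadcaster()

    asyncio.run(main())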
.-------.
| Hi |
'-------'
^ (\_/)
'----- (O.o)
       (> <)

It is hard to explain to people who have never tried QMK layers that more keys doesn't mean a better keyboard; having proper layers on the home row is many times more ergonomic. One of those things that doesn't click until you try it, I guess.
Can it automatically prune backups older than N days?
I don’t see anything about encryption.
Any new backup is hardlinked against the previous one in a temporary 'in-progress' directory, then renamed to its proper name at the end. If a backup breaks, a new 'saf backup' by default first removes 'in-progress' and then starts again (linking against the latest good one), but you can run 'saf backup --resume' to try to finish the interrupted one. I prefer a clean retry (which is the default), but --resume works well too.
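Not saf's actual code, but the hardlink-then-rename idea maps roughly onto rsync's --link-dest. A rough Python sketch with made-up paths and naming, just to show the shape of it:

    # Not saf itself, just the hardlink-then-rename pattern sketched with rsync:
    # sync into an "in-progress" directory, hardlinking unchanged files against
    # the latest good snapshot, and only rename it to a real snapshot name once
    # rsync finishes cleanly. Paths and naming are made up.
    import datetime
    import os
    import shutil
    import subprocess
    import sys

    SRC = "/home/user/data/"                 # trailing slash: copy contents
    DEST = "/backups/data"
    in_progress = os.path.join(DEST, "in-progress")
    latest = os.path.join(DEST, "latest")    # symlink to the last good snapshot

    if os.path.exists(in_progress):          # default behaviour: clean retry
        shutil.rmtree(in_progress)

    cmd = ["rsync", "-a", "--delete", SRC, in_progress + "/"]
    if os.path.exists(latest):
        cmd.insert(2, "--link-dest=" + os.path.realpath(latest))

    if subprocess.run(cmd).returncode != 0:
        sys.exit("backup interrupted; 'in-progress' left behind")

    final = os.path.join(DEST, datetime.date.today().isoformat())
    os.rename(in_progress, final)            # only now does it become a snapshot
    if os.path.lexists(latest):
        os.remove(latest)
    os.symlink(final, latest)

The point of the rename at the end is that a snapshot only ever appears under its final name if the transfer finished cleanly.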
> Can it automatically prune backups older than N days?
Yes: 'saf backup' does the pruning itself, and you can also prune manually with 'saf prune'. Prune periods are defined in each .saf.conf, per backup source location, with defaults of 2/30/60/730/3650 days for all/daily/weekly/monthly/yearly backups. All defaults are easy to change per source.
> I don’t see anything about encryption.
saf doesn't deal with encryption, only with transport. I prefer to use another, specialized tool for encryption if I have a backup target that needs it.
Lately, in the last few years, I have been leaning towards using many cheap backups instead of clever and more expensive ones, on the idea that they can't all break at the same time. Yes, occasional checks are good, but safety in numbers seems like a good strategy.
It is not an accident that the saf tagline says "one backup is saf, two are safe, three are safer" ;)
On top of many cheap backups, I am also trying not to rely on any single piece of technology (I know, it is not ideal that the hardware and OS remain the same on every computer no matter which backup is used). If I use saf as my preferred rsync-based solution, I will also use Borg or duply/duplicity as an additional backup to avoid rsync bugs.
Having two or more rsync-based backups, so they all go through the same rsync pipe, makes much less sense than mixing completely different backup solutions, right?
My solution is to pick a few random files (plus whatever is new) and compute their hashes on both the local and remote versions. But it's slow and probabilistic. ZFS also helps, but I feel it's too transparent to rely on (what if the remote storage changes its filesystem?).
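Roughly, the spot check looks like this (a sketch, assuming the remote copy is reachable over ssh and has sha256sum; host, paths and sample size are placeholders):

    # Sketch of the spot check: sample a few random files from the local tree,
    # hash them locally and on the remote copy (ssh + sha256sum), and compare.
    # Host, paths and sample size are placeholders.
    import os
    import random
    import shlex
    import subprocess

    LOCAL_ROOT = "/home/user/data"
    REMOTE_HOST = "backuphost"
    REMOTE_ROOT = "/backups/data/latest"
    SAMPLE_SIZE = 20

    def relative_files(root):
        for dirpath, _dirs, names in os.walk(root):
            for name in names:
                yield os.path.relpath(os.path.join(dirpath, name), root)

    def sha256_local(rel):
        out = subprocess.run(["sha256sum", os.path.join(LOCAL_ROOT, rel)],
                             capture_output=True, text=True, check=True)
        return out.stdout.split()[0]

    def sha256_remote(rel):
        remote_path = shlex.quote(os.path.join(REMOTE_ROOT, rel))
        out = subprocess.run(["ssh", REMOTE_HOST, "sha256sum " + remote_path],
                             capture_output=True, text=True, check=True)
        return out.stdout.split()[0]

    files = list(relative_files(LOCAL_ROOT))
    for rel in random.sample(files, min(SAMPLE_SIZE, len(files))):
        status = "OK  " if sha256_local(rel) == sha256_remote(rel) else "FAIL"
        print(status, rel)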
rsnapshot uses a centralized rsnapshot.conf; saf has a git-style .saf.conf in each backup source location.
Apart from using rsync, there are more differences than similarities between rsnapshot and saf.