> One way is to ensure that machines that must be backed up via "push" [..] can only access their own space. More importantly, the backup server, for security reasons, should maintain its own filesystem snapshots for a certain period. In this way, even in the worst-case scenario (workload compromised -> connection to backup server -> deletion of backups to demand a ransom), the backup server has its own snapshots
My preferred solution is to let the client only write new backups, never delete them. Deletion is handled separately (manually or via cron on the target).
You can do this with rsync/ssh via the allowed command feature in .ssh/authorized_keys.
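For anyone who hasn't set that up before, a minimal sketch of such a forced command, using the rrsync helper that ships with rsync (key, user and paths here are placeholders, and the rrsync location varies by distro):
# ~/.ssh/authorized_keys on the backup host, one line per client key
command="/usr/bin/rrsync -wo /srv/backups/client1",restrict ssh-ed25519 AAAA...client1key
# -wo (write-only) needs a reasonably recent rrsync; older versions only offer -ro,
# in which case drop the flag and rely on server-side snapshots for protection.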
Another thing you can do is just run a container or a dedicated backup user. Something like systemd-nspawn can give you a pretty lightweight chroot "jail", and you can ensure that anyone inside that jail can't run any rm commands.
pacman -S arch-install-scripts # Need this package (for debian you need debootstrap)
pacstrap -c /mnt/backups/TestSpawn base # Makes chroot
systemd-nspawn -D /mnt/backups/TestSpawn # Logs in
passwd # Set the root password. Do whatever else you need then exit
sudo ln -s /mnt/backups/TestSpawn /var/lib/machines/TestSpawn
sudo machinectl start TestSpawn # Congrats, you can now control with machinectl
Configs work like normal systemd stuff. So you can limit access controls, restrict file paths, make the service boot only at certain times or activate based on listening on a port, make it accessible only from 192.168.1.0/24 (or 100.64.0.0/10), limit memory/CPU usage, or whatever you want. (I also like to use BTRFS subvolumes.) You could also go with systemd-vmspawn for a full VM if you really wanted to.
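As a rough illustration (unit name TestSpawn from above; the values are just examples, not recommendations), a drop-in for the nspawn service could look like:
# /etc/systemd/system/systemd-nspawn@TestSpawn.service.d/limits.conf
[Service]
MemoryMax=2G                     # cap memory for the whole container
CPUQuota=50%                     # cap CPU time
IPAddressDeny=any                # default-deny networking...
IPAddressAllow=192.168.1.0/24    # ...then allow only the local subnet
After a systemctl daemon-reload, restart the container for the limits to apply.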
Extra nice, you can use importctl to then replicate.
I fall into the "pull" camp so this is less of a worry. The server to be backed up should have no permissions on the backup server. If an attacker can root your live server (with more code/services to exploit), they do not automatically also gain access to the backup system.
I also implemented my backup scheme using "pull" as it is easier to do than an append-only system, and therefore probably more secure as there is less room for mistakes. The backup server can only be accessed through a console directly, which is a bit annoying sometimes, but at least it writes summaries back to the network.
I do both. It requires two backup locations, but I want that anyway. My backup sources push to an intermediate location and the primary backups pull from there. The intermediate location is smaller so can hold less, but does still keep snapshots.
This means that neither my backup sources nor the main backup sinks need to authenticate with each other; in fact, I make sure that they can't: they can only authenticate with the intermediate, and it can't authenticate with them⁰. If any one or two of the three parts is compromised, there is a chance that the third will be safe. Backing up the credentials for all this is handled separately, to make sure I'm not storing the keys to the entire kingdom on any internet-connectable hosts. The few bits of data that I have that are truly massively important are backed up with extra measures (including an actual offline backup) on top.
With this separation, verifying backups requires extra steps too. The main backups occasionally verify checksums of the data they hold, and send a copy of the hashes for the latest backup back to the intermediate host(s), where they can be read back and compared against hashes generated¹ at the sources² in order to detect certain families of corruption issues.
--------
[0] I think of the arrangement as a soft-offline backup, because like an offline backup nothing on the sources can (directly) corrupt the backup snapshots at the other end.
[1] These are generated at backup time, to reduce false alerts from files modified soon after they are read for sending to the backups.
[2] The hashes are sent to the intermediate, so the comparison could be done there; in fact I should probably do that, as it would make sending alerts when something seems wrong more reliable, but that isn't how I initially set things up and I've not done any major renovations in ages.
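For anyone wanting to copy the hash-manifest part, a rough sketch of the idea (paths and schedule are assumptions, not the poster's actual scripts):
# On the source, at backup time: record what was sent
cd /data && find . -type f -print0 | xargs -0 sha256sum > /var/backups/manifest-$(date +%F).sha256
# Later, on the machine holding the copy: verify it against that manifest
cd /srv/backups/data-latest && sha256sum -c /srv/manifests/manifest-$(date +%F).sha256 --quiet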
>My preferred solution is to let the client only write new backups, never delete them. Deletion is handled separately (manually or via cron on the target).
I do this by making backups visible to users on a `/.snapshots` directory that is the same as the target of the backup script, but mounted NFS read-only.
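A minimal sketch of that kind of read-only exposure, assuming the snapshots live under /srv/backups/snapshots on the backup host (names are illustrative):
# /etc/exports on the backup host: export the snapshot tree read-only
/srv/backups/snapshots  192.168.1.0/24(ro,root_squash,no_subtree_check)
# /etc/fstab on the clients: mount it read-only at /.snapshots
backuphost:/srv/backups/snapshots  /.snapshots  nfs  ro,noatime  0  0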
> My preferred solution is to let the client only write new backups, never delete them.
I wish for syncoid to add this feature. I want it to only copy snapshots to the backup server. The server then deletes old snapshots. At the moment it requires delete permissions.
It's endlessly surprising how people don't care / don't think about backups. And not just individuals! Large companies too.
I'm consulting for a company that makes around €1 billion annual turnover. They don't make their own backups. They rely on disk copies made by the datacenter operator, which happen randomly, and which they don't test themselves.
Recently a user error caused the production database to be destroyed. The most recent "backup" was four days old. Then we had to replay all transactions that happened during those four days. It's insane.
But the most insane part was, nobody was shocked or terrified about the incident. "Business as usual" it seems.
This is a side effect of SOC 2 auditor-approved disaster recovery policies.
At a company where I worked, we had something similar. I spent a couple of months going through all the teams, figuring out how the disaster recovery policies were implemented (all of them approved by SOC auditors).
The outcome of my analysis was that, in case of a major disaster, it would be easier to shut down the company and go home than to try to recover to a working state within a reasonable amount of time.
I'd go even a step further: For the big corp, having a point of failure that lives outside its structure can be a feature, and not a bug.
"Oh there goes Super Entrepise DB Partner again" turns into a product next fiscal year, that shutdowns the following year because the scope was too big, but at least they tried to make things better.
RTO/RPO is a thing. Although many companies declare that they need an SLA of five nines and an RPO measured in minutes, situations like this make it quite evident that many of them are in fact fine with an SLA of 95% and an RTO of weeks.
Wait, the prod db, like the whole thing? Losing 4 days of data? How does that work. Aren't customers upset? Not doubting your account, but maybe you missed something, because for a $1 billion company, that's likely going to have huge consequences.
Well it was "a" production database, the one that tracks supplier orders and invoices so that suppliers can eventually get paid. The database is populated by a data stream, so after restoration of the old version, they replayed the data stream (that is indeed stored somewhere, but in only one version (not a backup)).
And this was far from painless: the system was unavailable for a whole day, and all manual interventions on the system (like comments, corrections, etc.) that had been done between the restoration date and the incident, were irretrievably lost. -- There were not too many of those apparently, but still.
Thank you for sharing. A curious read. I am looking forward to the next post.
I've been working on backup and disaster recovery software for 10 years. There's a common phrase in our realm that I feel obligated to share, given the nature of this article.
> "Friends don't let friends build their own Backup and Disaster Recovery (BCDR) solution"
Building BCDR is notoriously difficult and has many gotchas. The author hinted at some of them, but maybe let me try to drive some of them home.
- Backup is not disaster recovery: In case of a disaster, you want to be up and running near-instantly. If you cannot get back up and running in a few minutes/hours, your customers will lose your trust and your business will hurt. Being able to restore a system (file server, database, domain controller) with minimal data loss (<1 hr) is vital for the survival of many businesses. See Recovery Time Objective (RTO) and Recovery Point Objective (RPO).
- Point-in-time backups (crash consistent vs application consistent): A proper backup system should support point-in-time backups. An "rsync copy" of a file system is not a point-in-time backup (unless the system is offline), because the system changes constantly. A point-in-time backup is a backup in which each block/file/.. maps to the same exact timestamp. We typically differentiate between "crash consistent backups" which are similar to pulling the plug on a running computer, and "application consistent backups", which involves asking all important applications to persist their state to disk and freeze operations while the backup is happening. Application consistent backups (which is provided by Microsoft's VSS, as mentioned by the author) significantly reduce the chances of corruption. You should never trust an "rsync copy" or even crash consistent backups.
- Murphy's law is really true for storage media: My parents put their backups on external hard drives, and all of r/DataHoarder seems to buy only 12T HDDs and put them in a RAID0. In my experience, hard drives of all kinds fail all the time (though NVMe SSD > other SSD > HDD), so having backups in multiple places (3-2-1 backup!) is important.
(I have more stuff I wanted to write down, but it's late and the kids will be up early.)
Ha. That quote made me chuckle; it reminded me of a performance by the band Alice in Chains, where a similar quote appeared.
Re: BCDR solutions, they also sell trust among B2B companies. Collectively, these solutions protect billions, if not trillions of dollars worth of data, and no CTO in their right mind would ever allow an open-source approach to backup and recovery. This is primarily also due to the fact that backups need to be highly available. Scrolling through a snapshot list is one of the most tedious tasks I've had to do as a sysadmin. Although most of these solutions are bloated and violate userspace like nobody's business, it is ultimately the company's reputation that allows them to sell products. Although I respect Proxmox's attempt at cornering the Broadcom fallout, I could go at length about why it may not be able to permeate the B2B market, but it boils down to a simple formula (not educational, but rather from years of field experience):
> A company's IT spend grows linearly with valuation up to a threshold, then increases exponentially between a certain range, grows polynomially as the company invests in vendor-neutral and anti-lock-in strategies, though this growth may taper as thoughtful, cost-optimized spending measures are introduced.
- Ransomware Protection: Immutability and WORM (Write Once Read Many) backups are critical components of snapshot-based backup strategies. In my experience, legal issues have arisen from non-compliance in government IT systems. While "ransomware" is often used as a buzzword by BCDR vendors to drive sales, true immutability depends on the resiliency and availability of the data across multiple locations. This is where the 3-2-1 backup strategy truly proves its value.
Would like to hear your thoughts on more backup principles!
> An "rsync copy" of a file system is not a point-in-time backup (unless the system is offline), because the system changes constantly. A point-in-time backup is a backup in which each block/file/.. maps to the same exact timestamp.
You can do this with some extra steps in between. Specifically you need a snapshotting file system like zfs. You run the rsync on the snapshot to get an atomic view of the file system.
Of course if you’re using zfs, you might just want to export the actual snapshot at that point.
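For the rsync-on-a-snapshot variant, a hedged sketch (pool/dataset and destination names are assumptions):
snap="backup-$(date +%F)"
zfs snapshot tank/data@"$snap"                      # freeze a point-in-time view
rsync -aHAX /tank/data/.zfs/snapshot/"$snap"/ backup@backuphost:/srv/backups/data/
zfs destroy tank/data@"$snap"                       # drop the snapshot once the copy is done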
> having backups in multiple places (3-2-1 backup!) is important
Yeah and for the vast majority of individual cybernauts, that "1" is almost unachievable without paying for a backup service. And at that point, why are you doing any of it yourself instead of just running their rolling backup + snapshot app?
There isn't a person in the world who lives in a different city from me (that "1" isn't protection when there's a tornado or flood or wildfire) that I'd ask to run a computer 24/7 and do maintenance on it when it breaks down.
My solution for this has been to leave a machine running in the office (in order to back up my home machine). It doesn't really need to be on 24/7, it's enough to turn it on every few days just to pull the last few backups.
The 3-2-1 analogy is old. We have infinite flexibility in where we can put data, unlike before cloud servers existed.
I'd at least have filesystem snapshots locally for easy recovery from manual mistakes, have the data copied to a remote location using implementation A and snapshotted there too, and copy the same data to another location using implementation B, snapshotted there as well. That way you not only get durability; implementation bugs in a backup process can also be mitigated.
ZFS is a godsend for this, and I use Borg as the secondary implementation, which seems enough for almost any disaster.
> You should never trust an "rsync copy" or even crash consistent backups.
This leads you to the secret forbidden knowledge that you only need to back up your database(s) and file/object storage. Everything else can be, or has to be depending on how strong that 'never' is, recreated from your provisioning tools. All those Veeam VM backups some IT folks hoard like dragons are worthless.
Exactly. There is no longer any point in backing up an entire "server" or a "disk". Servers and disks are created and destroyed automatically these days. It's the database that matters, and each type of database has its own tooling for creating "application consistent backups".
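For example, a hedged sketch for PostgreSQL (database name and paths are assumptions; other engines have their own equivalents):
# Application-consistent: ask the database itself for a consistent dump
pg_dump --format=custom --file=/srv/backups/appdb-$(date +%F).dump appdb
# mysqldump --single-transaction, mongodump, etc. play the same role for other engines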
My personal external backup is two external drives in RAID1 (RAID0 wtfff?). One already failed, of course the Seagate one. It failed silently, too - a few sectors just do not respond to read commands and this was discovered when in-place encrypting the array. (I normally would avoid Seagate consumer drives if it wasn't for brand diversity. Now I have two WD drives purchased years apart.)
It's a home backup so not exactly relevant to most of what you said - just wanted to underscore the point about storage media sucking. Ideally I'd periodically scrub each drive independently (can probably be done by forcing a degraded array mode, but careful not to mess up the metadata!) against checksums made by the backup software. This particular failure mode could also be caught by dd'ing to /dev/null.
ZFS really shines here with its built-in "zpool scrub" command and checksumming.
Even though I am preaching "application consistent backups" in my original comment (because that's what's important for businesses), my home backup setup is quite simple and isn't even crash consistent :-) I do: Pull via rsync to backup box & ZFS snapshot, then rsync to Hetzner storage box (ZFS snapshotted there, weekly)
My ZFS pool consists of multiple mirrored vdevs, and I scrub the entire pool once a month. I've uncovered drive failures, and storage controller failures this way. At work, we also use ZFS and we've uncovered even failures of entire product lines of hard drives.
Nice writeup... Although I'm missing a few points...
In my opinion, a backup (system) is only good if it has been tested to be restorable as fast as possible and the procedure is clear (as in: documented).
How often have I heard or seen backups that "work great" ("oh, no problem, we have them"), only to see them fail or take ages to restore once the disaster has happened (2 days can be an expensive amount of time in a production environment). All too often, only parts could be restored.
Another missing aspect is within the snapshots section... I like restic, which provides repository based backup with deduplicated snapshots for FILES (not filesystems). It's pretty much what you want if you don't have ZFS (or other reliable snapshot based filesystems) to keep different versions of your files that have been deleted on the filesystem.
The last aspect is only partly mentioned: prefer PULL over PUSH. Ransomware is really clever these days and if you PUSH your backups, it can also encrypt or delete all your backups, because it has access to everything... So you could either use read-only media (like Blu-rays) or PULL is mandatory. It is also helpful to have auto-snapshotting on ZFS via zfs-auto-snapshot, zrepl or sanoid, so you can go back in time to before the ransomware started its journey.
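For reference, sanoid's snapshot policy is a small config file; a hedged sketch (dataset name and retention counts are assumptions):
# /etc/sanoid/sanoid.conf
[tank/data]
        use_template = production

[template_production]
        hourly = 36
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes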
> Ransomware is really clever these days and if you PUSH your backups, it can also encrypt or delete all your backups, because it has access to everything
That depends on how access to your backup servers is configured. I'm comfortable with append-only backup enforcement for push backups[0] with Borg and Restic via SSH, although I do use offline backup drive rotation as a last line of defense for my local backup set. YMMV.
0 - https://marcusb.org/posts/2024/07/ransomware-resistant-backu...
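For Borg specifically, the usual way to enforce append-only on the server side is a forced command in authorized_keys; a hedged sketch (repository path and key are placeholders):
# ~/.ssh/authorized_keys for the backup user on the backup host
command="borg serve --append-only --restrict-to-repository /srv/borg/client1",restrict ssh-ed25519 AAAA...client1key
# Pruning then has to happen in a separate, trusted session on the server itself.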
Since you mentioned restic, is there something wrong with using restic append-only with occasional on-server pruning instead of pulling? I thought this was the recommended way of avoiding ransomware problems using restic.
> So you could either use read-only media (like Blu-rays) or PULL is mandatory.
Or, as someone already commented, you can use a server that allows push but doesn't allow messing with older files. You can, for example, restrict ssh to only the scp command, and the ssh server can moreover offer a chroot'ed environment into which scp copies the backups. The server can then, for example, rotate that chroot daily.
The push can then push one thing: daily backups. It cannot log in. It cannot overwrite older backups.
Short of a serious SSH exploit where the ransomware could both re-configure the server to accept all ssh (and not just scp) and escape the chroot box, the ransomware is simply not destroying data from before the ransomware found its way on the system.
My backup procedure does that for the one backup server that I have on a dedicated server: a chroot'ed ssh server that only accepts scp and nothing else. It's of course just one part of the backup procedure, not the only thing I rely on for backups.
P.S: it's not incompatible with also using read-only media
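A minimal sshd_config sketch along those lines (user and paths are assumptions; with a recent OpenSSH, plain scp runs over SFTP, so internal-sftp keeps it working):
# /etc/ssh/sshd_config on the backup host
Match User backup-push
    ChrootDirectory /srv/backup-chroot       # must be root-owned and not writable by the user
    ForceCommand internal-sftp -d /incoming  # drop uploads straight into the incoming directory
    AllowTcpForwarding no
    X11Forwarding no
    PermitTTY no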
I don't need a backup system. I just need a standardized way to keep 25 years of photos for a family of 4 with their own phones, cameras, downloads, scans, etc. I still haven't found anything good.
I'm trialing a NAS with Immich, and then backing up the media and Immich DB dump daily to AWS S3 Deep Archive. It has Android and iOS apps, and enough of the feature set of Google Photos to keep me happy.
You can also store photos/scans on desktops in the same NAS and make sure Immich is picking them up (and then the backup script will catch them if they get imported to Immich). For an HN user it's pretty straight-forward to set up.
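A hedged sketch of what such a nightly job can look like (container, paths and bucket names are assumptions, not Immich's documented procedure):
# Dump the Immich database from its Postgres container, then push DB dump and media to Deep Archive
docker exec -t immich_postgres pg_dumpall --clean --username=postgres | gzip > /srv/backup/immich-db-$(date +%F).sql.gz
aws s3 cp /srv/backup/immich-db-$(date +%F).sql.gz s3://my-photo-backup/db/ --storage-class DEEP_ARCHIVE
aws s3 sync /srv/immich/library s3://my-photo-backup/library/ --storage-class DEEP_ARCHIVE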
Is '25 years of photos' a North American measure of data I was previously unfamiliar with?
As bambax noted, you do in fact need a backup system -- you just don't realise that yet.
And you want a way of sharing data between devices. Without knowing what you've explored, and constraints imposed by your vendors of choice, it's hard to be prescriptive.
FWIW I use syncthing on gnu/linux, microsoft windows, android, in a mesh arrangement, for several collections of stuff, anchored back to two dedicated archive targets (small memory / large storage debian VMs) running at two different sites, and then perform regular snapshots on those using borgbackup. This gives me backups and archives. My RPO is 24h but could easily be reduced to whatever figure I want.
I believe this method won't work if Apple phones / tablets are involved, as you are not allowed to run background tasks (for syncthing) on your devices.
(I have ~500GB of photos, and several 10-200GB collections of docs and miscellaneous files, as unique repositories - none of these experience massive changes, it's mostly incremental differences, so it is pretty frugal with diff-based backup systems.)
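The borgbackup half of that can be as small as two commands per archive target (repository path and retention are assumptions):
# Nightly on each archive VM: snapshot the synced trees into borg, then thin out old archives
borg create --stats /srv/borg/sync::sync-{now:%Y-%m-%d} /srv/sync
borg prune --keep-daily 14 --keep-weekly 8 --keep-monthly 12 /srv/borg/sync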
Downloads and scans are generally trash unless deemed important.
For the phones and cameras, set up Nextcloud and have it automatically sync to your own home network. Then have a nightly backup to another disk, with a health check after it finishes.
After that you can either pick a cloud host which you trust, or get another drive of yours into someone else's server to have another location for your second backup, and you're golden.
I use syncthing... it's great for that purpose. Android is not officially supported, but there is a fork that works fine. Maybe you want to combine it with either ente.io or immich (also available self-hosted) for photo backup.
I would also distinguish between documents (like PDF and TIFF) and photos - there is also paperless ngx.
PhotoPrism or Immich are solid self-hosted options that handle deduplication and provide good search/tagging for family photos. For cloud, Backblaze B2 + Cryptomator can give you encrypted storage at ~$1/TB/month with DIY scripts for uploads.
Others have already mentioned Syncthing[^1]. Here's what I'm doing on a budget since I don't have a homeserver/NAS or anything like that.
First you need to choose a central device where you're going to send all of the important stuff from other devices like smartphones, laptops, etc. Then you need to set up Syncthing, which works on Linux, macOS, Windows and others. For Android there's Syncthing-fork[^2], but for iOS idk.
Set up the folders you want to back up on each device. For Android, the folders I recommend backing up are DCIM, documents and downloads; for the most part, everything you care about will be there. But I set up a few others, like Android/media/WhatsApp/Media to save all photos shared in chats.
Then this central device that's receiving everything from the others is where you do the "real" backups. In my case, I'm doing backups to an external HDD, and also to a cloud provider with restic[^3].
I highly recommend restic, genuinely great software for backups. It is incremental (like BTRFS snapshots), has backends for a bunch of providers, including any S3-compatible storage, and combined with rclone you have access to virtually any provider. It is encrypted, and because of how it was built you can still search/navigate your remote snapshots without having to download the entire snapshot (borg[^4] also does this); the most important aspect of this is that you can restore individual folders/files. And this is crucial, because most cloud storage providers will charge you more depending on how much bandwidth you have used. I have already needed to restore files and folders from my remote backups on multiple occasions and it works beautifully.
[^1]: https://github.com/syncthing/syncthing [^2]: https://github.com/Catfriend1/syncthing-android [^3]: https://github.com/restic/restic [^4]: https://github.com/borgbackup/borg
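A hedged sketch of that restic workflow (repository/bucket names are placeholders):
restic -r s3:s3.amazonaws.com/my-backup-bucket init                 # one-time: create the encrypted repository
restic -r s3:s3.amazonaws.com/my-backup-bucket backup /srv/sync     # regular runs only upload changed data
restic -r s3:s3.amazonaws.com/my-backup-bucket restore latest --include /srv/sync/Documents --target /tmp/restore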
Also external "slow" storage drives are fairly inexpensive now as a third backup if your whole life's images and important documents are at stake.
Always best to keep multiple copies of photos or documents that you care about in multiple places. Houses can flood or burn, computers and storage can fail. No need to be over-paranoid about it, but two copies of important things isn't asking too much of someone.
I recently found that Nextcloud is good enough to "collect" the photos from my family onto my NAS. And my NAS makes encrypted backups to a cloud using restic.
Dirvish [0] is worth looking at, light-weight and providing a good set of functionality (rotation, incremental backups, retention, pre/post scripts). It is a scripted wrapper around rsync [1] so you profit from all that functionality too (remote backups, compression for limited links, metadata/xattr support, various sync criteria, etc.)
This has been a lifesaver for 20+ years, thanks to JW Schultz!
The questions/topics in the article go really well along with it.
[0] https://dirvish.org/ [1] https://rsync.samba.org/
It permits you to configure more complicated backups more easily. You can inherit and override rules, which is handy if you need to do, for example, hundreds of similar-style backups with little exceptions. The same goes for include/exclude patterns, which quickly get complicated with plain rsync.
It generates indices for its backups that allow you to search for files over all snapshots taken (which gives you an overview of which snapshots contain some file for you to retrieve/inspect). See dirvish-locate.
Does expiration of snapshots, given your retention strategy (encoded in rules, see dirvish.conf and dirvish-expire).
It consistently creates long rsync commandlines you would otherwise need to do by hand.
In the end you get one directory per snapshot, giving a complete view over what got backed up. Unchanged files are hard-linked thus limiting backup storage consumption. Changed files are stored. But each snapshot has the whole backed up structure in it so you could rsync it back at restore time (or pick selectively individual files if needed). Hence the "virtual".
Furthermore: backup reporting (summary files, which can be piped into an e-mail or turned into a webpage), good and simple documentation, and pre/post scripts (these turn out to be really useful for doing DB dumps before taking a backup, etc.).
You'll still need to take care of all other aspects of designing your backup storage (SAS controllers/backplanes/cabling, disks, RAID, LVM2, XFS, ...) and networking (10 GbE, switching, routing if needed, ...) if you need that (it works for local-only setups too). We used this successfully in animation film development, for example, where it backed up hundreds of machines and the centralized storage for a renderfarm, about 2 PBytes worth (with Coraid and SuperMicro hardware). Rsync traversing the filesystem to find changes could be challenging at times with enormous filesystems (even based on only the metadata), but for that we created other backup jobs that were fed with specific file lists generated by the renderfarm processes, thus skipping the search for changes...
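Under the hood this is the classic rsync hard-link rotation; a hedged sketch of the kind of command line dirvish generates for you (host and vault paths are illustrative):
# Each snapshot is a full tree; unchanged files are hard-linked against the previous image
rsync -aH --delete --link-dest=/srv/vault/host1/2024-06-01/tree host1:/data/ /srv/vault/host1/2024-06-02/tree/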
ZFS relates to backups. In my case, one of the many things I like about ZFS is that it preserves hard links, which I used to reduce the space requirements for my primary `rsync` backup but which `rsync` blew up when copying to my remote backup. (Yes, there's a switch to preserve hard links, but it is not sufficiently performant for this application.)
(Episode #256 which is a number that resonates with many of us. ;) )
I built a disaster recovery system using Python and borg. It snapshots 51 block devices on a SAN and then uses borg to back up 71 file systems from these snapshots. The entire data set is then synced to S3. And yes, I've tested the result offsite: recovering file systems onto entirely different block storage and booting VMs, so I'm confident that it would work if necessary, although not terribly quickly, because the recovery automation is complex and incomplete.
I can't share it. But if you contemplate such a thing, it is possible, and the result is extremely low cost. Borg is pretty awesome.
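Not the poster's code, but the general shape of such a pipeline is roughly this (volume group, repository and bucket names are assumptions):
# Snapshot a block device, back the mounted snapshot up with borg, then sync the repo to S3
lvcreate --snapshot --size 10G --name data-snap /dev/vg0/data
mount -o ro /dev/vg0/data-snap /mnt/snap
borg create --stats /srv/borg/data::data-{now:%Y-%m-%d} /mnt/snap
umount /mnt/snap && lvremove -f /dev/vg0/data-snap
rclone sync /srv/borg/data s3remote:my-backup-bucket/borg/data     # "s3remote" is a pre-configured rclone remote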
> I wish for syncoid to add this feature. I want it to only copy snapshots to the backup server. The server then deletes old snapshots. At the moment it requires delete permissions.
You'll need to add the --no-elevate-permissions flag to your syncoid job.
> for the vast majority of individual cybernauts, that "1" is almost unachievable without paying for a backup service
It's a matter of the value of your data. Or how much it would cost you to lose it.
That depends on your goal, right? If it took me six months to recover my family photo backups, that'd be fine by me.
> I'm comfortable with append-only backup enforcement for push backups with Borg and Restic via SSH
On the face of it, "append-only access (no changes)" seems sound to me.
> I just need a standardized way to keep 25 years of photos for a family of 4
I have used pCloud for years with no issue.
Also external "slow" storage drives are fairly inexpensive now as a third backup if your whole life's images and important documents are at stake.
Always best to keep multiple copies of photos or documents that you care about in multiple places. Houses can flood or burn, computers and storage can fail. No need to be over-paranoid about it, but two copies of important things isn't asking too much of someone.
For me: one Windows/Mac machine with Backblaze. Dump everything to that machine. Second external drive backup just in case.
You still need to back it up though, as a NAS/RAID isn't a backup.
For my archlinux setup, configuration and backup strategy: https://github.com/gchamon/archlinux-system-config
For the backup system, I've cooked an automation layer on top of borg: https://github.com/gchamon/borg-automated-backups