Readit News
binwiederhier commented on Mind the encryptionroot: How to save your data when ZFS loses its mind   sambowman.tech/blog/posts... · Posted by u/6581
zielmicha · 3 months ago
Can you explain this in more detail? It doesn't seem true at first glance.

If you enable compression on ZFS that runs on top of dmcrypt volume, it will naturally happen before encryption (since dmcrypt is the lower layer). It's also unclear how it could be much faster, since dmcrypt generally is bottlenecked on AES-NI computation (https://blog.cloudflare.com/speeding-up-linux-disk-encryptio...), which ZFS has to do too.

binwiederhier · 3 months ago
Oh my bad. I misread your comment. You are doing ZFS on top of dmcrypt, not dmcrypt images/volumes on top of ZFS.
binwiederhier commented on Mind the encryptionroot: How to save your data when ZFS loses its mind   sambowman.tech/blog/posts... · Posted by u/6581
wkat4242 · 3 months ago
> Lesson: Test backups continuously so you get immediate feedback when they break.

This is a very old lesson that should have been learned by now :)

But yeah the rest of the points are interesting.

FWIW I rarely use ZFS native encryption. Practically always I use it on top of cryptsetup (which is a frontend for LUKS) on Linux, and GELI on FreeBSD. It's a practice from the time ZFS didn't support encryption and these days I just keep doing what I know.

binwiederhier · 3 months ago
ZFS native encryption is much more space efficient than dmcrypt + unencrypted ZFS when combined with zstd compression, because it can compress-then-encrypt instead of encrypt-then-(not-really-)compress. It is also much, much faster.

Source: I work for a backup company that uses ZFS a lot.
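The ordering argument can be demonstrated without ZFS at all. A quick sketch, using `/dev/urandom` as a stand-in for ciphertext (which is statistically indistinguishable from random data) and gzip as the compressor:

```shell
# 1 MiB of highly redundant data, standing in for typical file contents
head -c 1048576 /dev/zero > data.bin

# compress-then-encrypt: the compressor sees the redundancy first
gzip -c data.bin > compressed.gz        # shrinks to a few KB

# encrypt-then-compress: ciphertext looks random, so there is nothing to squeeze
head -c 1048576 /dev/urandom > cipher.bin
gzip -c cipher.bin > cipher.gz          # stays at ~1 MiB

wc -c compressed.gz cipher.gz
```

Note this failure mode applies when the encrypted layer sits *below* the compressor (e.g. ZFS compressing dmcrypt-encrypted image files stored on top of it); with ZFS on top of dmcrypt, compression still happens before encryption, as conceded above.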

binwiederhier commented on Adding my home electricity uptime to status.href.cat   aggressivelyparaphrasing.... · Posted by u/todsacerdoti
joezydeco · 4 months ago
I use ntfy for a whole bunch of personal projects. THANK YOU for keeping this service up and running.
binwiederhier · 4 months ago
I love hearing that. Anything worth sharing? I love hearing how people use it. My favorite is the guy protecting his apple tree from thieves: he added a camera and motion sensor, and sends himself a notification with the picture to catch the apple thief.
binwiederhier commented on Adding my home electricity uptime to status.href.cat   aggressivelyparaphrasing.... · Posted by u/todsacerdoti
fusionadvocate · 4 months ago
How does ntfy compare to Pushover?
binwiederhier · 4 months ago
Disclaimer: I am the ntfy maintainer. Pleasantly surprised to be mentioned, hehe.

Pushover is an amazing tool and works well. In my obviously biased opinion though, I think that ntfy has a ton more features than Pushover and is fully open source. You can self host all aspects of it or you can use the hosted version on ntfy.sh for free, without signups, or pay for higher limits.

I suggest you try out ntfy ;-)

binwiederhier commented on Debian 13 “Trixie”   debian.org/News/2025/2025... · Posted by u/ducktective
binwiederhier · 4 months ago
Thank you to all the Debian volunteers that make Debian and all its derivatives possible. It's remarkable how many people and businesses have been enabled by your work. Thank you!

On a personal note, Trixie is very exciting for me because my side project, ntfy [1], was packaged [2] and is now included in Trixie. I only learned that it was included very late in the cycle, when the package maintainer asked for license clarifications. As a result the Debian-ized version of ntfy doesn't contain a web app (which is a reaaal bummer) and has a few things "patched out" (which is fine). I approached the maintainer and just recently added build tags [3] to make it easier to remove Stripe, Firebase and WebPush, so that the next Debian-ized version will not have to contain (so many) awkward patches.

As an "upstream maintainer", I must say it isn't obvious at all why the web app wasn't included. It was clearly removed on purpose [4], but I don't really know what to do to get it into the next Debian release. Doing an "apt install ntfy" is going to be quite disappointing for most if the web app doesn't work. Any help or guidance is very welcome!

[1] https://github.com/binwiederhier/ntfy

[2] https://tracker.debian.org/pkg/ntfy

[3] https://github.com/binwiederhier/ntfy/pull/1420

[4] https://salsa.debian.org/ahmadkhalifa/ntfy/-/blob/debian/lat...

binwiederhier commented on Make Your Own Backup System – Part 1: Strategy Before Scripts   it-notes.dragas.net/2025/... · Posted by u/Bogdanp
immibis · 5 months ago
My personal external backup is two external drives in RAID1 (RAID0 wtfff?). One already failed, of course the Seagate one. It failed silently, too - a few sectors just do not respond to read commands and this was discovered when in-place encrypting the array. (I normally would avoid Seagate consumer drives if it wasn't for brand diversity. Now I have two WD drives purchased years apart.)

It's a home backup so not exactly relevant to most of what you said - just wanted to underscore the point about storage media sucking. Ideally I'd periodically scrub each drive independently (can probably be done by forcing a degraded array mode, but be careful not to mess up the metadata!) against checksums made by backup software. This particular failure mode could also be caught by dd'ing to /dev/null.

binwiederhier · 5 months ago
ZFS really shines here with its built-in "zpool scrub" command and checksumming.

Even though I am preaching "application consistent backups" in my original comment (because that's what's important for businesses), my home backup setup is quite simple and isn't even crash consistent :-) I do: pull via rsync to a backup box and take a ZFS snapshot, then rsync to a Hetzner storage box (ZFS-snapshotted there, weekly).

My ZFS pool consists of multiple mirrored vdevs, and I scrub the entire pool once a month. I've uncovered drive failures and storage controller failures this way. At work, we also use ZFS, and we've even uncovered failures of entire product lines of hard drives.
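A sketch of that setup; the pool name `tank` and the device names are placeholder assumptions:

```shell
# two mirrored vdevs in one pool (device names are placeholders)
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd

# read every block, verify it against its checksum, and repair
# silently corrupted blocks from the mirror copy
zpool scrub tank

# report scrub progress and any read/write/checksum errors per device
zpool status -v tank
```

A monthly scrub can be driven by a cron entry such as `0 3 1 * * /sbin/zpool scrub tank` (first of the month, 3 AM), though many distros ship a systemd timer for this.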

binwiederhier commented on Make Your Own Backup System – Part 1: Strategy Before Scripts   it-notes.dragas.net/2025/... · Posted by u/Bogdanp
mekster · 5 months ago
What do you mean, you can't be sure it recovers? It's not about hoping for inconsistent states to be recovered by the DB; they're supposed to be in a good state with file system snapshotting.

https://serverfault.com/a/806305

https://zrepl.github.io/v0.2.1/configuration/snapshotting.ht...

binwiederhier · 5 months ago
Ha! I did not expect a reference to `innodb_flush_log_at_trx_commit` here. I wrote a blog post a few years ago about MySQL lossless semi-sync replication [1] and I've had quite enough of innodb_flush_log_at_trx_commit for a lifetime :-)

Depending on the database you're using, and on your configuration, they may NOT recover, or require manual intervention to recover. There is a reason that MSSQL has a VSS writer in Windows, and that PostgreSQL and MySQL have their own "dump programs" that do clean backups. Pulling the plug (= file system snapshotting) without involving the database/app is risky business.

Databases these days are really resilient, so I'm not saying that $yourfavoriteapp will never recover. But unless you involve the application or a VSS writer (which does that for you), you cannot be sure that it'll come back up.

[1] https://blog.heckel.io/2021/10/19/lossless-mysql-semi-sync-r...
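For comparison, the application-involved equivalents are one-liners; the database name `mydb` is a placeholder:

```shell
# PostgreSQL: pg_dump runs inside a single MVCC snapshot, so the dump is
# transactionally consistent without stopping writers
pg_dump -Fc mydb > mydb.dump

# MySQL/InnoDB: --single-transaction opens a consistent-read transaction
# instead of locking all tables
mysqldump --single-transaction --routines mydb > mydb.sql
```

Either of these gives you an application-consistent dump you can snapshot and ship with everything else, sidestepping the "will it recover from a pulled plug?" question entirely.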

binwiederhier commented on Make Your Own Backup System – Part 1: Strategy Before Scripts   it-notes.dragas.net/2025/... · Posted by u/Bogdanp
mekster · 5 months ago
For regular DB like MySQL/PostgreSQL, just snapshot on zfs without thinking.
binwiederhier · 5 months ago
Databases these days are pretty resilient to restoring from crash consistent backups like that, so yes, you'll likely be fine. It's a good enough approach for many cases. But you can't be sure that it really recovers.

However, ZFS snapshots alone are not a good enough backup if you don't off-site them somewhere else. A server/backplane/storage controller could die or corrupt your entire zpool, or the place could burn down. Lots of ways to fail. You gotta at least zfs send the snapshots somewhere.
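A minimal off-site sketch, assuming a dataset `tank/data` and an SSH-reachable host `backuphost` with a `backup` pool:

```shell
# take a snapshot, then replicate the full stream off-site
zfs snapshot tank/data@2025-08-01
zfs send tank/data@2025-08-01 | ssh backuphost zfs receive -u backup/data

# subsequent runs send only the delta between the two snapshots
zfs snapshot tank/data@2025-09-01
zfs send -i @2025-08-01 tank/data@2025-09-01 | ssh backuphost zfs receive -u backup/data
```

The `-u` keeps the received dataset unmounted on the target; tools like zrepl or sanoid/syncoid automate the snapshot rotation and incremental sends.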

binwiederhier commented on Make Your Own Backup System – Part 1: Strategy Before Scripts   it-notes.dragas.net/2025/... · Posted by u/Bogdanp
kijin · 5 months ago
Exactly. There is no longer any point in backing up an entire "server" or a "disk". Servers and disks are created and destroyed automatically these days. It's the database that matters, and each type of database has its own tooling for creating "application consistent backups".
binwiederhier · 5 months ago
This strongly depends on your environment and on your RTO/RPO.

Sure, there are environments that have automatically deployed, largely stateless servers. Why back them up if you can recreate them in an hour or two? ;-)

Even then, though, if we're talking about important production systems with an RTO of only a few minutes, then having a BCDR solution with instant virtualization is worth its weight in gold. I may be biased though, given that I professionally write BCDR software, hehe.

However, many environments are not like that: There are lots of stateful servers out there with bespoke configurations, lots of "the customer needed this to be that way and it doesn't fit our automation". Having all servers backed up the same way gives you peace of mind if you manage servers for a living. Being able to just spin up a virtual machine of a server and run things from a backup while you restore or repair the original system is truly magical.

u/binwiederhier

Karma: 2847 · Cake day: July 21, 2014
About
Head of Engineering at Slide & founder and maintainer of ntfy.sh

https://ntfy.sh - https://slide.tech - https://heckel.io - https://github.com/binwiederhier
