viraptor · 3 years ago
> Most subvolumes can be mounted with noatime, except for /home where I frequently need to sort files by modification time.

That doesn't sound right. Noatime turns off recording of the last access time, not modification.
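
(A quick way to see the difference at the shell: with GNU ls, -t sorts by modification time, which noatime never touches, while adding -u switches the sort to access time.)

    ls -lt ~ | head     # sorted by mtime (modification time), unaffected by noatime
    ls -ltu ~ | head    # sorted by atime (access time), the field noatime stops updating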

traceroute66 · 3 years ago
> Most subvolumes can be mounted with noatime

This noatime thing is an old wives' tale that needs to die.

AFAIK, most "modern" filesystems (XFS, BTRFS, etc.) all default to relatime

relatime maintains atime but without the overhead

EDIT TO ADD:

Actually, I've just done a bit of searching... relatime has been the kernel mount default since >= 2.6.30! [1]

[1] https://kernelnewbies.org/Linux_2_6_30 (scroll to 1.11. Filesystems performance improvements)
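
A quick way to confirm what a given mount is actually using, and to flip one mount over for testing (the /home mount point here is just an example):

    findmnt -no OPTIONS /home              # shows relatime/noatime among the mount options
    sudo mount -o remount,noatime /home    # switch a single mount without editing fstab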

kilburn · 3 years ago
> but without the overhead

The cost of atime is an extra write every time you read something.

Relatime changes this to one atime update per day (by default), low enough that it usually doesn't matter.

However, that update per day may have significant impact when you are using Copy-on-Write filesystems (btrfs, zfs). Each time the atime field is updated you are creating a new metadata block for that file. Old blocks can be reclaimed by the garbage collector (at an extra cost), but not if they exist in some snapshot.

All of this means that if you use btrfs/zfs, have lots of small files, and take snapshots at least once per day, there's a noticeable performance difference between relatime and noatime.

I've been using noatime everywhere for several years and I've never noticed any downside. This is definitely my recommended solution.
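
For reference, a minimal /etc/fstab sketch of the "noatime everywhere" setup on btrfs; the UUID and subvolume names are placeholders:

    UUID=<fs-uuid>  /      btrfs  subvol=@,noatime      0  0
    UUID=<fs-uuid>  /home  btrfs  subvol=@home,noatime  0  0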

jdc · 3 years ago
I prefer lazytime.
pdenton · 3 years ago
IIRC, noatime is useful everywhere except /var/spool or /var/mail where certain daemons may depend on access time correctness.
londons_explore · 3 years ago
You can't depend on access time correctness... Because someone else can come along and grep through all your files and now they're all accessed right now.
cesarb · 3 years ago
> After freeing the new SATA SSD, I also filled it with butter. Yes, all the way, no GPT, no MBR, just Btrfs, whose subvolumes were used in place of partitions

I would not recommend doing that. It might work for now, but there's a high risk of the disk being seen as "empty" (since it has no partition table) by some tool (or even parts of the motherboard firmware), which could lead to data loss.

Having an MBR, either the traditional MBR or the "protective MBR" used by GPT, prevents that, since tools which do not understand that particular partition scheme or filesystem would then treat the disk as containing data of an unknown type, instead of being completely empty. The cost is just a couple of megabytes of wasted disk space, which is a trivial amount at current disk sizes (and btrfs itself probably "wastes" more than that in space reserved for its data structures).

Nowadays, I always use GPT, both because of its extra resilience (GPT has a backup copy at the end of the disk) and because of the MBR limits (both on partition size and the number of possible partition types).
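
A minimal sketch of the "one big partition" alternative; /dev/sdX is a placeholder, so double-check the device name before running anything like this:

    sudo parted -s /dev/sdX mklabel gpt mkpart primary 1MiB 100%   # GPT with one partition spanning the disk
    sudo mkfs.btrfs -L pool /dev/sdX1                              # btrfs inside the partition, not on the raw disk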

LanternLight83 · 3 years ago
Neat! I went looking for ways to create a protective MBR, learned a lot from the Gentoo wiki, and found some interesting info about how Windows does things in the link below, but the way to achieve this seems to be to just format the disk as GPT and then truncate it to the MBR (or just use one big GPT partition).

https://thestarman.pcministry.com/asm/mbr/GPT.htm

cesarb · 3 years ago
I would also not recommend formatting the disk as GPT and "truncating it to the protective MBR". Not only is there a good chance of the GPT re-appearing on its own, either because some software noticed it was corrupted and copied it from the backup copy at the end of the disk, or because some software noticed it was missing and created a new one, but there's also a chance of it once again being treated as if the whole disk were empty (since the "protective MBR" says it's a GPT disk, and the GPT has no entries).

If you want to have just the single MBR sector, then create a traditional MBR with a single partition spanning the whole disk instead of a GPT "protective MBR". But that will not gain much, since you should align your partitions (IIRC, usually to multiples of 1 megabyte) for performance and reliability reasons (not as important on an HDD, where you can align to just 4096 bytes or even 512 bytes depending on the HDD model, but very important on an SSD), and the space "wasted" by that alignment is more than enough to fit the GPT.
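
If you want to verify what a partitioning tool actually produced, something like this works (again, /dev/sdX is a placeholder):

    sudo sgdisk -p /dev/sdX                      # print the GPT and the partition start sectors
    sudo parted /dev/sdX align-check optimal 1   # check that partition 1 is optimally aligned
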
genghizkhan · 3 years ago
I would prefer to do this on zfs, for which there is a lovely installation guide on the openzfs docs site.

https://openzfs.github.io/openzfs-docs/Getting%20Started/Nix...

kaba0 · 3 years ago
I can vouch for how well it works. Using it on my personal laptop.
grumpyprole · 3 years ago
I tried ZFS-on-linux with Ubuntu 21.10 and it ate my data (ZFS panics when accessing certain files). Sure, Ubuntu does have a habit of using unstable kernels, but I was still disappointed. It should be stable at this point.
yjftsjthsd-h · 3 years ago
Neat:) I would never use btrfs myself[0], but very happy to see people exploring all variations of these ideas. The one thing that's starting to bug me though, as I read blog posts about installing nixos: why is the install process so imperative/non-declarative? Once the system is up, the whole thing fits in configuration.nix, but to get there we still have to use masses of shell commands. Is anyone working on bridging that last gap and supporting partitions, filesystems, and mounts (I think that's all that's left?) from nix itself?

[0] I lost 2 root filesystems to btrfs, probably because it couldn't handle space exhaustion. I'm paranoid now.

rrix2 · 3 years ago
I have a fork of `justdoit.nix` https://github.com/cleverca22/nix-tests/blob/master/kexec/ju... which generates installer images with an auto-partitioning script and a stub "configuration.nix" embedded in it, with basically just enough of a system to get nixops or morph to deploy to it. It's kind of a pain to get that working the first few times, since you have to wait for an image to bake and then test it in QEMU, and then make changes for NVMe, etc., but it's brought up three systems now.
yjftsjthsd-h · 3 years ago
Thanks! I'll have to give it a spin:)
nix23 · 3 years ago
I lose a volume to btrfs every two years (when I test it again), last time two months ago, with this simple "trick":

- Fill your root partition as root with "dd if=/dev/urandom of=./blabla bs=3M"

- rm blabla && sync (we don't want to be unfair to such a fragile system)

- Reboot and end up with an unbootable /

It's a mess; for a filesystem I would declare it alpha stage.

cmurf · 3 years ago
I can't reproduce this on a 5.17.5 kernel and loop device. So if you're able to trivially reproduce it, I'm guessing it's configuration specific. It's still a bug, but to find it means providing more detail about that configuration, including the kernel version.

Ideally reproduce it with mainline kernel or most recent stable. And post the details on the linux-btrfs list. They are responsive to bug reports.

If you depend on LTS kernels, then it's OK to also include in the report that the problem happens with kernel X but not kernel Y. Upstream only backports a subset of fixes and features to stable. Maybe the fix was missed and should have gone to stable, or maybe it was too hard to backport.

These are general rules for kernel development; they're not btrfs-specific. You'll find XFS and i915 devs asking for testing with more recent kernels too.

But in any case, problems won't get fixed without a report on an appropriate list.
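
For anyone who wants to try reproducing this without risking a real disk, a throwaway loop-device filesystem is enough (sizes and paths are arbitrary):

    truncate -s 8G /tmp/btrfs-test.img                      # sparse backing file
    LOOP=$(sudo losetup --find --show /tmp/btrfs-test.img)  # attach it to a free loop device
    sudo mkfs.btrfs "$LOOP"
    sudo mount "$LOOP" /mnt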

londons_explore · 3 years ago
All these "clever" filesystems can never guarantee not to run out of space for their own metadata. That's because even to delete a file they might need more space in the journal, or to un-copy-on-write some metadata.

The mistake, however, is that even though it isn't practical to make theoretical guarantees that the filesystem won't end up full and broken, it is very possible to make such a thing happen only in exceedingly unlikely cases. One runaway dd isn't that...
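
(For what it's worth, btrfs's mitigation for the delete-needs-metadata problem is a small "global reserve" pool; you can inspect it, along with the overall metadata headroom, with:)

    sudo btrfs filesystem usage /    # look at the "Global reserve" and unallocated figures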

Fnoord · 3 years ago
A lot of consumer-grade SSDs and flash (microSD, eMMC) don't like it when they are near full. That's why you should set reserved space and quotas. Note that the dd trick above requires root, which bypasses the reserved space. At some point, it's PEBCAK.
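
On btrfs the quota angle looks roughly like this (paths are examples, and the limited path must be a subvolume):

    sudo btrfs quota enable /           # turn on qgroup accounting for the filesystem
    sudo btrfs qgroup limit 200G /home  # cap the /home subvolume so it can't fill the whole pool
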
mekster · 3 years ago
Why isn't btrfs declared abandoned and just let people move on?

Every time I read about it, someone is losing data.

Thank god Ubuntu makes zfs very easy to use. No reason to even consider touching btrfs.

curt15 · 3 years ago
How does zfs handle that test?
yakubin · 3 years ago
It would probably make sense to display <subdomain>.srht.site for submissions which match this domain pattern, similar to github.io sites.
chocolatesnake · 3 years ago
+1. This style of domain shortening should use https://publicsuffix.org/ to determine how to trim subdomains. And lo, https://publicsuffix.org/list/public_suffix_list.dat contains "srht.site".
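
(A quick sanity check that the entry really is on the list:)

    curl -s https://publicsuffix.org/list/public_suffix_list.dat | grep -x 'srht.site'
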
anotherhue · 3 years ago
Perhaps we can call this a "Considered State" system. No more haphazardly rearranging bytes on your drive. Managed OS objects, linked at boot and your home dir / config dirs under VCS.

I use nixos with zfs on /home, /nix and /persist. Everything else is tmpfs, including /etc. Mostly you can configure applications to read config from /persist, but when not, a bind mount from /etc/whatever to /persist/whatever works pretty well.

I will never use a computer any other way again.
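
Outside of configuration.nix, the bind-mount fallback mentioned above is just this (with /persist/whatever and /etc/whatever as placeholder paths):

    sudo mkdir -p /persist/whatever /etc/whatever       # both directories need to exist
    sudo mount --bind /persist/whatever /etc/whatever   # /etc/whatever now shows the persistent copy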

kilburn · 3 years ago
> To make use of snapshots, the backup drive gotta be Btrfs as well. The compression level was turned up to 14 this time (default was 3):

Isn't this useless? My understanding is that compression is only done at file write time. When you "btrfs send" a snapshot, the data is streamed over without recompression, so there's no point in setting up a higher compression level in the backup disk.
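
For reference, the backup pipeline under discussion looks like this; the snapshot path and target mount are placeholders:

    # the source must be a read-only snapshot; the target is a directory on the backup filesystem
    sudo btrfs send /.snapshots/home-backup | sudo btrfs receive /mnt/backup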