I used to be in the "why BTRFS" camp and would religiously install plain ext4 without LVM on Fedora for my laptops, desktops, and servers. When I saw release after release keep offering BTRFS by default, I decided to try it on a recent laptop install; honestly, given the appeal of deduplication, checksumming, snapshotting, and so many of the other features that modern filesystems (e.g., ZFS) generally come with, I just took the plunge and installed it.
I can safely say it has not presented any problems for me thus far, and I am at the stage of my life where I realize that I don't have the time to fiddle as much with settings. If the distributions are willing to take that maintenance on their shoulders, I'm willing to trust them and deal with the consequences – at least I know I'm not alone.
It's obviously not there yet as a NAS filesystem, a drop-in ZFS replacement, etc. But if what you take away from that is that BTRFS is no good as a filesystem on a single-drive system, you're missing out. Just a few weeks ago I used a snapshot to get myself out of a horrible rebase issue that lost half my changes. Could I have gone to the reflog and done other magic? Probably. But browsing my .snapshots directory was infinitely easier!
Snapshots are the best thing in the world for me on Arch. I'm specifically using it because I like to tinker with exotic hardware and it has the most sane defaults for most of the things I care about. Pacman is great, but the AUR can be a bit sketchy sometimes in the choices that package authors make. Having snapshots taken every time package changes happen, which I can roll back to from my boot loader, is _awesome_. If you've ever used a distro that does kernel backups in your boot loader, it's like that, except it's whole package sets at a time! And being able to use subvolumes to control which parts of a snapshot to restore is awesome. I can roll back my system without touching my home directory, even on a single-drive setup!
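For anyone curious what the mechanism looks like without the tooling (hooks like snap-pac/snapper automate this on pacman transactions), here's a minimal hand-rolled sketch of taking a read-only snapshot of the root subvolume before an upgrade. The mount point and snapshot directory are assumptions; adjust for your own subvolume layout:

    #!/usr/bin/env python3
    # Sketch: take a read-only btrfs snapshot of / before a package upgrade.
    # Assumes / is a btrfs subvolume and /.snapshots lives on the same filesystem.
    import datetime
    import subprocess

    def snapshot_root(snap_dir="/.snapshots"):
        stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
        target = f"{snap_dir}/root-pre-upgrade-{stamp}"
        # `btrfs subvolume snapshot -r` creates a read-only snapshot.
        subprocess.run(["btrfs", "subvolume", "snapshot", "-r", "/", target], check=True)
        return target

    if __name__ == "__main__":
        print("created", snapshot_root())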
> But browsing my .snapshots directory was infinitely easier!
I second this, however I don't use the filesystem to get this functionality. I most often use XFS and have a cron job that calls an old Perl script called "rsnapshot" [1], which makes use of hardlinks to deal with duplicate content and save space. One can create both local and remote snapshots. Similar to your situation, I have used this to fix corrupted git repos, which I could have done within git itself, but rsnapshot was many times easier and I am lazy.

[1] - https://wiki.archlinux.org/title/rsnapshot
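The trick rsnapshot leans on is rsync's --link-dest: unchanged files in the new snapshot become hard links into the previous one, so every snapshot looks like a full copy but only changed files consume new space. A rough sketch of that idea, with made-up source and destination paths:

    #!/usr/bin/env python3
    # Sketch of the hardlink-rotation idea behind rsnapshot: each run rsyncs
    # SOURCE into a new dated directory, hard-linking unchanged files against
    # the previous snapshot via --link-dest.
    import datetime
    import os
    import subprocess

    SOURCE = "/home/"              # what to back up (placeholder)
    DEST = "/backup/snapshots"     # where snapshots accumulate (placeholder)

    def take_snapshot():
        os.makedirs(DEST, exist_ok=True)
        previous = sorted(os.listdir(DEST))
        new = os.path.join(DEST, datetime.datetime.now().strftime("%Y-%m-%d_%H%M%S"))
        cmd = ["rsync", "-aHAX", "--delete"]
        if previous:
            # Unchanged files become hard links to the newest existing snapshot.
            cmd.append("--link-dest=" + os.path.join(DEST, previous[-1]))
        cmd += [SOURCE, new]
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        take_snapshot()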
By the way, "git reflog" can usually get you out of horribly botched rebases without using special filesystem features: git reset --hard <sha1 of last good state from reflog>
You can typically make a branch and push it up to the server, and I usually do this before any rebase. And will you reliably be able to lean on filesystem snapshots in future development environments (Mac, Windows)?
BTRFS works fine. I use it on my everyday laptop without problems. Compression can help on devices without a lot of disk, and so does copy-on-write. However, BTRFS has its drawbacks; for example, it's tricky to have a swapfile on it (it's now possible with some special attributes).
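For reference, the "special attributes" part amounts to making the swapfile NoCoW and uncompressed; as far as I know this only works on kernel 5.0+ with the file sitting on a single device. A rough sketch of the usual steps, with placeholder path and size:

    #!/usr/bin/env python3
    # Sketch: create a swapfile on btrfs (kernel 5.0+). The file must start
    # out empty, get the NoCoW attribute (chattr +C) before any data lands
    # in it, and must not be compressed or included in snapshots.
    import subprocess

    SWAPFILE = "/swap/swapfile"   # placeholder; ideally on its own subvolume
    SIZE = "8G"                   # placeholder size

    def run(*cmd):
        subprocess.run(cmd, check=True)

    run("truncate", "-s", "0", SWAPFILE)    # create an empty file
    run("chattr", "+C", SWAPFILE)           # disable copy-on-write
    run("fallocate", "-l", SIZE, SWAPFILE)  # allocate the space
    run("chmod", "600", SWAPFILE)
    run("mkswap", SWAPFILE)
    run("swapon", SWAPFILE)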
Also, I wouldn't trust BTRFS for the purpose of data archiving, because ext4 is a proven (and simpler) filesystem, so it's less likely to become corrupt, and you're more likely to be able to recover data from it if it does become corrupt (or the disk has some bad sectors and that sort of thing).
> Also, I wouldn't trust BTRFS for the purpose of data archiving, because ext4 is a proven (and simpler) filesystem, so it's less likely to become corrupt, and you're more likely to be able to recover data from it if it does become corrupt (or the disk has some bad sectors and that sort of thing).
On the contrary: I'm using btrfs and not ext4 on my NAS (Synology) specifically because the former does checksumming and bitrot detection and the latter does not.
I'm not sure how you prove that ext4 is less likely to become corrupt. But it is easily shown that it's less likely to inform you that there's corruption.
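For anyone who hasn't internalized why that matters: a silent bit flip still comes back from the drive as a "successful" read, and only a checksum recorded at write time can catch it. A toy illustration of the mechanism (btrfs uses crc32c by default, with xxhash/blake2/sha256 as options, I believe; plain crc32 here just to show the idea):

    # Toy illustration: a stored checksum catches silent corruption that the
    # storage layer happily returns as a "successful" read.
    import zlib

    block = bytearray(b"important data" * 64)
    stored_csum = zlib.crc32(block)          # computed when the block was written

    # Simulate bitrot: a single bit flips on disk, no I/O error is reported.
    block[100] ^= 0x01

    if zlib.crc32(block) != stored_csum:
        print("checksum mismatch: corruption detected (return EIO / read another copy)")
    else:
        print("data verified")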
Quite a lot of earlier filesystems assume that the hardware either returns correct data or reports a problem, e.g. an uncorrectable read error or media error. That's been shown to be untrue even with enterprise-class hardware, largely by the ZFS developers, hence why ZFS exists. It's also why ZFS has had quite a lot less "bad press": it was developed in a kind of skunkworks, whereas Btrfs was developed out in the open, where quite a lot of early users were running it on ordinary everyday hardware.

And as it turns out, we see most hardware, by make/model, doing mostly the right things, while a small number of makes/models, making up a significant minority of usage volume, don't do the right things. Hence, Btrfs has always had full checksumming of data and metadata. Both XFS and ext4 were running into the same kinds of problems Btrfs (and ZFS before it) revealed - torn writes, misdirected writes, bit rot, memory bit flips, and even SSDs exhibiting pre-fail behavior by returning zeros or garbage instead of data (or metadata). XFS and ext4 subsequently added metadata checksums, which further reinforced the understanding that devices sometimes do the wrong thing and also lie about it.

It is true that overwriting filesystems have a better chance of repairing metadata inconsistencies. A big reason why is locality: they have fixed locations on disk for different kinds of metadata, so a lot of correct assumptions can be made about what should be in each location. Btrfs doesn't have that at all; it has very few fixed locations for metadata (pretty much just the super blocks). Since few assumptions can be made about what's found in its metadata areas, it's harder to fix.
So the strategy is different with Btrfs (and probably ZFS too, since it has a fairly nascent fsck even compared to Btrfs's): cheap and fast replication of data via snapshots and send/receive, which requires no deep traversal of either the source or the destination, and equally cheap and fast restore (replication in reverse) using the same method. Conversely, conventional backup and restore are meaningfully different operations when reversed, so you have to test both the backup and the restore to really understand whether your backup method is reliable. Replication is going to be your disaster go-to rather than trying to fix the filesystem; fixing is almost certainly going to take much longer than restoring. If you don't have current backups, Btrfs at least now has various rescue mount options to make it more tolerant of a broken filesystem, though as a consequence you also have to mount read-only. There's a pretty good chance you can still get your data out, even if it's inconvenient to have to wipe the filesystem and create a new one. It'll still be faster than mucking with repair.
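Concretely, that replication strategy is just read-only snapshots piped through send/receive; after the first full copy, each run only ships the delta against a previous snapshot. A rough sketch, with assumed snapshot paths and backup mount point:

    #!/usr/bin/env python3
    # Sketch: incremental btrfs replication with send/receive. The first run
    # sends a full read-only snapshot; later runs pass `-p <parent>` so only
    # the changes relative to that parent are transmitted.
    import subprocess

    def replicate(snapshot, parent=None, dest="/mnt/backup"):
        send_cmd = ["btrfs", "send"]
        if parent:
            send_cmd += ["-p", parent]   # incremental against an existing snapshot
        send_cmd.append(snapshot)

        send = subprocess.Popen(send_cmd, stdout=subprocess.PIPE)
        subprocess.run(["btrfs", "receive", dest], stdin=send.stdout, check=True)
        send.stdout.close()
        if send.wait() != 0:
            raise RuntimeError("btrfs send failed")

    # Example (snapshots must be read-only):
    # replicate("/.snapshots/root-2023-01-02", parent="/.snapshots/root-2023-01-01")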
Also, since kernel 5.3 Btrfs has both read-time and write-time tree checkers that verify certain trees for consistency rather than just blindly accepting checksums. Various problems are exposed and stopped before they can cause worse ones, and this even helps find memory bit flips and Btrfs bugs. Btrfs doesn't just complain about hardware-related issues; it'll rat itself out if it's to blame for the problem - which at this point isn't happening any more often than with ext4 or XFS in very large deployments (millions of instances).
I didn't know about the swapfile thing... but TIL. I had been wondering how to make a non-snapshotted volume for some other reasons, though, so that's a two-birds-with-one-stone thing, thank you!
I have not used BTRFS for years, but I remember at some point a BTRFS regression prevented me from booting my system. It is hard to regain trust after such a meltdown from such a fundamental component. That said, I believe my Synology NAS uses btrfs and it has never had an issue.
I've been in the same boat. Around 2012 or 2013 I put BTRFS on my DIY NAS/media server. For some reason, totally unprovoked, the array just went belly-up with no warning or logs. I tried and tried without success and couldn't recover it. Fortunately I had good, recent backups, so I restored to ext4+LVM and I'm still there 10 years later.
BTRFS sounded cool with all its new features, but the reality is that ext4+LVM does absolutely everything I need and it's never given me any issues.
I'm sure BTRFS is much more robust these days, but I'm still gun shy!
It certainly used to be the case that BTRFS had some nasty behaviour patterns when it started to run low on space. It could well be that it has not presented any problem for you yet.
On the other hand, those days might be behind it. I haven't kept track.
There are still edge cases, but I can count on one hand the number of users who have run into them since Fedora switched to Btrfs by default (on desktops) two years ago (and for Cloud edition in the most recent Fedora release late last year).
The upstream maintainer of btrfs-progs also maintains btrfsmaintenance (https://github.com/kdave/btrfsmaintenance), a set of scripts for doing various tasks, one of which is a periodic filtered balance focusing on data block groups. I don't run it myself, because (a) I haven't run into such a bug in years and (b) I want to run into such a bug so I can report it and get it fixed. The maintenance scripts basically paper over any remaining issues, for folks who are more interested in avoiding issues than in bug reporting (a completely reasonable position).

There's been a metric ton of work on ENOSPC bugs over the years, but a large pile of them were resolved by the ticketed ENOSPC system quite a few years ago now (circa 2015? 2016?).
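For context, that "periodic filtered balance" boils down to something like the following - rewriting only data block groups that are mostly empty so their space gets handed back to the allocator before free-space corner cases can bite. The usage threshold here is an arbitrary example, not the btrfsmaintenance default:

    #!/usr/bin/env python3
    # Sketch: a filtered balance that only touches data block groups which are
    # at most N% full, similar in spirit to what btrfsmaintenance runs from a
    # cron job or systemd timer.
    import subprocess

    def filtered_balance(mountpoint="/", usage_pct=50):
        subprocess.run(
            ["btrfs", "balance", "start", f"-dusage={usage_pct}", mountpoint],
            check=True,
        )

    if __name__ == "__main__":
        filtered_balance()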
bcachefs is probably the other big name here, but since distros still seem to pick btrfs, I don't think it's considered "production ready" yet. The bcachefs website still labels it as "beta" quality.
Are you using a UPS on the desktop? A recent HN thread highlighted BTRFS issues, especially with regard to data loss on power loss. Also, there's a "write hole" issue in some RAID configurations - RAID 5 or 6, I think.
That said, I'm thinking about leaving a BTRFS partition unmounted and mounting it only to perform backups, taking advantage of the snapshotting features.
This would seem to suggest that ANY raid configuration is unacceptable.
https://btrfs.wiki.kernel.org/index.php/Status

Device replace
--------------
>Device replace and device delete insist on being able to read or reconstruct all data. If any read fails due to an IO error, the delete/replace operation is aborted and the administrator must remove or replace the damaged data before trying again.
Device replace isn't something where "mostly ok" is a good enough status.
I was an early adopter, and some bad experiences early on made it a bitter pill. I swore it off for a decade, and about a year and a half ago I came back around. It's MUCH MUCH better now. With automated "sensible" settings for the btrfsmaintenance tools, it's actually just fine now.
I honestly don't see the point of Btrfs. ZFS is more mature, more stable, has better defaults, etc. The only reason Btrfs existed was because ZFS wasn't GPL. But now that we have a way of running ZFS on Linux, Btrfs seems utterly redundant.
I mean, I'm all for choice on Linux, but when it comes to filesystems I'd rather have fewer choices and have those choices be absolutely rock solid. Btrfs might have gotten to that stage now, however it's eaten enough people's data (including mine) over the years that I can't help wondering why people bothered to persist with using it when a better option was available.
Anecdata: once upon a time I installed SuSE with their default choice of ReiserFS (v3) as the root partition. A couple of months later that filesystem was dead beyond repair. I don't know whether I did something wrong, but I've been very wary of "defaults" ever since. That said, that was a different time, and I tend to see a ZFS or a BTRFS in my near future.
One of the slightly weird things about btrfs is that some software (e.g. OBS) seems to have a hard time getting the free space on disk - maybe because it assumes the usual way of getting free space works (which it doesn't on btrfs).
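My guess at the mechanism: most software just calls statfs/statvfs on a path and treats "available blocks" as free space, which on btrfs can be misleading (raid profiles, the data/metadata split, shared extents), whereas `btrfs filesystem usage <mountpoint>` shows the fuller picture. The naive approach, for comparison (purely illustrative):

    # The naive free-space check most software does: statvfs on a path.
    # On btrfs this number can be misleading because of raid profiles, the
    # data/metadata split, and shared extents; `btrfs filesystem usage`
    # gives a more complete breakdown.
    import os

    def naive_free_bytes(path="/"):
        st = os.statvfs(path)
        return st.f_bavail * st.f_frsize   # "available" blocks * fragment size

    print(round(naive_free_bytes() / 2**30, 1), "GiB reported free")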
> If the distributions are willing to take that maintenance on their shoulders, I'm willing to trust them and deal with the consequences – at least I know I'm not alone.
But then they make changes that add an insane amount of complexity, and suddenly you're running into random errors and googling all the time to try to find the magical fixes to all the problems you didn't have before.
Although this would be an interesting way to drag some of my old NTFS filesystems kicking & screaming into the 21st century, I'd never do one of these in-place conversions again. I tried to go from ext3 to btrfs several years ago - and it would catastrophically fail after light usage. (We're talking less than a few hours of desktop-class usage. In retrospect I think it was `autodefrag`/`defragment` that would kill it.) I tried that conversion a few times and it never worked; I think I even tried to go ext3->ext4->btrfs. This was on an Arch install, so (presumably) it was the latest and greatest kernel & userspace available at the time.
I eventually gave up (/ got sick of doing restores) and just copied the data onto a fresh btrfs volume. That worked "great" up until I realized (a) I had to turn off CoW for a bunch of things I wanted to snapshot, (b) you can't actually defrag in practice because it unlinks shared extents, and (c) btrfs on a multi-drive array has a failure mode that will leave your root filesystem read-only, which is just a footgun that shouldn't exist in a production-facing filesystem. I should add that these were not particularly huge filesystems: the ext3 conversion fiasco was ~64G, and my servers were ~200G and ~100G respectively. I was also doing "raid1"/"raid10" style setups, not exercising the supposedly broken raid5/raid6 code in any way.
I think I probably lost three or four filesystems that were supposed to be "redundant" before I gave up and switched to ZFS. Between (a) & (b) above, btrfs just has very few advantages compared to ZFS. Really the only thing going for it was being available in mainline kernel builds. (Which, frankly, I don't consider to be an advantage the way the GPL zealots on the LKML seem to think it is.)
> ...btrfs just has very few advantages compared to ZFS. Really the only thing going for it was being available in mainline kernel builds.
ZFS doesn't have defrag, and BtrFS does.
There was a paper recently on purposefully introducing fragmentation, and the approach could drastically reduce performance on any filesystem that was tested:

https://www.usenix.org/system/files/login/articles/login_sum...

"Btrfs, ext4, F2FS, XFS, ZFS all age - up to 22x on HDD, up to 2x on SSD."

https://pdfs.semanticscholar.org/b743/7111bf04a803878ebacbc2...

The paper also mentions ZFS: https://www.cs.unc.edu/~porter/pubs/fast17.pdf

This can be fixed in BtrFS. I don't see how to recover from this on ZFS, apart from a massive resilver.
I'm pretty dependent on the ability to deduplicate files in place without massive overhead. The built-in defrag on BTRFS is unfortunate, but I think you can defragment and then re-deduplicate.
I don't know, I'm just hoping for a filesystem that can get these features right to come along...
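For what it's worth, that two-step is the usual workaround: defragment (which may unshare extents), then run an out-of-band dedupe tool such as duperemove to reshare them. A rough sketch with a placeholder path - and note it can churn a lot of I/O:

    #!/usr/bin/env python3
    # Sketch: defragment a btrfs path, then re-deduplicate it. Defragmenting
    # can break shared extents (reflinks), so an out-of-band dedupe pass with
    # duperemove is run afterwards to reshare identical extents.
    import subprocess

    PATH = "/data"   # placeholder

    # Recursive defragment (a compression algorithm can be forced with e.g. -czstd).
    subprocess.run(["btrfs", "filesystem", "defragment", "-r", PATH], check=True)

    # duperemove: -d actually submits the dedupe ioctls, -r recurses.
    subprocess.run(["duperemove", "-dr", PATH], check=True)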
In-place conversion of NTFS? You either still believe in a god or need to google the price of hard drives these days. Honest question though: why would anybody do an in-place conversion of partitions?
>You either still believe in a god or need to google the price of hard drives these days.
That was pretty funny, and I agree a thousand times over. When I was younger (read: had shallower pockets) I was willing to spend time on these hacks to avoid the need for intermediate storage. Now that I'm wiser, crankier, and surrounded by cheap storage: I would rather just have Bezos send me a pair of drives in <24h to chuck in a sled. They can populate while I'm occupied and/or sleeping.
My time spent troubleshooting this crap when it inevitably explodes is just not worth the price of a couple of drives; and if I still manage to cock everything up at least the latter approach leaves me with one or more backup copies. If everything goes according to plan well hey the usable storage on my NAS just went up ;-). I feel bad for the people that will inevitably run this command on the only copy of their data. (Though I would hope the userland tool ships w/ plenty of warnings to the contrary.)
Just because something is cheap doesn't mean I'm fine with buying it for a one-shot use.
Buying an extra disk for just the conversion is wasteful, and then you need space to keep it stashed forever when you never use it. Not at all sustainable, I'd rather leave the hardware on the market for people who _actually_ need it.
So you buy an external 1TB drive just for the sake of the conversion, then create a new partition, then copy your 1TB of data over, then... what? Wipe your PC, boot into a live CD, then copy the partition over? Do you find this easier/more worthwhile than an in-place conversion? How/why?
From the same person who made WinBtrfs and Quibble - a Btrfs installable filesystem driver and a bootloader for Windows NT. And yes, with all of that one can boot and run Windows natively on Btrfs, at least in theory.
That's in common with the conversion from ext[234] and reiserfs, too. It makes it easy both to undo the conversion and to inspect the original image in case the btrfs metadata somehow became wrong.
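For the ext[234] path that looks roughly like this: btrfs-convert keeps an image of the original filesystem in a saved subvolume (ext2_saved, if I remember right), and rollback is only possible until you delete it. The device name is a placeholder, and none of this removes the need for a real backup first:

    #!/usr/bin/env python3
    # Sketch: in-place ext4 -> btrfs conversion and (optional) rollback.
    # btrfs-convert stores an image of the original filesystem in a saved
    # subvolume, which is what makes undoing the conversion possible.
    import subprocess

    DEV = "/dev/sdX1"   # placeholder; must be unmounted and fsck-clean

    def run(*cmd):
        subprocess.run(cmd, check=True)

    run("fsck.ext4", "-f", DEV)    # conversion wants a clean filesystem
    run("btrfs-convert", DEV)      # the in-place conversion itself

    # To go back to ext4 (only while the saved image subvolume still exists):
    # run("btrfs-convert", "-r", DEV)

    # To commit to btrfs, mount it and delete the saved image, e.g.:
    # run("btrfs", "subvolume", "delete", "/mnt/ext2_saved")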
In a former life I ran a web site with a co-founder. We needed to upgrade our main system (we only had 2), and had mirrored RAID1 hard drives and some backups, though not great ones. We tested the new system and it appeared to work fine, so the plan was to take it to the colo, rsync the old system to the new one, make sure everything ran okay, then bring the old system home.
We did the rsync, started the new system, it seemed to be working okay, but then we started seeing some weird errors. After some investigation, it looked like the rsync didn't work right. We were tired, it was getting late, so we decided to put one of the original mirrors in the new system since we knew it worked.
Started up the new system with the old mirror; it ran for a while, then started acting weird too. At that point we only had 1 mirror left, were beat, and decided to pack the old and new systems up, bring it all back to the office (my co-founder's house!), and figure out what was going on. We couldn't afford to lose the last mirror.
After making another mirror in the old system, we started testing the new system. It seemed to work fine with 1 disk in either bay (it had 2). But when we put them in together and started doing I/O from A to B, it corrupted drive A. We weren't even writing to drive A!
For the next test, I put both drives on 1 IDE controller instead of each on its own controller. (Motherboards had 2 IDE controllers, each supported 2 drives). That worked fine.
It turned out there was a defect on the motherboard: if both IDE ports were active, it got confused and sent data to the wrong drive. We needed the CPU upgrade, so we ended up running both drives on 1 IDE port, and it worked fine until we replaced the system a year later.
But we learned a valuable lesson: never, ever use your production data when doing any kind of upgrade. Make copies, trash them, but don't use the originals. I think that lesson applies to the idea of doing an in-place conversion from NTFS to Btrfs, even if it says it keeps a backup. Do yourself a favor and copy the whole drive first, then mess around with the copy.
I used btrfs on an EC2 instance with two local SSDs that were mirrored, for a CI pipeline running Concourse. It would botch up every few months, and I got to the point of automating setup so that it was easy to recreate. I never did find the actual source of the botch-ups, though. It was either the local PostgreSQL instance running on btrfs, btrfs itself, or the Concourse software. I pretty much ruled out PostgreSQL as the originating source of the issue, but didn't get further than that. I don't know if anyone would suspect mdadm.
Other than whatever that instability was, I can say that the performance was exceptional and would use that setup again, with more investigation into causes of the instability.
What I really want: ext4 performance with instant snapshots, plus optional transparent compression when it can improve performance. There is only one project promising to deliver this AFAIK - bcachefs - but it still isn't mature yet.
You can actually use a sparse zvol pretty decently for this too. You don't get the file level checksumming or some of the other features but you can still snapshot the volume and get pretty good performance out of it. I've got a few programs that don't get along too well with zfs that I use that way.
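Roughly what that looks like, for anyone who hasn't used zvols: a sparse (thin-provisioned) volume, an ordinary filesystem on top, and ZFS snapshotting the whole block device underneath. Pool, dataset, and size are placeholders:

    #!/usr/bin/env python3
    # Sketch: a sparse zvol with ext4 on top, snapshotted at the block level
    # by ZFS. Pool/dataset names and the size are placeholders.
    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    # -s = sparse (thin provisioned), -V = volume size
    run("zfs", "create", "-s", "-V", "100G", "-o", "compression=lz4", "tank/extvol")

    # Put a normal filesystem on the exposed block device.
    run("mkfs.ext4", "/dev/zvol/tank/extvol")

    # Snapshot / roll back the whole volume like any other dataset.
    run("zfs", "snapshot", "tank/extvol@before-experiment")
    # run("zfs", "rollback", "tank/extvol@before-experiment")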
Does any other FS on Linux provide those?
https://www.redhat.com/en/blog/look-vdo-new-linux-compressio...
Try to fill your root filesystem with dd, then remove the file and sync, reboot, and enjoy a non-booting OS ;) It's like they don't test it at all.