I haven't used TrueNAS since it was still called FreeNAS.
I liked FreeNAS for a while, but after a certain point I kind of just learned how to properly use Samba and NFS and ZFS, and after that I kind of felt like it was just getting in the way.
Nowadays, my "NAS" is one of those little "mini gaming PCs" you can buy on Amazon for around $400, and I have three 8-bay USB hard drive enclosures, each filled with 16TB drives, all with ZFS. I lose six drives to the RAID, so total storage is about 288TB, but even though it's USB it's actually pretty fast; fast enough for what I need it for anyway, which is to watch videos off Jellyfin or host a Minecraft server.
I am not 100% sure who TrueNAS is really for, at least in the "install it yourself" sense; if you know enough about how to install something like TrueNAS, you probably don't really need it...
I was like this when I was in the "I love to spend a lot of time mucking about with my server and want to squeeze everything out of it that I can" phase.
In the last few years I've transitioned to "My family just wants Plex to work and I could give a shit about the details". I think I'm more of the target audience now. When I had my non-TrueNAS ZFS setup I just didn't pay a lot of attention, and when something broke it was like re-learning the whole system over again.
My way of dealing with this is to ensure everything is provisioned and managed via GitOps. I have a homelab repo with a combination of Ansible, Terraform (OpenTofu), and FluxCD. I don't have to remember how to do anything manually, except for provisioning a new bare metal machine (I have a README and a couple of scripts for that).
I accidentally gave myself the opportunity to test out my automations when I decided I wanted to rename my k8s nodes (FQDN rather than just hostname). When I did that, everything broke, and I decided it would be easier to simply re-provision than to troubleshoot. I was up and running with completely rebuilt nodes in around an hour.
I can set up Samba, NFS, and ZFS manually myself, but why would I want to? Configuring Samba users & shares via SSH sucks. It's tedious. It's error prone. It's boring.
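For anyone who hasn't done it, the manual flow being dismissed here looks roughly like this (the share name, path, and username are made-up examples, and details vary by distro):

```shell
# Hedged sketch of manually adding a Samba user and share over SSH.
# Share name, path, and username are examples, not a real setup.

# Create a system user with no login shell for the share
sudo useradd -M -s /usr/sbin/nologin media
sudo smbpasswd -a media     # prompts interactively for the SMB password

# Append a share definition to smb.conf
cat <<'EOF' | sudo tee -a /etc/samba/smb.conf
[media]
   path = /tank/media
   valid users = media
   read only = no
EOF

# Validate the config, then reload
testparm -s
sudo systemctl restart smbd
```

Multiply that by every user and share, plus permissions on the underlying directories, and "tedious and error prone" is about right.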
Similarly, while docker's CLI interface is relatively nice, it's even nicer to just take my phone, open a browser, and push "update" or "restart" in a little GUI to quickly get things back up & going. Or to add new services. Or whatever else I want. Sure, I could SSH in from my phone, but that's awful. I could go get a laptop whenever I need to do something, but if Jellyfin or Plex or whatever is cranky and I'm already sitting on the couch, I don't want to have to get up and go find a laptop. I want to just hit "restart service" without moving.
And that's the point of things like TrueNAS or Unraid or whatever. It makes things nicer to use from more interfaces in more places.
Yeah; once you get deep enough, you realize you can just install things yourself, configure them with Ansible, or Nix, or whatever, and have full control.
But for probably 90% of users, they just want a UI they can click through and mostly use the defaults, then log into now and then to make sure things are good.
The UI is also especially helpful for focusing on things that matter, like setting up scrubs, notifications, etc. (though even there I think TrueNAS could do better).
It's why Synology persists, despite growing more and more hostile to their NAS owners.
> I can set up Samba, NFS, and ZFS manually myself, but why would I want to? Configuring Samba users & shares via SSH sucks. It's tedious. It's error prone. It's boring.
I agree for the most part, though even vanilla Ubuntu has Cockpit if you need a GUI.
Personally I find that getting it set up with NixOS is pretty straightforward and it's "set and forget", and generally it's not too hard to find configurations done "correctly" that you can just copy-paste. And of course, if you break something you can just reboot and choose a previous generation. Of course, restarting a service still requires SSHing in and `systemctl restart myapp`, so YMMV.
I want to configure it myself because now I know exactly how it works. The configuration options I’ve chosen won’t change unless I change them. Disaster recovery will be easy because when I move the disks to a new machine, LVM will just start working.
> fast enough for what I need to for anyway, which is to watch videos off Jellyfin or host a Minecraft server.
4k Blu-ray rips peak at over 100 Mbps, but usually average around 80 Mbps. I don't know how much disk I/O a Minecraft server does ... I wouldn't think it would do all that much. USB2 (high-speed) bandwidth should be plenty for that; although filling the array and scrubbing/resilvering would be painful.
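As a back-of-envelope check (assuming roughly 280 Mbps of practical USB2 throughput after protocol overhead, which is an estimate, not a measured number):

```shell
# Back-of-envelope: how many peak-rate 4K Blu-ray streams fit in USB2?
# 280 Mbps practical throughput is an assumption; the signaling rate is
# 480 Mbps, but protocol overhead eats a large chunk of it.
usb2_practical_mbps=280
bluray_peak_mbps=100
streams=$((usb2_practical_mbps / bluray_peak_mbps))
echo "peak-rate 4K streams over USB2: ${streams}"
```

So even worst-case USB2 handles a couple of simultaneous peak-rate streams; it's the bulk operations (initial fill, scrub, resilver) that would crawl.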
Even though I have over four hundred Blu-rays, I would of course NEVER condone breaking the DRM and putting them on Jellyfin no matter how easy it is or how stupid I think that law is because that would be a crime according to the DMCA and I'm a good boy who would never ever break the law.
That said, I have lots of home movies that just so happen to be at the exact same bitrates as Blu-rays and after the initial setup, I've never really had any issues with them choking or any bandwidth weirdness. Minecraft doesn't use a ton of disk IO, especially since it is rare that anyone plays on my server other than me.
I do occasionally do stuff that requires decent bandwidth though, enough to saturate a WiFi connection at the very least, and the USB3 + USB SS + Thunderbolt setup never seems to have much of an issue getting to WiFi speeds.
What do you mean by USB hard drive enclosures? Are you limiting the RAID (8 bay) throughput by a single USB line?! That's like towing a Ferrari with a bicycle.
I have one enclosure plugged into a USB 3.0 line, another plugged into a "super speed" line, and one plugged into a Thunderbolt line (shared with my 10GbE Thunderbolt card with a 2x20 gigabit splitter).
This was deliberate, each is actually on a separate USB controller. Assuming I'm bottlenecked by the slowest, I'm limited to 5 gigabits per RAID, but for spinners that's really not that bad.
ETA: It's just a soft RAID with ZFS, I set up each 8-bay enclosure with its own RAIDZ2, and then glued all three together into one big pool mounted at `/tank`. I had to do a bit of systemd chicanery with NixOS to make sure it mounted after the USB stuff started, but that was like ten lines of config I do exactly once, so it wasn't a big deal.
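A sketch of that layout — three 8-disk RAIDZ2 vdevs glued into one pool. Device names here are placeholders; a real setup should use stable `/dev/disk/by-id/` paths, especially over USB where enumeration order can change:

```shell
# Sketch of the pool described above: one pool, three RAIDZ2 vdevs,
# each vdev backed by one 8-bay enclosure. Device names are placeholders.
zpool create tank \
  raidz2 enc1d1 enc1d2 enc1d3 enc1d4 enc1d5 enc1d6 enc1d7 enc1d8 \
  raidz2 enc2d1 enc2d2 enc2d3 enc2d4 enc2d5 enc2d6 enc2d7 enc2d8 \
  raidz2 enc3d1 enc3d2 enc3d3 enc3d4 enc3d5 enc3d6 enc3d7 enc3d8

zfs set mountpoint=/tank tank
zpool status tank   # should show three raidz2 vdevs
```

ZFS stripes writes across the three vdevs, and each vdev independently tolerates two disk failures — which is where the "lose six drives to the RAID" arithmetic comes from.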
>I liked FreeNAS for awhile, but after a certain point I kind of just learned how to properly use Samba and NFS and ZFS, and after that I kind of felt like it was just getting in the way.
I've been a mostly happy TrueNAS user for about four years, but I'm starting to feel this way.
I recently wrote about expanding my 4-disk raidz1 pool to a 6-disk raidz2 pool.[1] I did everything using the ZFS command-line tools because what I wanted wasn't possible through the TrueNAS UI.
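Since ZFS can't convert a raidz1 vdev to raidz2 in place, the usual CLI path is to build the new pool and replicate onto it with send/receive. This is a hedged sketch of that general approach, not necessarily the linked post's exact steps; pool and device names are placeholders:

```shell
# General raidz1 -> raidz2 migration sketch (names are placeholders).
# Build the new 6-disk raidz2 pool on the new drives.
zpool create newtank raidz2 d1 d2 d3 d4 d5 d6

# Take a recursive snapshot of everything on the old pool.
zfs snapshot -r tank@migrate

# Replicate all datasets, snapshots, and properties to the new pool.
zfs send -R tank@migrate | zfs receive -F newtank

# After verifying the copy, retire the old pool and take over its name.
zpool destroy tank
zpool export newtank
zpool import newtank tank
```

It's exactly this kind of sequence that the TrueNAS middleware apparently wants to own, which is what makes the "unsupported" response below so frustrating.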
A developer from iXsystems (the company that maintains TrueNAS) read my post and told me that creating a ZFS pool from the zfs command-line utility is not supported, and so I may hit bugs when I use the pool in TrueNAS.
I was really surprised that TrueNAS can't just accept whatever the state of the ZFS pool is. It feels like an overreach that TrueNAS expects to manage all ZFS interactions.
I'm converting more of my infrastructure to NixOS, and I know a lot of people just manage their NAS with NixOS, which is sounding more and more appealing to me.
This has been a project of tech hoarding over the last ~8 years, but basically I wanted "infinite storage". I wanted to be able to do pretty much any project and know that no matter how crazy I am, I'll have enough space for it. Thus far, even with all my media and AI models and stock data and whatnot, I'm sitting around ~45TB.
On the off chance that I do start running low on space, there's plenty of stuff I can delete if I need to, but of course I probably won't need to for the foreseeable future.
> I'm converting more of my infrastructure to NixOS, and I know a lot of people just manage their NAS with NixOS, which is sounding more and more appealing to me.
Yeah, that's what I do. I have NixOS run Samba on my RAID, and it works fine. It was like fifteen lines of config, version controlled, and I haven't thought about it in months.
TrueNAS is just web-based configuration management. As long as you only use the web UI, your system state can be distilled down to the config file it generates.
If you do a vanilla FreeBSD+samba+NFS+ZFS setup, you'll need to edit several files around the file system, which are easy to forget months down the line in case of adjustment or disaster recovery.
I'm starting a rebuild of my now ancient home server, which has been running Windows with WSL2 Docker containers.
At first, I thought I might just go with TrueNAS. It can manage my containers and my storage. But it's got proprietary bits, and I don't necessarily want to be locked into their way of managing containers.
Then my plan was to run Proxmox with a TrueNAS VM managing a ZFS raidz volume, so I could use whatever I want for container management (I'm going with Podman).
But the more I've researched and planned out this migration, the more I realize that it's pretty easy to do all the stuff I want from TrueNAS, by myself. Setting up ZFS scrubbing and SMART checks, and email alerts when something fishy happens, is pretty easy.
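A minimal sketch of the DIY version of scrubs, SMART checks, and email alerts — addresses and device patterns are examples, and distro packaging differs (many distros already ship a periodic scrub job):

```shell
# Minimal DIY ZFS/disk monitoring, run as root. Addresses are examples.

# 1. Monthly scrub via cron
echo '0 3 1 * * root /usr/sbin/zpool scrub tank' > /etc/cron.d/zfs-scrub

# 2. SMART monitoring with email on failure (smartmontools).
#    -s schedules short/long self-tests; -m sets the alert address.
cat > /etc/smartd.conf <<'EOF'
DEVICESCAN -a -o on -S on -s (S/../.././02|L/../../6/03) -m admin@example.com
EOF
systemctl restart smartd

# 3. Have the ZFS event daemon email on pool errors
sed -i 's/^#ZED_EMAIL_ADDR=.*/ZED_EMAIL_ADDR="admin@example.com"/' \
  /etc/zfs/zed.d/zed.rc
systemctl restart zfs-zed
```

You also need a working local mailer (msmtp, postfix, etc.) for the alerts to actually go anywhere, which is honestly the fiddliest part.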
I'm beginning to really understand the UNIX "do one thing and do it well" philosophy.
Yeah, same. Almost all of the NAS packages sacrifice something - they're great places to start, but just getting Samba going with Ubuntu is easy enough.
I have a similar setup (a Dell Wyse 5070 connected to an 8-bay enclosure), though I do not use RAID; I just have some simple rsync scripts between a few of the drives. I collect old 1-2TB hard drives as cold storage and leave them on a bookshelf. The rsync scripts only run once a week for the non-critical stuff.
Not to jinx it, but I have never had a hard drive failure since around 2008!
I think that's a good idea. It's a more manual setup but also more space efficient (if you skip backing up e.g. Linux ISOs) and causes less wear than RAID.
Do you think the power consumption matters on your box here? Should you care about the "USB bottleneck"? How do you organize this thing so it's not a mess of USB cables? I kinda wanna make it look aesthetically nice compared to something like a proper NAS box.
> Do you think the power consumption matters on your box here?
It's actually not too bad; the main "server" idles at around 14W and its power supply only goes to 100W under load. The drive bays go up to 100W (I think) but generally idle around 20W each. All together it idles at around 70-80W.
Not that impressive BUT it replaced a big rack mount server that idled at about 250W and would go up to a kilowatt under load.
> Should you care about the "USB bottleneck"?
Not really, at least not for what I'm doing. I generally can get pretty decent speeds and I think network is often the bottleneck more than the drives themselves.
> How do you organize this thing so it's not a mess of USB cables?
I don't :). It's a big mess of USB cables that's hidden in a closet. It doesn't look pretty at all.
The difference between what you've built and TrueNAS may well only become evident if your ZFS becomes corrupted in the future. That isn't to say YOU won't be able to fix it in the future, but I wouldn't assume that the average TrueNAS user could.
i have 4x 4TB drives that are in my dead QNAP NAS.
i've wanted to get a NAS running again, but while the QNAP form factor is great, the QNAP OS was overkill – difficult to manage (too many knobs and whistles) – and ultimately not reliable.
so, i'm at a junction: 1) no NAS (current state), 2) custom NAS (form factor dominates this discussion – i don't want a gaming tower), or 3) back to an off-the-shelf brand (poor experience previously).
maybe the ideal would be a Mac Mini that i could plug 4 HDDs into, but that setup would be cost-inefficient. so, it's probably a custom build w/ NixOS or an off-the-shelf, but i'm lacking the motivation to get back into the game.
I tried a Mac Mini for a while, but they're just not designed to run headless and I ultimately abandoned it because I wanted it to either work or be fixable remotely. Issues I had:
- External enclosure disconnected frequently (this is more of an issue with the enclosure and its chipset, but I bought a reputable one)
- Many services can't start without being logged in
- If you want to use FileVault, you'll have to input your password when you reboot
Little things went wrong too frequently that needed an attended fix.
If you go off the shelf, I recommend Synology, but make sure you get an Intel model with QSV if you plan to transcode video. You can also install Synology's OS on your own hardware using Xpenology - it's surprisingly stable, more so than the Mac mini was for me.
I do recommend those little Beelink computers with an AMD CPU.
They can be had for a bit less than the Mac mini, and I had no issues getting headless Linux working on there. I even have hardware transcoding in Jellyfin working with VAAPI. I think it cost me about $400.
I was in the same boat - QNAP OS was a complete mess. Ended up nuking it and throwing Ubuntu on there instead. Nothing fancy, just basic config, but it actually works now. Other option is pay Unraid.
What are your requirements for a NAS? What do you want to use it for? Is it just to experiment with what a NAS is, or do you have a specific need like multiple computers needing common, fast storage?
I use a QNAP 8-bay JBOD enclosure (TL-D800S) connected via SFF-8088 to a mini-itx PC - I find the form factor pretty good and don't have to deal with QNAP OS.
TrueNAS on a good bit of hardware - in my case the latest truegreen NAS is fantastic. You build it, it runs, it's bulletproof. Putting Jellyfin and/or plex on top of it is fantastic.
I do both. The primary server runs Proxmox and I have a physical TrueNAS box as backup server, so I have to do it by hand on Proxmox.
“Have to”, since I no longer suggest virtualizing TrueNAS even with PCI passthrough. I will say the same about ZFS-over-USB, but you do you. I’ve had too many bad experiences with both (for those not in the weeds here: both are officially very much not supported or recommended, but they _do_ work).
I really like the TrueNAS value prop - it makes something I’m clearly capable of by hand much easier and less tedious. I back up both my primary ZFS tank as well as my PBS storage to it, plus cold backups. It does scheduling, alerts, configuration, and shares, and nothing else. I never got the weird K8s mini cluster they ship - seems like a weird thing that clashes with the core philosophy of just offering a NAS OS.
Shouldn't it be more of a "why" to install TrueNAS on a RPi?
The only reason I can see is "I have one that I don't use". Because otherwise...
Idle power isn't all that much better than a low power Intel N100 or something similar. And it's all downhill from there. Network transfer speeds and disk transfers will all be kneecapped by the (lack of) available PCIe lanes. Available RAM or CPU speeds are even worse...
That's addressed in the second section of the article:
> I've found numerous times, running modern applications on slower hardware is an excellent way to expose little configuration flaws and misconceptions that lead to learning how to run the applications much better on more capable machines.
It's less about the why, and more about the 'why not?' :)
I explicitly don't recommend running TrueNAS on a Pi currently, at the end (though I don't see a problem with anyone doing it for the fun, or if they need an absolutely tiny build and want to try Arm):
> Because of the current UEFI limitations, I would still recommend running TrueNAS on higher-end Arm hardware (like Ampere servers).
On a somewhat related note, would you trust a Pi-based NAS long term? I've not tried building one since the Pi 4, which, understandably, left a lot to be desired because of its hardware limitations. But that aside, I still found the Pi as a piece of hardware somewhat quirky and unpredictable - power especially; I can't count the number of times simply unplugging a USB keyboard would cause it to reboot.
I have actually made a Raspberry Pi based NAS and found it was a pain.
The SATA controller isn't terrible, but it and other parts of the hardware have had enough strange behaviors over the years that I've needed to compile the kernel just to fiddle with settings and get a device doing what it's supposed to.
Even with a well-supported power supply, you eventually seem to hit internal limits and get problems. That's when you see people underclocking the chip to move some of this phantom power budget to other chips. Likewise, you have to power most everything from a separate source, which pushes me even closer to a "regular PC" anyhow.
I just grab an old PC from Facebook for under $100. The current one is a leftover from the DDR3 + Nvidia 1060 gaming era. It's a quad core with HT so I get 8 threads. Granted most of those threads cause the system to go into 90% usage even when running jobs with only 2 threads, probably because the real hardware being used there is something like AVX and it can't be shared between all of the cores at the same time.
The SATA controller has been a bit flaky, but you can pick up 4-port SATA cards for about $10 each.
When my Raspberry Pi fails I need to start looking at configurations and hacks to get the firmware/software stack to work.
When my $100 random PC fails I look at the logs to find out what hardware component failed and replace it.
> The SATA controller has been a bit flaky, but you can pick up 4-port SATA cards for about $10 each.
If your build allows it, the extra money for an LSI or real RAID controller is well worth it. The no-name PCIe SATA cards are flaky and very slow. Putting an LSI in my NAS was a literal 10x performance boost, particularly with ZFS, which tends to have all of the drives active at once.
I am curious about the slow part. I use these crappy SATA cards and I am sure they are crappy, but the drives are only going to give 100MB/s in bursts and they have an LVM cache (or ZFS stuff) on them to sustain more short-term writes.
I'd get it if I were wiring up NVMe drives that are going to do 500MB/s and higher all the time.
What I really care about with the SATA cards, and what I mean by flaky, is when I have to physically reboot a system every day because the controller stays on in some way even after a soft `reboot` command, and then Linux fills up with IO timeouts because the controller seems to stop working after X amount of time.
> If your build allows the extra money for an LSI or real raid controller is well worth it.
Keep in mind if you get a real RAID controller and want to use ZFS, you probably want to ensure it can present the disks to the OS as JBOD. Sometimes this requires flashing a non-RAID firmware.
That said, slightly older LSI cards are quite cheap on eBay, and the two I've bought have worked perfectly for many years.
> probably because the real hardware being used there is something like AVX and it can't be shared between all of the cores at the same time.
That's not the right explanation; each physical core has its own vector ALUs for handling SSE and AVX instructions. The chip's power budget is shared between cores, but not the physical transistors doing the vector operations.
I don't know about TrueNAS, but with Proxmox the two random $10 SATA cards I tried only gave me issues. With the first one the OS wouldn't boot; the second seemed to work fine, but connected drives disappeared as soon as I wrote to them.
Used server-grade LSI cards seem to be the way to go. Too bad they're power hungry based on what I've read.
I do too. For many use cases it's awesome to use an ESP32, Raspberry Pi or Arduino when I want to stash some little widget that can sip a small battery over the next week. It's equally awesome that in many scenarios you can be net positive in your consumption with a super simple solar panel that provides a few watts here and there into a battery.
But at home things are different. While I want to use as little power as possible, the realistic plan for being sustainable at home is to use solar with batteries. That's a plan I actually think can matter and that I am able to participate in for relatively low cost ($10k)
Messing around with a system to save a few watts for me in this context isn't very valuable.
Yeah this is what keeps me from considering old PCs for NAS.
Maybe I'm stating the blindingly obvious, but there seems to be a gap in the market for a board or full kit with a high-efficiency ~1-10W CPU and a bunch of SATA and PCIe ports.
Probably not very good. I selected large spinning hard drives because I could get them at a good price for 2TB each, and I wanted to set up a RAID5-like system in ZFS and btrfs (lesson learned: btrfs doesn't actually support this correctly) and get at least 10TB with redundancy.
I don't know how much power each of those SATA disks draws, but probably more than a whole Raspberry Pi does.
Likewise it has a few case fans in it that may be pointless. I would prefer it never has a heating issue versus saving a few "whurrr" sounds off in a closet somewhere that nobody cares about.
It's also powering that Nvidia 1060 that I do almost nothing with on the NAS. I don't even bother to enable the Jellyfin GPU transcoding configuration because I prefer to keep my library encoded in h264 right now anyhow as I have not yet made the leap to a newer codec because the different smart TVs have varying support. And sometimes my daughter has a friend come over that has a weird Amazon tablet thing that only does a subset of things correctly.
The 1060 isn't an amazing card really, but it could do some basic Ollama inference if I wanted. I think it has 6GB of memory, which is pretty low, but usable for some small LLMs.
On the one hand it is good to discover that someone is tackling getting TianoCore working on the Raspberry Pi 5.
On the other hand, they still have the destructive backspace behaviour, and inefficient recursive implementation, that breaks the boot loader spinners that the NetBSD and other boot loaders display. It's a tiny thing, but if one is used to the boot sequence the absence of a spinner makes the experience ever so slightly jarring.
I should add, by the way, that this nicely demonstrates M. Geerling's point here about catching bugs by running things on a Pi.
TianoCore's unnecessarily recursive implementation of a destructive BS is slow enough on a Pi 4, in combination with how the boot loaders themselves emitted their spinners, that I could just, very occasionally, see parts of spinner characters flashing very briefly on the screen when the frame refresh timing was just right; which led me to look into what was going on.
Wasn't there a post somewhere on HN yesterday about how slowing down your programs can help you catch problems? Using low end hardware is an automatic way of forcing that :)
I have been using a Raspberry Pi 4 (8 GB RAM) as my NAS for nearly 5 years. It is incredibly reliable. I run the following software on it: Ubuntu 64-bit, Samba, Jenkins, Postgres and MariaDB. I have attached external hard drives through a USB hub (because Pi does not necessarily have enough power for the external hard drive). I git push to public Samba folders on the Pi, and trigger Jenkins, which builds and installs my server using docker in the Pi.
It was a fun project and looked cool but never really worked that well. It was quite unstable and drives seemed to disconnect and reconnect a lot. There are probably better quality connectors out there but I think for a NAS you really want proper SATA connections.
I eventually built my own box and went with OMV again. I like it because it's just userland software you install on Debian. Some of the commenters here who think TrueNAS is overkill might want to check out OMV if they haven't already.
To be honest I still only have a few TB of storage on it, probably not really enough to be worth all the hassle of building and configuring a PC, but it was more about the journey which was fun.
This is fun for learning purposes, but even with the PCIe 3 bus the Pi just isn't that great a server when compared to an Intel N-series machine.
I have two "normal" NAS devices, but I would like to find a compact N100 board with multiple SATA ports, (like some of the stackable HATs for the Pi, some of which will take 4 disks directly on the PCB) to put some older discarded drives to good use.
My go-to solution software-wise is actually to install Proxmox, set up ZFS on it and then drop in a lightweight LXC that exposes the local filesystem via SMB, because I like to tweak the "recycle bin" option and some Mac-specific flags--I've been using that setup for a while, also off Proxmox: https://taoofmac.com/space/notes/2024/11/09/1940#setting-up-...
There are tons of mini-ITX N100 boards with onboard 2.5G or 10G Ethernet and 6-8 SATA ports available for a few hundred bucks (on Amazon, for example) - search “n100 mini itx nas”.
There are also some decent 6 and 8 bay mini-itx cases.
I went looking for completed systems recently and couldn’t find any integrators that make them. Surprised nobody will take a few hundred bucks to plug in all the motherboard power/reset pin headers for me.
The article states "I currently run an Ampere Arm server in my rack with Linux and ZFS as my primary storage server" and this is just explaining how to try it out on the Pi, which I found surprisingly interesting. I am glad people like the N100s and wish they would find more relevant articles to talk about them.
The warning about death of three SSDs doesn't inspire too much confidence to be honest... do you think it was due to the usage patterns of Proxmox default settings for ZFS?
Over time, reading lots of random information sources, I collected notes about disabling several settings that are cool for datacenter levels of storage but useless for the kind of "Raspberry Pi tied to a couple of USB disks" setup I was interested in.
The Odroid H4+ might be what you're looking for. It's a N97 SBC from a South Korean manufacturer that's been around a while. The "+" variant has 4 SATA ports. With an adapter board, 2-4 NVMe drives can be attached as well.
QNAP TS-435XeU is a $600 1U short-depth (11") case with quad hotswap SATA, dual NVME, dual 10GbE copper, dual 2.5GbE, 4-32GB DDR4 SODIMM Arm NAS that would benefit from OSS community attention. Includes hardware support for ZFS encryption.
Based on a Marvell/Armada CN9130 SoC which supports ECC, it has mainline Linux support, and public-but-non-upstream code for uboot. With local serial console and a bit of effort, the QNAP OS can be replaced by Arm Debian/Devuan with ZFS.
Rare combo of low power, small size, fast network, ECC memory and upstream-friendly Linux. QNAP also sell a 10GbE router based on the same SoC, which is a successor to the Armada 388 in Helios4 NAS (RIP), https://kobol.io/helios4/
No UEFI support, so TrueNAS for Arm won't work out of the box.
If you can build your own U-Boot you should be able to enable UEFI, no? It’s not as full-featured as EDK2 etc., but it works to boot Linux and I think the BSDs.
In my experience, a vanilla install and some daemons sprinkled on top works better than these GUI flavours.
Less breakage, fewer quirks, more secure.
YMMV and I’m not saying you’re wrong - just my experience
A few big shares is all I really need; I no longer create a share for every single idea/thing I can think of.
?!?
How do you fill 288 TB? Is it mostly media?
>I liked FreeNAS for awhile, but after a certain point I kind of just learned how to properly use Samba and NFS and ZFS, and after that I kind of felt like it was just getting in the way.
I've been a mostly happy TrueNAS user for about four years, but I'm starting to feel this way.
I recently wrote about expanding my 4-disk raidz1 pool to a 6-disk raidz2 pool.[1] I did everything using ZFS command-line tools because what I wanted wasn't possible through the TrueNAS UI.
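For context, with plain ZFS tooling that kind of migration is roughly the following. Pool and disk names here are placeholders, not my actual layout; see the post for the real details:

```
# Rough shape of a raidz1 -> raidz2 migration with stock ZFS tools.
# Pool and disk names are placeholders.
zpool create tank2 raidz2 sdb sdc sdd sde sdf sdg   # new 6-disk raidz2 pool
zfs snapshot -r tank@migrate                        # snapshot the old pool
zfs send -R tank@migrate | zfs recv -F tank2        # replicate datasets + properties
zpool destroy tank                                  # retire the old pool
```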
A developer from iXsystems (the company that maintains TrueNAS) read my post and told me that creating a ZFS pool from the zfs command-line utility is not supported, and so I may hit bugs when I use the pool in TrueNAS.
I was really surprised that TrueNAS can't just accept whatever the state of the ZFS pool is. It feels like an overreach that TrueNAS expects to manage all ZFS interactions.
I'm converting more of my infrastructure to NixOS, and I know a lot of people just manage their NAS with NixOS, which is sounding more and more appealing to me.
[1] https://mtlynch.io/raidz1-to-raidz2/
[2] https://www.reddit.com/r/truenas/comments/1m7b5e0/migrating_...
I kind of purposefully don't fill it up :).
This has been a project of tech hoarding over the last ~8 years, but basically I wanted "infinite storage". I wanted to be able to do pretty much any project and know that no matter how crazy I am, I'll have enough space for it. Thus far, even with all my media and AI models and stock data and whatnot, I'm sitting around ~45TB.
On the off chance that I do start running low on space, there's plenty of stuff I can delete if I need to, but of course I probably won't need to for the foreseeable future.
> I'm converting more of my infrastructure to NixOS, and I know a lot of people just manage their NAS with NixOS, which is sounding more and more appealing to me.
Yeah, that's what I do, I have my NixOS run a Samba on my RAID, and it works fine. It was like fifteen lines of config, version controlled, and I haven't thought about it in months.
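For a sense of scale, the config is a sketch like this; the share name and path are made up, and option names shift a bit between NixOS releases:

```nix
# Minimal Samba-on-NixOS sketch; "tank" and the path are placeholders.
{
  services.samba = {
    enable = true;
    openFirewall = true;
    shares.tank = {
      path = "/tank";
      browseable = "yes";
      "read only" = "no";
    };
  };
}
```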
If you do a vanilla FreeBSD + Samba + NFS + ZFS setup, you'll need to edit several files scattered around the filesystem, which are easy to forget months down the line when you need to adjust something or do disaster recovery.
At first, I thought I might just go with TrueNAS. It can manage my containers and my storage. But it's got proprietary bits, and I don't necessarily want to be locked into their way of managing containers.
Then my plan was to run Proxmox with a TrueNAS VM managing a ZFS raidz volume, so I could use whatever I want for container management (I'm going with Podman).
But the more I've researched and planned out this migration, the more I realize that it's pretty easy to do all the stuff I want from TrueNAS, by myself. Setting up ZFS scrubbing and SMART checks, and email alerts when something fishy happens, is pretty easy.
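Concretely, the DIY version can be as little as a cron line plus a smartd directive. The pool name, schedule, and address below are placeholders:

```
# /etc/cron.d/zfs-scrub: scrub the pool monthly (pool name is a placeholder)
0 3 1 * * root /usr/sbin/zpool scrub tank

# /etc/smartd.conf: short self-test nightly, long test weekly, mail on trouble
DEVICESCAN -a -o on -S on -s (S/../.././02|L/../../6/03) -m admin@example.com
```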
I'm beginning to really understand the UNIX "do one thing and do it well" philosophy.
Now I just run Ubuntu/Samba and use KVM and docker for anything that doesn't need access to the underlying hardware.
Not to jinx it, but I haven't had a hard drive failure since around 2008!
It's actually not too bad; the main "server" idles at around 14W and the power supply for it only goes to 100W under load. The drive bays go up to 100W (I think) but generally idle around 20W each. All together it idles at around ~70-80W.
Not that impressive BUT it replaced a big rack mount server that idled at about 250W and would go up to a kilowatt under load.
> Should you care about the "USB bottleneck"?
Not really, at least not for what I'm doing. I generally can get pretty decent speeds and I think network is often the bottleneck more than the drives themselves.
> How do you organize this thing so it's not a mess of USB cables?
I don't :). It's a big mess of USB cables that's hidden in a closet. It doesn't look pretty at all.
i've wanted to get a NAS running again, but while the QNAP form factor is great, the QNAP OS was overkill – difficult to manage (too many knobs and whistles) – and ultimately not reliable.
so, i'm at a junction: 1) no NAS (current state), 2) custom NAS (form factor dominates this discussion – i don't want a gaming tower), or 3) back to an off-the-shelf brand (poor experience previously).
maybe the ideal would be a Mac Mini that i could plug 4 HDDs into, but that setup would be cost-inefficient. so, it's probably a custom build w/ NixOS or an off-the-shelf, but i'm lacking the motivation to get back into the game.
- External enclosure disconnected frequently (this is more of an issue with the enclosure and its chipset, but I bought a reputable one)
- Many services can't start without being logged in
- If you want to use FileVault, you'll have to input your password when you reboot
Too many little things went wrong that needed an attended fix.
If you go off the shelf, I recommend Synology, but make sure you get an Intel model with QSV if you plan to transcode video. You can also install Synology's OS on your own hardware using Xpenology - it's surprisingly stable, more so than the Mac mini was for me.
They can be had for a bit less than the Mac mini, and I had no issues getting headless Linux working on there. I even have hardware transcoding in Jellyfin working with VAAPI. I think it cost me about $400.
ZFS on macOS sucks really bad, too, so that rules out the obvious alternative.
“Have to”, since I no longer suggest virtualizing TrueNAS even with PCI passthrough. I'll say the same about ZFS-over-USB, but you do you. I've had too many bad experiences with both (for those not in the weeds here: both are officially very much not supported or recommended, but they _do_ work).
I really like the TrueNAS value prop - it makes something I'm clearly capable of doing by hand much easier and less tedious. I back up both my primary zfs tank and my PBS storage to it, plus cold backups. It does scheduling, alerts, configuration, and shares, and nothing else. I never got the weird K8s mini cluster they ship - it seems like a strange thing that clashes with the core philosophy of just offering a NAS OS.
It actually has been considerably more reliable than the MediaSonics that they replaced.
The cost is very low, but it would have to be an NVMe-only build.
The only reason I can see is "I have one that I don't use". Because otherwise...
Idle power isn't all that much better than a low power Intel N100 or something similar. And it's all downhill from there. Network transfer speeds and disk transfers will all be kneecapped by the (lack of) available PCIe lanes. Available RAM or CPU speeds are even worse...
> I've found numerous times, running modern applications on slower hardware is an excellent way to expose little configuration flaws and misconceptions that lead to learning how to run the applications much better on more capable machines.
It's less about the why, and more about the 'why not?' :)
I explicitly don't recommend running TrueNAS on a Pi currently, at the end (though I don't see a problem with anyone doing it for the fun, or if they need an absolutely tiny build and want to try Arm):
> Because of the current UEFI limitations, I would still recommend running TrueNAS on higher-end Arm hardware (like Ampere servers).
imo, if the software doesn't work without issue on my Pi, it isn't good enough for prod.
The SATA controller isn't terrible, but it and other hardware areas have had many strange behaviors over the years, to the point that compiling the kernel was needed to fiddle with some settings to get a hardware device to do what it's supposed to.
Even if you're using a power supply that is well supported, eventually you seem to hit internal limits and get problems. That's when you see people underclocking the chip to move some of this phantom power budget to other chips. Likewise, you have to power almost everything from a separate source, which pushes me even closer to a "regular PC" anyhow.
I just grab an old PC from Facebook for under $100. The current one is a leftover from the DDR3 + Nvidia 1060 gaming era. It's a quad core with HT so I get 8 threads. Granted, most of those threads push the system to 90% usage even when running jobs with only 2 threads, probably because the real hardware being used is something like the AVX units, which can't be shared between all of the cores at the same time.
The SATA controller has been a bit flaky, but you can pick up 4-port SATA cards for about $10 each.
When my Raspberry Pi fails I need to start looking at configurations and hacks to get the firmware/software stack to work.
When my $100 random PC fails I look at the logs to find out what hardware component failed and replace it.
If your build allows, the extra money for an LSI or real RAID controller is well worth it. The no-name PCIe SATA cards are flaky and very slow. Putting an LSI in my NAS was a literal 10x performance boost, particularly with ZFS, which tends to have all of the drives active at once.
I'd get it if I were wiring up NVMe drives that are going to do 500MB/s and higher all the time.
What I really care about with SATA, and what I mean by flaky, is when I have to physically reboot a system every day because the controller stays on in some way even after a soft `reboot` command, and then Linux fills up with IO timeouts because the controller seems to stop working after X amount of time.
Keep in mind if you get a real RAID controller and want to use ZFS, you probably want to ensure it can present the disks to the OS as JBOD. Sometimes this requires flashing a non-RAID firmware.
That said, slightly older LSI cards are quite cheap on eBay, and the two I've bought have worked perfectly for many years.
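The crossflash itself is usually a couple of commands from an EFI shell or DOS boot disk. This is a hedged sketch for a 9211-8i-style card; the firmware filenames are examples and vary by card:

```
# Sketch of flashing an LSI card to IT (JBOD) firmware; filenames vary by card.
sas2flash -listall                          # confirm the card is visible
sas2flash -o -e 6                           # erase the existing flash (careful!)
sas2flash -o -f 2118it.bin -b mptsas2.rom   # write IT firmware + boot ROM
```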
That's not the right explanation; each physical core has its own vector ALUs for handling SSE and AVX instructions. The chip's power budget is shared between cores, but not the physical transistors doing the vector operations.
Used server-grade LSI cards seem to be the way to go. Too bad they're power hungry based on what I've read.
But at home things are different. While I want to use as little power as possible, the realistic plan for being sustainable at home is to use solar with batteries. That's a plan I actually think can matter and that I am able to participate in for relatively low cost ($10k)
Messing around with a system to save a few watts for me in this context isn't very valuable.
Maybe stating the blindingly obvious but seems like there is a gap in the market for a board or full kit with a high efficiency ~1-10W CPU and a bunch of SATA and PCIe ports.
I don't know how much power each of those SATA disks draws, but probably more than a single Raspberry Pi does.
Likewise it has a few case fans in it that may be pointless. I would prefer it never has a heating issue versus saving a few "whurrr" sounds off in a closet somewhere that nobody cares about.
It's also powering that Nvidia 1060 that I do almost nothing with on the NAS. I don't even bother to enable the Jellyfin GPU transcoding configuration, because I prefer to keep my library encoded in h264 for now. I haven't made the leap to a newer codec since the different smart TVs have varying support, and sometimes my daughter has a friend come over with a weird Amazon tablet thing that only does a subset of things correctly.
The 1060 isn't an amazing card really, but it could do some basic Ollama inference if I wanted. I think it has 6GB of memory, which is pretty low, but usable for some small LLMs.
On the other hand, they still have the destructive backspace behaviour and the inefficient recursive implementation that break the boot loader spinners that the NetBSD and other boot loaders display. It's a tiny thing, but if one is used to the boot sequence, the absence of a spinner makes the experience ever so slightly jarring.
* https://github.com/NumberOneGit/edk2/blob/master/MdeModulePk...
* https://github.com/tianocore/edk2/blob/master/MdeModulePkg/U...
* https://tty0.social/@JdeBP/114658278210981731
* https://tty0.social/@JdeBP/114659884938990579
TianoCore's unnecessarily recursive implementation of a destructive BS is slow enough on a Pi 4, in combination with how the boot loaders themselves emitted their spinners, that I could just very occasionally see parts of spinner characters flashing very briefly on the screen when the frame refresh timing was just right; which led me to look into what was going on.
It was a fun project and looked cool but never really worked that well. It was quite unstable and drives seemed to disconnect and reconnect a lot. There are probably better quality connectors out there but I think for a NAS you really want proper SATA connections.
I eventually built my own box and went with OMV again. I like it because it's just userland software you install on Debian. Some of the commenters here who think TrueNAS is overkill might want to check out OMV if they haven't already.
To be honest I still only have a few TB of storage on it, probably not really enough to be worth all the hassle of building and configuring a PC, but it was more about the journey which was fun.
I have two "normal" NAS devices, but I would like to find a compact N100 board with multiple SATA ports (like some of the stackable HATs for the Pi, some of which will take 4 disks directly on the PCB) to put some older discarded drives to good use.
My go-to solution software-wise is actually to install Proxmox, set up ZFS on it and then drop in a lightweight LXC that exposes the local filesystem via SMB, because I like to tweak the "recycle bin" option and some Mac-specific flags--I've been using that setup for a while, also off Proxmox: https://taoofmac.com/space/notes/2024/11/09/1940#setting-up-...
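The share section of that smb.conf looks roughly like this (share name and path are placeholders): `vfs_recycle` gives the recycle bin and `vfs_fruit` handles the Mac-specific bits:

```
# Rough smb.conf share with a recycle bin and macOS (fruit) options.
[media]
   path = /srv/media
   read only = no
   vfs objects = catia fruit streams_xattr recycle
   recycle:repository = .recycle
   recycle:keeptree = yes
   fruit:metadata = stream
   fruit:resource = xattr
```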
There are also some decent 6 and 8 bay mini-itx cases.
I went looking for complete systems recently and couldn't find any integrators that make them. Surprised nobody will take a few hundred bucks to plug in all the motherboard power/reset pin headers for me.
The mini itx boards are also interesting, but more if you're actually building the machine yourself. I'm typing this from a 16 core AMD one.
Over time, reading lots of random information sources, I collected notes about disabling several settings that are cool for datacenter-levels of storage but useless for the kind of "Raspberry Pi tied to a couple of USB disks" setup I was interested in.
Based on a Marvell/Armada CN9130 SoC which supports ECC, it has mainline Linux support, and public-but-non-upstream code for uboot. With local serial console and a bit of effort, the QNAP OS can be replaced by Arm Debian/Devuan with ZFS.
Rare combo of low power, small size, fast network, ECC memory and upstream-friendly Linux. QNAP also sell a 10GbE router based on the same SoC, which is a successor to the Armada 388 in Helios4 NAS (RIP), https://kobol.io/helios4/
No UEFI support, so TrueNAS for Arm won't work out of the box.