I wanted to join the NAS club for so long, but it was always either poor bang for buck on prebuilts ($1k for 4 bays, with 4GB of RAM and a CPU slower than my previous phone?!) or lack of availability of 4+ SATA boards.
Last year I had an aha moment when I learned about SAS cards.
Result: for ~$1k I have an 8-bay NAS that's also a strong HTPC and a console capable of running games like Silksong with no sweat.
Before I learned about SAS cards, my biggest limiter was the number of SATA ports. If not for that card, I would be looking at $600+ niche mobos.
But the biggest winner was the 5500GT. I was looking at an N300 for lower power consumption, but the Ryzen draws 1W at idle and has an APU capable of serious gaming (not 4K Elden Ring, but c'mon).
Awesome! Now erase your TrueNAS, install XCP-ng, and pass the LSI through to the TrueNAS VM :)
I like TerraMaster if you're looking for budget. The software is a bit potato (it's all there, and you can install apps, just 7-8/10 polished, not 10/10), but the hardware build quality is solid.
For 10 bays I like the Asustor Lockerstor, which also has two NVMe slots; the times I've bought those, it was a bit north of $1,000.
I have no affiliation with or interest in either company, just a data hoarder.
That's within the last year, so I'm not sure if anything has changed in the last few months in light of things.
The main benefit of Ryzen is that previously I was aiming for something like an Atom or N300 for low power consumption. Now, with Ryzen, I have a CPU with 10x the computing power and a powerful integrated GPU, it idles at 1W, and I can do serious gaming with it.
NAS in this context is more about a home server than plain network-attached storage.
I usually call mine a NAS too, despite technically using it more for self-hosting, and basically never mounting its data volumes. I think this applies to almost everyone using them nowadays, which is also the reason the prebuilt ones are less and less relevant: they always have super conservative CPUs, which are more than sufficient for a NAS that's actually used as plain network-attached storage, but severely underpowered if the user wants to run services on it.
> My only complaint about the motherboards has been that buying them from one of the Chinese e-tail sites like AliExpress is considered problematic by some.
I love some AliExpress deals for some of my hobby purchases, but a NAS motherboard is not one of them.
The contrast between the fashionable case, boutique Noctua fans, and then an AliExpress motherboard doesn't inspire confidence in the priorities for this build. When it comes to a NAS, I prioritize getting the core components from well-known manufacturers and vendors. With everything from hobby gear to test equipment, AliExpress has been a gamble for me on whether you're getting the real deal or a ghost-shift knockoff or a QA reject. It's the last place I'd be shopping for core NAS components.
That's a Topton board from the official Topton store on AliExpress. It's strictly equivalent to buying straight from the manufacturer and will be shipped by Topton, not AliExpress.
To be fair, the Jonsbo case is probably both cheaper and easier to buy from AliExpress than from Amazon. It's also a Chinese brand.
I wrote this blog and I completely agree with you both!
This listing should be as close to a sure thing as you can get on AliExpress. But even then, dealing with AliExpress when things go sideways isn't always all that great.
I have purchased this motherboard from this link, I'd do it again in a heartbeat if I needed to, and I wouldn't fault anyone for avoiding AliExpress either.
If somebody wants to buy from a different vendor, it's sometimes possible to find resellers on Amazon or eBay (myself included). The prices might be a bit higher, but some folks might think it's worth it.
I talk about it in the blog; it's my understanding that there are shortages of the Intel CPUs, which I would expect to drive up the pricing.
I was looking at N-class CPU builds and very quickly realized they're not for me. For storage-only-over-LAN, low-power-prioritized use they're a valid play, but there are some tradeoffs that are not obvious at first glance.
See those NVMe drives he stuck in there? They're running at 1/8th their rated link speed. Yeah... 12.5% (PCIe 4.0 x4 vs PCIe 3.0 x1). This board is one of the better ones on PCIe layout [0], but 9 Gen 3 lanes is thin no matter how you look at it, so all those boards have to cut corners somewhere.
I decided I'm better off with an eBay AM4 build: way better PCIe situation, ECC, way more powerful CPU, more SATA, cheaper, 6x NVMe all with x4 lanes, standard fan compatibility. The main downsides are no Quick Sync, power consumption, and that fast ECC UDIMMs are scarce. That was for a Proxmox/NAS hybrid build though, so more emphasis on performance.
[0] https://youtu.be/1YJ0s_LxXgU?t=690
With NAS builds you're usually network-constrained. Most of the numbers you see on the storage side are in gigabytes per second while the network is in gigabits per second, i.e. roughly a factor of 8.
So if you crunch the numbers, you'll see storage (especially flash) is much faster than the network, meaning a NAS can cut corners on storage. If you're running something like a VM that accesses the storage directly, then suddenly storage speed matters.
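To put the gigabits-vs-gigabytes point in concrete terms, here's a rough back-of-the-envelope comparison. The figures are approximate line rates I've assumed for illustration, ignoring protocol overhead; real-world numbers vary.

```python
# Rough comparison of network vs. storage bandwidth (approximate line rates).
GBIT = 1e9 / 8  # bytes per second in one gigabit per second

links = {
    "1 GbE":   1 * GBIT,
    "2.5 GbE": 2.5 * GBIT,
    "10 GbE":  10 * GBIT,
}
storage = {
    "7200rpm HDD (sequential)": 250e6,  # ~250 MB/s, assumed typical figure
    "SATA SSD (SATA 3 limit)":  600e6,  # ~600 MB/s interface ceiling
    "NVMe SSD, PCIe 3.0 x4":    3.5e9,  # ~3.5 GB/s, assumed typical figure
}

for name, bw in {**links, **storage}.items():
    print(f"{name:26s} ~{bw / 1e6:6.0f} MB/s")
```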
Yes, mostly. A PCIe 3.0 x1 lane gets you 500MB/s [1], so there's not much point compared to a regular SSD on SATA 3, which can do 600MB/s. So if you want very fast storage you would also think about it.
Realistically, for that kind of DIY setup you're talking about a different beast to these NASes. For bang for your buck you'd put the mass 3.5" storage in an external second-hand JBOD enclosure, and the main device would handle the faster storage and have an HBA to connect to it.
[1] Edit: as Havoc pointed out, I need my coffee; it should be ~2GB/s, which does change the point.
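For reference, the per-lane math from the PCIe spec (line rate times encoding efficiency); real drives land a bit below these ceilings, and the exact correction above will depend on which generation you plug into.

```python
# Approximate usable bandwidth per PCIe lane, after encoding overhead.
# Gen 1/2 use 8b/10b encoding; Gen 3+ use 128b/130b.
lane_gbps = {                            # usable GB/s per lane
    "PCIe 2.0": 5.0 * 8 / 10 / 8,        # 5 GT/s  -> ~0.5 GB/s
    "PCIe 3.0": 8.0 * 128 / 130 / 8,     # 8 GT/s  -> ~0.985 GB/s
    "PCIe 4.0": 16.0 * 128 / 130 / 8,    # 16 GT/s -> ~1.97 GB/s
}

gen4x4 = lane_gbps["PCIe 4.0"] * 4       # what the NVMe drive is rated for
gen3x1 = lane_gbps["PCIe 3.0"] * 1       # what an x1 slot actually gives it

print(f"Gen4 x4: ~{gen4x4:.1f} GB/s, Gen3 x1: ~{gen3x1:.2f} GB/s "
      f"({gen3x1 / gen4x4:.1%} of rated)")
print("SATA 3 ceiling: ~0.6 GB/s")
```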
Wait. You build a new one every _year_?! How does one establish the reliability of the hardware (particularly the AliExpress motherboard), not to mention data retention, if its maximum life expectancy is 365 days?
Somebody else shared this, but I'm the blog's author. I think you're asking good questions, but they're not from the point of view of the people who actually benefit from these blogs.
I've answered the "You build one every year?" question quite a few times over the years.
These blogs have a shelf life. After about a year, newer hardware is available that's a better value. And after 2-3 years, it starts to get difficult to even find many of the parts.
I don't replace my NAS every year; every now and then I do keep the yearly NAS build for my own purposes, but 2026 won't be one of those years.
> How does one establish the reliability of the hardware...?
One guy on the Internet is--and always will be--an anecdote. I could use this NAS until its hardware was obsolete and I'd still be unable to establish any kind of actual reliability for the hardware. Unfortunately, I don't have the time or money required to elevate beyond being a piece of anecdotal data.
However, there's a sizable audience out there who have realized that they need some kind of storage beyond what they already have, but haven't implemented a solution yet. I think if you put yourself in their shoes, you'll realize that anything is better than nothing in regards to their important data. My hope is that these blogs move that audience along towards finding that solution.
> One guy on the Internet is--and always will be--an anecdote
That's true, of course. The problem, in my view, is that this is how everyone on the internet acts, especially the "reviewers" or "builders" or "DIYers". It's not just you, so don't take this as a personal attack.
Almost all articles and videos about tech (and other things now too) do the equivalent of an "unboxing review". When it's not strictly an unboxing, it's usually "I've had this phone/laptop/GPU/backpack/sprinkler system/etc. for a month, and here is my review".
I stopped putting much weight on online reviews and guides because of that. Almost everyone who does them uses whatever they are advertising for _maybe_ a month and moves on to the next thing. Even if I'm looking for an older thing, all the reviews are from the month (or even day) it was released, and there is very little to none a year or two after, because, understandably, they don't get views/clicks. Even when there are later reviews, they are in the bucket of "This thing is 3 years old now. Is it still worth it in 2025? I bought a new one to review and used it for a month."
Not to mention that when reviewers DO face a problem, they contact the company, get a replacement, and just carry on, assuming everyone will be in the same position. From their perspective, it's understandable. They can't make a review saying "Welp, we got a defective one. Nothing to see here." On the other hand, if half the reviewers faced problems and documented it, then maybe the pattern would be clearer.
Yes, every reviewer is "one guy on the internet" and "is--and always will be--an anecdote". No one is asking every reviewer to become Consumer Reports and test hundreds of models and collect user feedback to establish reliability scores. But at the same time, if each did something similar, it would be a lot more useful than what they do now.
I'll give you a concrete example off the top of my head: a Thermapen from ThermoWorks.
When I was looking for "the best kitchen thermometer", the Thermapen was the top result/review everywhere. Its accuracy, speed, and build quality were all things every review outlined. It was a couple of years old by then, and all the reviews were from two years earlier. I got one, and 6-8 months later it started developing cracks all over the body. A search online then showed that this is actually a very common issue with Thermapens. You can contact them and they might send you another one of the older models if they still have them (they didn't in my case), but it'll also crack again. Maybe you can buy the new one?
It may sound petty to put that one example in the spotlight, but a very similar thing happened to me with a Pixel 4, a ThinkPad P2, a pair of Sony wireless headphones, a Bose speaker, and many more that I'm forgetting. All had stellar "3 weeks of use" reviews. After 6 months to a year, they all broke down in various ways. Then it becomes very easy to know what to search for, and the problems are all "yeah, that just always happens with this thing".
> One guy on the Internet is--and always will be--an anecdote
It seems like you have a specific person in mind as the audience member (yourself?), but the piece could benefit from a wider view. Given your hardware choices, reliability seems not to be a factor at all, only near-term cost (at the expense of long-term cost).
I'm not expecting you to personally test your hardware choices, but to make choices based on the aggregate accounts of others, like the rest of us do.
I would imagine the average audience member would want to buy something they can set and forget, which would necessitate the reliable choice. That is wholly different from a person who is excited about tinkering with a brand-new NAS each year.
For some people, building a NAS, or a fuller home-lab, is a hobby in itself. Posts like this are generally written by one of those people for an audience of those sorts of people. Nothing wrong with that. I was someone like that myself, some time ago.
On a more cynical note: if the blog is popular enough, those affiliate links might be worth more than a few pennies, and a post about previous years' builds, with links to those years' choices of tech, would not see anything like the same traction. It wouldn't get attention on HN for a start (at least not until a few more years' time, when it might be part of an interesting then-and-now comparison).
They're tagged for the post and year, so it must be worth it to go to that trouble rather than using a generic tag for the whole blog.
tag=diyans2024-20, tag=diynas2025-20, tag=diynas2026-20
Looks like they built a new NAS but kept using the same drives, which, given the number of drive bays in the NAS, probably make up a large majority of the overall cost of something like this.
Edit: reading comprehension fail - they bought the drives earlier, at an unspecified price, but they weren't from the old NAS. I agree: when drive lifetimes are measured in decades and huge amounts of TBW, it seems pretty silly to buy new ones every time.
Built a NAS last winter using the same case. Temps for the HDDs used to be in the mid-50s C with no fan and about 40 with the stock fan. The case-native backplane thingamajig does not provide any sort of PWM control if the fan is plugged into it, so it's either full blast or nothing. I swapped the fan for a Thermalright TL-B12 and the HDDs are now happily chugging along at about 37 with the fan barely perceptible. hddfancontrol ramps it up based on the output of smartctl.
The case can actually fit a low-profile discrete GPU; there's about a half-height card's worth of space.
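For anyone wondering what that hddfancontrol-style setup boils down to, here's a minimal sketch of the idea. The drive list, the hwmon PWM path, and the temperature thresholds below are assumptions for illustration, not values from the build above; check your own /dev and /sys/class/hwmon layout before running anything like it.

```python
#!/usr/bin/env python3
"""Minimal hddfancontrol-style loop: map the hottest HDD temperature to a
PWM duty cycle. Paths and thresholds are illustrative assumptions."""
import re
import subprocess
import time

DRIVES = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]  # assumed drive names
PWM_PATH = "/sys/class/hwmon/hwmon2/pwm1"  # assumed fan header; many drivers also
                                           # need pwm1_enable set to 1 (manual mode)

def drive_temp(dev):
    """Pull the SMART temperature attribute out of 'smartctl -A' output."""
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    m = re.search(r"Temperature_Celsius.*?(\d+)\s*(?:\(|$)", out, re.M)
    return int(m.group(1)) if m else None

def curve(temp):
    """Linear ramp: ~30% duty below 35 C, full blast at 50 C."""
    frac = min(max((temp - 35) / (50 - 35), 0.0), 1.0)
    return int(80 + frac * (255 - 80))  # hwmon pwm values are 0-255

while True:
    temps = [t for t in (drive_temp(d) for d in DRIVES) if t is not None]
    if temps:
        with open(PWM_PATH, "w") as f:
            f.write(str(curve(max(temps))))
    time.sleep(60)
```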
> any sort of pwm control if the fan is plugged in, so it's either full blast or nothing
Got a new network switch that runs somewhat hot (TP-Link) and it's behaving the same way: the built-in fan runs either not at all or at 100% (and noisily at that). I installed OpenWrt on it briefly, before discovering the 10GbE NIC didn't work with OpenWrt, and it had much better fan control. Why is it so hard to just apply a basic curve to the fan based on the hardware temperature? All the sensors and controllers are apparently there; it's just a software thing...
My impression is that neither noise level nor power consumption is a priority for home network equipment manufacturers. After moving to a new house and connecting to another ISP, I got an ISP modem-router which: 1. has a fan, and while it's quiet it's not silent; 2. consumes around 20 W, which isn't much, but running 24x7 it would cost around £45/year at current electricity rates.
I think it's technically possible to make a modem that consumes less power and uses passive cooling, but I don't think they (the ISP and the device manufacturer) care.
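The £45/year figure checks out as rough arithmetic; the unit rate below is an assumption (roughly current UK pricing), not a quoted tariff.

```python
# Rough running cost of an always-on 20 W device.
watts = 20
kwh_per_year = watts * 24 * 365 / 1000   # 175.2 kWh
price_per_kwh = 0.26                     # GBP per kWh, assumed unit rate
print(f"~{kwh_per_year:.0f} kWh/year, ~£{kwh_per_year * price_per_kwh:.0f}/year")
# -> ~175 kWh/year, ~£46/year
```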
It's very sad that HDDs, SSDs, and RAM are all increasing in price now, but I just made a 4 x 24 TB ZFS pool with Seagate Barracudas on sale at $10/TB [1]. This seems like a pretty decent price, even though the Barracudas are only rated for 2400 power-on hours per year [2]; then again, that's the same spec the refurbished Exos drives are rated for.
By the way, it's interesting to see that OP has no qualms about buying cheap Chinese motherboards, but splurged on an expensive Noctua fan when the cheaper Thermalright TL-B12 performs just as well (although the Thermalright could be slightly louder and perhaps have a slightly more annoying noise spectrum).
Also, it is mildly sad that there aren't many cheap low-power (<500 W) power supplies in the SFX form factor. The SilverStone Technology SX500-G 500 W SFX that was mentioned retails for the same price as 750 W and 850 W SFX PSUs on Amazon! I've heard good things about the Delta Flex 400 W PSUs from Chinese websites: some companies (e.g. YTC) mod them to be fully modular, and they are supposedly quite efficient (80 Plus Gold/Platinum) and quiet, but I haven't tested them out yet. On Taobao, those are like $30.
[1] https://www.newegg.com/seagate-barracuda-st24000dm001-24tb-f...
[2] https://www.seagate.com/content/dam/seagate/en/content-fragm...
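For the curious, the raw numbers behind that pool, using the figures quoted above. The RAIDZ1 layout is my assumption for illustration; the comment doesn't say how the pool is laid out.

```python
# Cost and duty-cycle math for the 4 x 24 TB pool described above.
drives, tb_each, usd_per_tb = 4, 24, 10
raw_tb = drives * tb_each            # 96 TB raw
usable_tb = (drives - 1) * tb_each   # ~72 TB if it's RAIDZ1 (one drive of parity)
cost = raw_tb * usd_per_tb           # $960 for the set

rated_hours = 2400                   # Barracuda rated power-on hours per year
print(f"${cost} for {raw_tb} TB raw (~{usable_tb} TB usable as RAIDZ1)")
print(f"Rated duty cycle: {rated_hours / (24 * 365):.0%} of 24/7 operation")
```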
This is Black Friday pricing at least, if you're willing to shuck. Seagate drives are still sub-$10/TB, which... a single 24-26TB drive is enough for all my photos (ever), media, and some dataset backups for work. I'm planning to back up photos and other "glacier"-tier media like YouTube channels to Blu-ray (a disc or two per year). It's at the point where I'd rather just pay the money and forget about it for 5-10 years.
I built the case from MakerBeam and printed panels, with an old Corsair SF600 and a 4-year-old ITX system with one of SilverStone's backplanes. They make ones that hold up to 5 drives in a 3x 5.25" bay form factor. It's a little overpowered (a 5950X), but I also use it as a generic home server and run a shared ZFS pool with 2x mirrored vdevs. Even with the space inefficiency it's more than I need. I put in a 1080 Ti for transcoding or odd jobs that need a little CUDA (like photo tagging); it runs ResNet50-class models easily enough. I've also wondered about treating it as a single-node SLURM server.
How much RAM did you install? Did you follow the 1GB per 1TB recommendation for ZFS? (i.e. 96GB of RAM)
That's only for ZFS deduplication, which you should never enable unless you have very, very specific use cases.
For normal use, 2GB of RAM for that setup would be fine. But more RAM is more readily available cache, so more is better. It is certainly not even close to a requirement.
There is a lot of old, often repeated ZFS lore which has a kernel of truth but misleads people into thinking it's a requirement.
ECC is better, but not required. More RAM is better, not a requirement. L2ARC is better, not required.
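If you'd rather look at what ZFS is actually doing with RAM than argue from lore, OpenZFS on Linux exposes the ARC counters directly. A minimal reader might look like this; the field names are the ones found in arcstats on current OpenZFS, and the path is Linux-specific.

```python
# Print how much RAM the ZFS ARC is currently using and its ceiling.
# OpenZFS on Linux exposes these counters in /proc/spl/kstat/zfs/arcstats.

def read_arcstats(path="/proc/spl/kstat/zfs/arcstats"):
    stats = {}
    with open(path) as f:
        for line in f.readlines()[2:]:      # skip the two kstat header lines
            name, _kind, value = line.split()
            stats[name] = int(value)
    return stats

arc = read_arcstats()
gib = 1024 ** 3
print(f"ARC size:   {arc['size'] / gib:.1f} GiB")
print(f"ARC target: {arc['c'] / gib:.1f} GiB (ceiling {arc['c_max'] / gib:.1f} GiB)")
```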
It's a good price, but the Barracuda line isn't intended for NAS use, so it's unclear how reliable they'll be. Still, it's tempting to roll the dice given how expensive drives are right now.
That's a remarkably good price. If I had $1.5k handy I'd be sorely tempted (even though it's Seagate).
I'm way too bothered by how long it would take to resilver disks that size.
Holy smokes, the NAS in idle consumes more power than my UNAS Pro with 4x8TB HDD and 2X8TB SSD, as well as a Mac mini M1 with a 2TB Samsung T7 SSD, and my 4 access points and 4 protect cameras combined.
For reference, the UNAS Pro comes with 10G networking, and will deliver roughly 500MB/s from a 4 HDD RAID5 array, and close to 1GB/s from the SSDs (which it never gets a chance to do, as I use them for photos/documents).
My entire "network stack", including firewall, switch, everything POE, hue bridge, tado bridge, Homey Pro, UPS, and whatever else, consumes 96W in total, and does pretty much all my family and I need, at reasonable speeds. Our main storage is in the cloud though, so YMMV.
"the NAS in idle consumes more power than my UNAS Pro with 4x8TB HDD and 2X8TB SSD, as well as a Mac mini M1 with a 2TB Samsung T7 SSD, and my 4 access points and 4 protect cameras combined."
I know that's not true. I say this as someone who measures the power consumption of individual components, and even individual rails with a clamp meter. The OP measures an idle power of 67W. He has 6 x 8TB HDDs. These typically consume 5W idling (not spun down). So the OP's NAS without drives is probably around 37W.
A UNAS Pro without drives reportedly consumes about 20W. Adding 4 x 8TB at 5W per drive means your UNAS Pro config with drives probably idles at around 40W (again, drives not spun down). That leaves about 27W before you even reach his NAS's idle power. So you're claiming your remaining hardware (Mac mini, 4 APs, 4 cameras) runs in under 27W... Yeah, that's not plausible. 27W is peanuts; it's less than the power of a phone's fast charger (~30W).
PS: for the OP, an easy way to further reduce power consumption is to replace your 500W PSU with a smaller one, like 250-300W, which is still amply over-specced for your build, because the typical efficiency curve of a PSU drops sharply at very low loads. For example, at idle, when your NAS pulls 67W from the wall, it's very probable it supplies only ~50W to the internal components, so it's running at ~10% load and is only 50/67 = 75% efficient. The smallest load for which the 80 Plus Gold standard requires a minimum efficiency is 20%. If you downgrade to a 250W PSU, you are enforcing at least a 20% load at idle, for which 80 Plus Gold requires a minimum of 87% efficiency. The draw at the wall would thus drop to 50/0.87 = 57W, saving you about 10W.
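Spelling that out as arithmetic, with the same estimated 50 W internal load (these are the estimates from the comment above, not measurements):

```python
# The PSU-efficiency argument above, spelled out (all figures approximate).
wall_watts = 67                 # measured at the wall at idle
dc_watts = 50                   # estimated power actually delivered inside the case

efficiency_now = dc_watts / wall_watts           # ~0.75 at ~10% load on a 500 W unit
gold_min_at_20pct = 0.87                         # 80 Plus Gold floor at 20% load
wall_right_sized = dc_watts / gold_min_at_20pct  # ~57 W with a ~250 W PSU

print(f"now: ~{efficiency_now:.0%} efficient, {wall_watts} W at the wall")
print(f"right-sized PSU: ~{wall_right_sized:.0f} W at the wall "
      f"(~{wall_watts - wall_right_sized:.0f} W saved)")
```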
96W is what's reported at the wall, including everything. The switch reports 36W of PoE consumption. The Mac mini is 5-6W, and the UNAS Pro around 35W with drives (4x HDD, 2x SSD).
So ~75W in total for everything PoE, the Mac mini, and the UNAS Pro. I was 8.5W over, so remove the Mac mini from the equation.
The rest of the consumption (21W) is made up of a UDM Pro with a 4TB WD Red, a USW Pro Max 16 PoE, a Hue Bridge, a Tado Bridge, a Homey Pro, and a UniFi UPS Tower.
And yes, that's at idle (drives spinning). It rises to 120-130W when everything is doing "something".
Did you forget the Mac mini M1 in that comparison?
About a 10x difference in CPU performance, 4x in RAM, ZFS vs Btrfs, Quick Sync, Kubernetes/Docker, etc.
Doesn't make the UniFi an inferior machine; it just reflects a narrower, specialized focus on serving files... and yes, it does so with lower idle draw.
My setup, a UNAS and a Mac mini M1 with 10Gbps networking, will easily perform as well as the NAS in question, but the Mac mini only uses 4.6W at idle, making it much more efficient.
As for ZFS vs Btrfs, they're about equal unless you're doing some very specific things. For most normal server stuff or NAS stuff, Btrfs is every bit as competent as ZFS. Snapshots, compression, RAID1+, recovery, bitrot detection: they're pretty much equal. ZFS has an advantage with RAIDZ1/2, as Btrfs apparently hasn't managed to make RAID5/6 stable in the past decade. You can, however, run Btrfs RAID1 across multiple devices with multiple copies, which is not quite the same, but also not terrible.
The RAM usage of ZFS is also largely a myth. Yes, it will use RAM if it's available, but that is mostly because it was designed with its own cache (the ARC), which was probably fine on Solaris and, to some extent, on FreeBSD. Linux uses a shared page cache, and instead of files being cached there, ZFS caches them in the ARC, making it look like it hogs RAM.
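If the ARC showing up as "used" memory bothers you, OpenZFS lets you cap it. A small sketch of the idea follows; the 25% figure is an arbitrary assumption, writing the parameter at runtime needs root, and how quickly the ARC actually shrinks depends on your OpenZFS version.

```python
# Cap the ZFS ARC at a fraction of RAM (run as root). The runtime knob is the
# zfs_arc_max module parameter (bytes; 0 means the built-in default). To make
# it persistent, set "options zfs zfs_arc_max=<bytes>" in /etc/modprobe.d/zfs.conf.
import os

total_bytes = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
arc_max = total_bytes // 4      # e.g. cap the ARC at 25% of RAM (assumed fraction)

with open("/sys/module/zfs/parameters/zfs_arc_max", "w") as f:
    f.write(str(arc_max))
print(f"zfs_arc_max set to {arc_max / 1024**3:.1f} GiB")
```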
> the NAS in idle consumes more power than my UNAS Pro with 4x8TB HDD and 2X8TB SSD, as well as a Mac mini M1 with a 2TB Samsung T7 SSD, and my 4 access points and 4 protect cameras combined.
Are your drives spun up? 70W is a pretty low bar. The NAS by itself is probably 40W with drives, the Mac mini is another 7-10W (especially at the wall), and now we're at ~50W, so ~20W left for 4 APs and the cameras.
The drives are spinning. 4x 8TB WD Red Plus, which use 3.4W each at idle, and assuming 20W for the NAS itself that's ~34W (measured: 35W). The Mac mini uses 4.6W at idle (headless). PoE consumption (measured by the switch) is 37W (I'm aware there's overhead in the AC/DC conversion).
All in all, the total consumption at the wall is 96W, but as I wrote in another comment, I was 7-8W off, meaning the quoted setup of mine uses 7-8W more than the 66.7W the OP's NAS idles at.
It's part of the 3-2-1 backup setup, but where other people have their "offsite backup" in the cloud, I keep my working copy there, and have backups at home.
I outsourced operating it, though. I have self-hosted for decades, and for the first time in 15-20 years I'm able to take a vacation and not bring my laptop in case something breaks.
As for main storage, as was probably evident from my comment, I don't have 30TB of cloud storage. We have our important stuff in the cloud, and "everything else" at home, but nothing at home is accessible from the internet unless you're on a VPN.
I would like to point people to the Odroid H4 series of boards: N97 or N355, 2x 2.5GbE, 4x SATA, 2 W at idle. There are also expansion boards to turn it into a router, for example.
The developer, Hardkernel, also publishes all the relevant info, such as board schematics.
And the best feature is they have in-band ECC, which can correct one-bit and detect two-bit errors. No other Alder Lake-N or Twin Lake SBC exposes this feature in UEFI.
There is the ASUS NUC 13 Rugged, which also exposes the in-band ECC, but it is both much more expensive and much slower (it uses either a 2-core or a 4-core Atom CPU, while ODROID uses either a 4-core or an 8-core CPU of the same Gracemont-based series).
I also have an older Odroid HC4; it's been running smoothly for years. Not only can I not spend $1,000 on a NAS as the current post implies, but the power consumption seems crazy to me for mere disk-over-network usage (using a 500W power supply).
I like the extensive benchmarks from Hardkernel; the only issue is that any ARM-based product is very tricky to boot, and the only savior is Armbian.
The rated power supply spec is the maximum it can provide, not the actual consumption of the device.
Yep, I've had mine running for over a year now without issue. It idles at 34W with all 4 drives running. I ended up making a custom "case" for it: https://github.com/cbsmith402/storage-loaf
I've had an H3 for a few years and it runs amazingly well. Very low power usage, small footprint, and great stability. I run it with an M.2 SSD for power considerations.
Before that I had a full-size NAS with an efficient Fujitsu motherboard, a PicoPSU, a 12V adapter, and spinning HDDs. That required so much extra work for so little power-efficiency gain vs the Odroid.