I appreciate the message of this article. I've played with half a dozen types of home NAS / RAID / storage solutions over the decades.
The best way I can describe it is:
There are people who just want to use a car to get from A to B; there are those who enjoy the act of driving, maybe take it to the track on a lapping day; and there are those who enjoy having a shell of a car in the garage and working on it. There's of course a definite overlap and Venn diagram :-).
My approach / suggestion - Understand what type you are in relation to any given technology, versus what the author's perspective is.
I will never resent the time (oh God, so much time!) I've spent in the past mucking with homelabs and storage systems. Good memories and tons of learning! Today I have family and kids and just need my storage to work. I'm in a different Venn circle than the author - sure, I have knowledge and experience and could conceivably save a few bucks (eh, not as given as articles make it seem ;), as long as I value my time appropriately low and don't mind the necessary upkeep and the potential scheduled and unscheduled "maintenance windows" for my non-techie users.
But I must admit I'm in the turn-key solution phase of my life and have cheerfully enjoyed a big-name NAS over the last 5 years or so :).
The trick with old computers harnessed as a NAS is the often increased space, power, and setup/patching/maintenance requirements, weighed against (hopefully) some learning experience and a sense of control.
> But I must admit I'm in the turn-key solution phase of my life and have cheerfully enjoyed a big-name NAS over the last 5 years or so :).
You know, I thought I was too, so I threw in the towel and migrated one of my NASes to TrueNAS, since it's supposed to be one of those "turn-key solutions that don't require maintenance". Everything got slower and harder to maintain, and it even managed to somehow screw up one of my old disks when I added it to my pool.
The next step after that was to migrate to NixOS and bite the bullet to ensure the stuff actually works. I'd love to just give someone money and not have to care, but it seems the motto of "If you want something done correctly, you have to do it yourself" lives deep in me, and I just cannot stomach losing the data on my NAS, so it ends up being really hard to trust any of those paid-for solutions when they're so crap.
I wouldn't call TrueNAS, or anything where you're installing an OS on custom hardware, "turn-key". That's saved for the Synologys and UGREENs and Ubiquitis of the world.
It's curious that you would choose NixOS for a system that "just works". As much as I like the core ideas of Nix(OS) (reproducibility, declarative configuration, snapshots, and atomic upgrades/rollbacks), having used it for a few years on several machines, I've found it to be the opposite of that. It often requires manual intervention before an upgrade, since packages are frequently renamed and API changes are common. The Nix store caches a lot of data, which is good, but it also requires frequent garbage collection to recover space. The errors when something goes wrong are cryptic, and troubleshooting is an exercise in frustration. The documentation is some variation of confusing, sparse, outdated, or nonexistent. I'm sure that to a Nix veteran these might not be issues, but even after a few years of usage, I find it as hostile and impractical to use as on the first day. Using it for a server would be unthinkable for me.
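For what it's worth, the garbage-collection chore at least boils down to a couple of standard commands (the 30-day retention window here is arbitrary):

    # delete old generations and any store paths nothing references anymore
    sudo nix-collect-garbage --delete-older-than 30d
    # optionally deduplicate what's left in /nix/store
    sudo nix-store --optimise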
For my personal NAS machine, I've used a Debian server with SnapRAID and mergerfs for nearly a decade now, using a combination of old and new HDDs. Debian is rock-solid, and I've gone through a couple of major version upgrades without issues. This setup is flexible, robust, easy/cheap to expand, and requires practically zero maintenance. I could automate the SnapRAID sync and "scrub", but I like doing it manually. Best of all, it's conceptually and technically simple to understand, and doesn't rely on black magic at the filesystem level. All my drives are encrypted with LUKS and use standard ext4. SnapRAID is great, since if one data drive fails, I don't lose access to the entire array. I've yet to experience a drive failure, though, so I haven't actually tested that in practice.
So I would recommend this approach if you want something simple, mostly maintenance-free, while remaining fully in control.
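For anyone who wants to picture the moving parts, a minimal sketch of that kind of setup looks roughly like this (made-up device names and mount points, two data disks plus one parity disk, LUKS layer omitted for brevity):

    # /etc/fstab - pool the individual ext4 data mounts into one view
    /mnt/disk1:/mnt/disk2  /mnt/storage  fuse.mergerfs  defaults,category.create=mfs  0 0

    # /etc/snapraid.conf - parity on its own disk, content lists on the data disks
    parity  /mnt/parity/snapraid.parity
    content /mnt/disk1/snapraid.content
    content /mnt/disk2/snapraid.content
    data d1 /mnt/disk1/
    data d2 /mnt/disk2/

    # run (or cron) these periodically
    snapraid sync
    snapraid scrub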
I appreciate your perspective and effort; as I said, I was there too :-). Nothing you mention would be considered "turn key" by anybody other than a very specific subset of the Hacker News audience, and it may inadvertently prove my point :-).
Another way to put it - My home lab has production and non-production environments.
Non-production is my Kubernetes cluster running all the various websites, AI workflows, and other cool tools I love playing with.
Production is everything between my wife typing google.com and Google itself, or between my kids and their favorite shows on Jellyfin.
You can guess which one has the managed solutions, and which one has my admittedly-reliable-but-still-requires-technical-expertise-to-fix-when-down unmanaged solutions.
> My approach / suggestion - Understand what type you are in relation to any given technology, versus what the author's perspective is.
Similarly, what I was once told when looking at private planes was "What's your mission?", and it's stuck with me ever since, even if I'm never gonna buy a plane.
One person's mission might be backing up their family photos while someone else's mission is a full *arr stack.
I personally think big-box computer retailers that build custom turn-key computers (e.g. Microcenter) should get into the NAS game by partnering with Unraid and Fractal. It's as turnkey as any commercial NAS I've ever used, but comes with way more flexibility and future-proofing, plus the ability for users to get hyper-technical if they want and tweak everything in the system.
It's wild how much more cost-effective this would be than pretty much any commercial NAS offering. It's ridiculous when you consider total system lifecycle cost (with how easy it is to upgrade Unraid storage pools).
Looking right now, my local Microcenter builds essentially three things: desktop PCs, some kind of "studio" PC, and "Racing Simulators". Turnkey NASes would move a lot of inventory, I'd wager.
I think the Terramaster NASes are about as close to this as you can get; they even have an internal USB header that seems purpose-added for the Unraid boot disk.
That said, I prefer straight Debian to Unraid. I feel Unraid saves you a weekend on the command line setting it up the first time (nothing wrong with that!), but after playing with the trial I just went back to Debian; I didn't feel like there was $250 of value there for me ¯\_(ツ)_/¯. Almost everything on my server is in Linuxserver.io Docker containers anyways, and I greatly prefer just writing a Docker Compose file over clicking through a ton of GUI drop-downs. Once you're playing with anything beyond SMB shares, you're likely either technically savvy or blindly following a guide anyways, so running commands over SSH is actually easier to follow along with a guide than clicking in a UI, since you can just copy and paste. YMMV.
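To illustrate the compose-over-GUI point, a typical Linuxserver.io service is only about this much YAML (the Jellyfin example and the paths are placeholders, not necessarily what anyone here actually runs):

    # compose.yml
    services:
      jellyfin:
        image: lscr.io/linuxserver/jellyfin:latest
        container_name: jellyfin
        environment:
          - PUID=1000
          - PGID=1000
          - TZ=Etc/UTC
        volumes:
          - /srv/appdata/jellyfin:/config
          - /mnt/storage/media:/media
        ports:
          - 8096:8096
        restart: unless-stopped
    # bring it up with: docker compose up -d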
I think there is a slight modification to this, at least for me: there are tech-related things I want turnkey. I've used macOS for years because I want a Unix system with a decent GUI that I don't have to manage (the fact that the machines work well is a nice add-on). I've got a Synology; it works.
I don't have unlimited bandwidth or time, and I want to keep the tinkering for the things that interest me rather than the tools that enable them.
I'm not the relentless explorer and experimenter that you're sort of patronizing with this comment. I'm somebody who knows that you can put together a NAS with an old desktop somebody will give you for free, slap Debian Stable on it, RAID5 (4 or fewer drives) or RAID6 (5 or a few more) a bunch of drives together, and throw a Samba share on the network in less than a day (minus drive-clearing time for encryption).
It is not some sort of learning and growing experience. The entirety of the maintenance on the first one I put together somewhere between 10-15 years ago is to apt-get update and dist-upgrade on it periodically, upgrade the OS to the latest stable whenever I get around to it, and when I log in and get a message that a disk is failing or failed, shut it down until I can buy a replacement. This happens once every 4 or 5 years.
The trick with a big-name NAS is that they go out of business, change their terms, or install spyware on your computer, and you end up involved in tons of drama over your own data. This guide is even a bit overblown. Just use MDADM.* It will always be there, it will always work, and you can switch OSes or move the drives to another system and the new one will instantly understand your drives - they really become independent of the computer altogether. When it comes to encryption, all of the above goes for LUKS through cryptsetup. The box is really just a dumb box that serves shares; it's the drives that are smart.
I guess MDADM is a (short) learning experience, but it's not one that expires. LUKS through cryptsetup is also very little to learn (remember to write zeros to the drive after encrypting it), but it's something that turnkey solutions are likely to ignore, screw up, or use to lock you into something proprietary. Instead of getting a big SSD for a boot drive, just use one of those tiny PCIe cards, as small and cheap as you can get. If it dies, just buy another one, slap it in, install Debian, and you'll be running again in an hour.
With all this I'm not talking about a "homelab" or any sort of social club, just a computer that serves storage. The choice isn't between making it into a lifestyle/personality or subscribing to the managed experience. Somehow people always seem to make it into that.
tl;dr: use any old desktop, just use Debian Stable, MDADM, and cryptsetup. Put the OS on a 64G PCIe drive or even a thumb drive (whatever you have lying around).
* Please don't use ZFS; you don't need it and you don't understand it (if you do, ignore me). If somebody tells you your NAS needs 64G of RAM, they are insane. All it's going to do is turn you into somebody who says that putting together a NAS is too hard and too expensive.
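For anyone who wants the tl;dr above as actual commands, here is a rough sketch (a five-disk RAID6 with made-up device names - double-check devices before running anything destructive):

    # create a 5-disk RAID6 array and record it so it assembles on boot
    mdadm --create /dev/md0 --level=6 --raid-devices=5 /dev/sd[b-f]
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf

    # layer LUKS on top, then write zeros *through* the mapping so the
    # underlying disks end up indistinguishable from random data
    cryptsetup luksFormat /dev/md0
    cryptsetup open /dev/md0 nas
    dd if=/dev/zero of=/dev/mapper/nas bs=1M status=progress

    # plain ext4 on top, mount it, and export it with Samba
    mkfs.ext4 /dev/mapper/nas
    mkdir -p /srv/nas && mount /dev/mapper/nas /srv/nas
    # /etc/samba/smb.conf:  [storage]  path = /srv/nas  read only = no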
* My original post was in no way intended to be patronizing - it was merely to point out that before investing effort, it pays to understand one's goals and priorities :-). Some activities have rabbit holes, which are fun, but only if you're into that sort of thing.
* Now, though, I will indulge in pointing out my absolute favourite Hacker News type of post: "how dare you insinuate this is tricky or difficult or time consuming for anybody - you merely [series of acronyms and technologies and activities nobody in my family could do after a month's study, while hand-waving over risks and issues and costs]" 0:-)
I'd also argue that if you can set up md, you can probably figure out how to set up ZFS. It looks scary on the RAM, because it uses "idle" RAM, but it will immediately release it when any other app needs it. People use ZFS on Raspberry Pis all the time without problems.
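And if the ARC sizing still worries you, it can simply be capped; for example (the 4 GiB figure is arbitrary):

    # /etc/modprobe.d/zfs.conf - cap the ARC at 4 GiB
    options zfs zfs_arc_max=4294967296
    # or apply it on a running system:
    # echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max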
I've self-hosted web apps (typically IIS and SQL Server) for over 20 years.
While using desktops for this has sometimes been nice, the big things I want out of a server are
- low power usage when running 24/7
- reliable operation
- quiet operation
- performance, though they don't need much
So I've had dual Xeon servers and 8-core Ryzen servers, but my favorites are a Minisforum box with a mobile Ryzen quad-core, and my UGREEN NAS. They check all the boxes for server / NAS. Plus both were under $300 before upgrades / storage drives.
Often my previous gaming desktop sells for a lot more than that... I just sold my 4-year-old video card for $220. Not sure what the rest of the machine will be used for, but it's not a good server because the 12-core CPU simply isn't power-efficient enough.
> So I've had dual Xeon servers and 8-core Ryzen servers, but my favorites are a Minisforum box with a mobile Ryzen quad-core, and my UGREEN NAS.
I just ordered my first Minisforum box (MS-02 Ultra) to serve as my main storage NAS + homelab... first time ordering any of these Chinese boxes, but nothing else checked off all the requirements I had as well as it did. Hopefully it works out well for me.
It is water-cooled and whisper quiet aside from the GPU fans. So yes there are options.. but right now selling the RAM alone might pay for a whole mini-server. I'm going to try to sell it locally to a PC gamer though, get some proper use out of it!
Server boards (like those for Xeons) won't have eco modes and aren't made for such a use case; neither are server CPUs. Idle servers are wasted money, so they aren't designed to idle efficiently.
Agreed, and I wonder what the cost tradeoff is between using your old hardware and buying new power-efficient equipment. I even recently thought about buying a Mac mini to use as a home server.
I would imagine so depending on software / use case.
I run Windows Server 2022 to support IIS / SQL Server so it's not a perfect fit for me personally, but I suspect for many home servers or NAS setup it would work well.
I was pretty disappointed to find out that none of the MS-01, MS-A1, or MS-A2 have an ATX power button header. This means you need to solder wires to the tiny tactile switch and connect those to something like a PiKVM to get true power control/status and IPMI/Redfish.
Just seems like something simple they could have easily included if they wanted to really target the homelab space
“I repurposed an old gaming PC with a Ryzen 1600x, 24GB of RAM, and an old GTX 1060 for my NAS since I had most of the parts already.”
Wouldn't running something like this 24/7 cause substantial energy consumption? Costs of electricity being one thing, carbon footprint another. Do we really want such a setup running in each household in addition to X other devices?
In addition to energy, the biggest reason I no longer use old desktops as servers is the space they take up. If you live in an apartment or condo and don't have extra rooms, even having a desktop tower sitting in a corner is a lot less visually appealing than a small NAS or mini-PC you can stick on a shelf somewhere.
Tastes differ. I personally find the 36U IBM rack in the corner of my apartment more visually appealing than some of my other furniture, and consolidating equipment in a rack with built-in PDUs makes it easier to run everything through a single UPS in an apartment where rewiring is not an option.
? It’s not like the machine would be custom built for him.
Are you saying it’s fine to drive a huge truck if you’re single and just need to get around the block to buy a pack of eggs, just because the emissions are nothing compared to those required for making that smaller, more efficient car that you could buy instead?
> Wouldn't running something like this 24/7 cause substantial energy consumption?
Obviously it depends on the actual usage and the parent's specific setup; lots of motherboards/CPUs/GPUs/RAM let you tune frequencies and downclock almost anything. Finally, we have no idea about the energy source in this case, could be they live in a country with lots of wind and solar power, if we're being charitable.
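For what it's worth, a lot of that tuning needs nothing more than the standard Linux tools; for example (savings vary wildly by hardware):

    # see where the power is going, then let powertop apply its tunables
    powertop
    powertop --auto-tune
    # prefer the power-saving CPU frequency governor
    cpupower frequency-set -g powersave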
> could be they live in a country with lots of wind and solar power, if we're being charitable.
Because solar wind and hydro have no impact on the environment at all. Or nuclear.
I wish people would understand that waste is waste. Even less waste is still waste.
(I don't argue for fossil fuels here, mind you.)
Plus, countries have shared grids. Any kWh you use can't be used by someone else, so it may come from coal when they do, for all you know. It's a false rationalization.
Reusing existing hardware is a great game plan. Really happy with my build and glad I didn't go for something out of the box.
>In general, you want to get the fastest boot drive you can.
Pretty much all NAS-like operating systems run in memory, so in general you're better off running the OS from some shitty 128GB SATA SSD and using the NVMe for data/cache/similar, where it actually matters. Some OSes are even happy to run from a USB stick, but that only works for an OS designed to accommodate it (Unraid does, I think). Something like Proxmox would destroy the stick.
Also, on HDDs - worth reading up on SMR drives before buying. And these days, consider an all-flash build if you don't have TBs of content.
Never used proxmox myself, but is that the common issue of "logs written to flash consuming writes"? Or something else? The former is probably just changing a line in the config to fix, if it's just that.
> And these days, consider an all-flash build if you don't have TBs of content.
Maybe we're thinking in different scales, but don't almost all NASes have more than 1TB of content? My own personal NAS currently has 16TB in total; I don't even want to imagine what that would cost if I went with SSDs instead of HDDs. I still have an SSD for caching, but the main data store in a NAS should most likely be HDDs unless you have so much money you just have to spend it.
> Maybe we're thinking in different scales, but don't almost all NASes have more than 1TB of content?
Depends on what you're storing. With fast gigabit internet there just isn't much of a need to store, ahem, Linux ISOs locally anymore, as anything can be procured in a couple of minutes. Most people just aren't producing that much original data on their own either (exceptions exist of course - people in the video-making space, etc).
Plus it's not that expensive anymore. I've got around 6TB of 100% mirrored flash without even trying (I was aiming for speed and redundancy). Most of it is used enterprise drives. I think I paid around $50 a TB.
Re Proxmox - some of their multi-node orchestration stuff is famous for chewing up drives at wild rates for some people, with people losing 1% of SSD life every couple of days. It hasn't affected me, so I haven't looked into the details.
Unraid weirdly requires booting off of a USB for the base OS. I think it's to manage licensing.
SSDs are generally expected to be used as write-through caches in front of the main disk pool. However, if you have a bunch, you can add them to a ZFS array and it works pretty much flawlessly.
I've been running homebuilt NAS for a decade. My advice is going to irritate the purists:
* Don't use raid5. Use btrfs-raid1 or use mdraid10 with >=2 far-copies.
* Don't use raid6. Use btrfs-raid1c3 or use mdraid10 with >=3 far-copies.
* Don't use ZFS on Linux. If you really want ZFS, run FreeBSD.
The multiple copy formats outperform the parity formats on reads by a healthy margin, both in btrfs and in mdraid. They're also remarkably quieter in operation and when scrubbing, night and day, which matters to me since mine sits in a corner of my living room. When I switched from raid6 to 3-far-copy-mdraid10, the performance boost was nice, but I was completely flabbergasted by the difference in the noise level during scrubs.
Yes, they're a bit less space-efficient, but modern storage is so cheap it doesn't matter; I only store about 10TB of data on it.
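Spelled out, the layouts recommended above are one-liners (device names are placeholders):

    # btrfs: two copies of every block (data and metadata) across the drives
    mkfs.btrfs -d raid1 -m raid1 /dev/sda /dev/sdb /dev/sdc /dev/sdd
    # (use -d raid1c3 -m raid1c3 for three copies instead of raid6)

    # md raid10 with "far" copies instead of raid5
    mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=4 /dev/sd[a-d]
    # (--layout=f3 gives three far copies, the raid6 alternative)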
I use btrfs: it's the most actively tested and developed filesystem in Linux today, by a very wide margin. The "best" filesystem is the one which is the most widely tested and developed, IMHO. If btrfs pissed in your cheerios ten years ago and you can't figure out how to get over it, use ext4 with metadata_csum enabled, I guess.
I use external USB enclosures, which is something a lot of people will say not to do. I've managed to get away with it for a long time, but btrfs is catching some extremely rare corruption on my current NAS; I suspect it's a firmware bug somehow corrupting USB3 transfer data, but I haven't gotten to the bottom of it yet: https://lore.kernel.org/linux-btrfs/20251111170142.635908-1-...
I use mergerfs + snapraid on my HDDs for “cold” storage for the same reason: noise. Snapraid sync and scrub runs at 4am when I am not in the same room as the NAS.
The drives stay spun down 99% of the time, because I also use a ZFS mirrored pool on SSDs for “hot” files, although Btrfs could also work if you're opposed to ZFS because it's out of tree.
Basically using this idea, but with straight Debian instead of Proxmox: https://perfectmediaserver.com/05-advanced/combine-zfs-and-o...
I also use mergerfs 'ff' (first found) create order, and put the SSDs first in the ordered fstab list of the mergerfs mount point. This gives me tiered storage: newly created files and reads hit the SSDs first. I use a mover script that runs nightly with the SnapRAID sync/scrub to keep space on the SSDs open: https://github.com/trapexit/mergerfs/blob/master/tools/merge...
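Concretely, the tiering trick is just branch order plus the create policy on the mergerfs mount; a rough sketch (paths are examples, and a proper mover tool does the nightly step more carefully than this):

    # /etc/fstab - SSD branch listed first; category.create=ff puts new files on it
    /mnt/ssd:/mnt/disk1:/mnt/disk2  /mnt/pool  fuse.mergerfs  defaults,category.create=ff,moveonenospc=true  0 0

    # nightly: push older files from the SSD branch down to a spinning branch,
    # then let SnapRAID catch up
    rsync -a --remove-source-files /mnt/ssd/ /mnt/disk1/
    snapraid sync && snapraid scrub -p 5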
I was wondering what the parent's beef was with ZFS on Linux. I have a box I might change over (B-to-L) and I haven't come across any significant discontent.
Better? No, absolutely not. Capable? Without a doubt. I have a multi-bay NAS and it's like 1/6th the size of my PC case. My NAS also makes removing and replacing drives trivial. There are a million guides online for my particular NAS already, and software written with it in mind. It also draws a lot less power than my gaming PC and runs a lot quieter.
It's difficult for me to accept it's better given all the above.
At San Francisco electricity prices of ~$0.50/kWh, using an old gaming PC/workstation instead of a lower power platform will cost you hundreds of dollars per year in electricity. The cost of an N100-based NAS gets dwarfed by the electricity cost of reusing old hardware.
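To put rough numbers on it (the wattages are assumptions, not measurements): an old gaming desktop idling around 80 W versus a small N100 box around 15 W is a 65 W difference, and 0.065 kW x 24 h x 365 days ≈ 570 kWh per year, which is roughly $285/year at $0.50/kWh - and still about $85/year at a more typical $0.15/kWh.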
I've been building and running various home servers for years. Currently I have an eBay-special FreeBSD quad-core Xeon (based on the desktop socket) with 64GB ECC and a cheap SAS/SATA card running two ZFS arrays.
On a side note: I hate web GUIs. I used to think they were the best thing since sliced bread, but the constant churn combined with endless menus and config options with zero hints or direct help links led me to hate them. The best part is that the documentation is always a version or two behind and doesn't match the latest and greatest furniture arrangement. Maybe that has improved, but I'd rather understand the tools themselves.
> tl;dr: use any old desktop, just use Debian Stable, MDADM, and cryptsetup.
Consider mergerfs + snapraid.
> Not sure what the rest of the machine will be used for, but it's not a good server because the 12-core CPU simply isn't power-efficient enough.
Sell the gaming GPU and put in something that does video out, or use a CPU with an iGPU.
Big gaming cases with quiet fans are quiet.
Selling the GPU and tuning or swapping the CPU can put money in your pocket to pay for storage.
Big case also means big space.
https://www.reddit.com/r/UgreenNASync/comments/1nr2j39/encry...
It's possible because you can install a different OS, TrueNAS, etc. but it's not something I personally worry about.