Not only that, but their security situation is terrible. Their OS is full of EOL'ed stuff.
On products you can buy TODAY, you find:
- Their Btrfs filesystem is a fork of a very old branch and doesn't have modern patches
- A custom, non-standard, self-built ACL system for the filesystem
- Kernel 4.4
- PHP 7.4 (requirement for their Hyperbackup app)
- smbd 4.15
- PostgreSQL 11.11
- sshd 8.2p1
- Redis 6.2.8
- ...
They claim it's OK because they've backported all security fixes to their versions. I don't believe them. The (theoretically) huge effort needed to do that would be better spent building a way better product.
And it's not only about security, but about features (well, some are security features too). We're missing new kernel features (network hardware offload, security, WireGuard...), filesystem features (btrfs features, performance and error patches...), file server features (new capabilities and compatibility, such as Parallel NFS or SMB Multichannel), and so on...
I think they got stuck on 4.4 because of their btrfs fork, and now they're too deep in their own hole.
Also, their backend is a mess: a bunch of different apps, developed in different ways, that mostly don't talk to each other. They sometimes overlap with each other, and have essential features that don't work and that they don't plan to fix. Meanwhile, they're busy releasing AI features for the "Office" app.
Edit note: For myself and some business stuff, I have a bunch of TrueNAS deployments, from a small Jonsbo box at home to a 16+ disk rack server. This one was for a client that wanted to migrate from another Synology they had on loan; I didn't want to push a custom server on them, as they're a bit far away from me, and I wanted it to be serviceable by anyone. I regret it.
The encryption is also broken. If you use encrypted shared folders, you have an arbitrary filename length limit (https://kb.synology.com/en-ro/DSM/tutorial/File_folder_path_...). If you use volume encryption, your encryption key is stored on the NAS itself, which is capable of decrypting the data, unless you buy a second Synology NAS (https://blog.elcomsoft.com/2023/06/volume-encryption-in-syno...) to act as a key vault. Synology claims that volume encryption protects you if the storage drives are stolen, but in what world would the drives, and not the NAS itself, be stolen?
The filename limit comes from ecryptfs (https://www.ecryptfs.org/) which is what Synology uses for encrypted shared folders.
As for full disk encryption, you can select where to store the key, which may be on the NAS itself (rendering FDE more or less useless) or on a USB key or similar.
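If you're curious whether an existing tree would even fit under that filename limit before copying it into an encrypted shared folder, a scan is trivial. A minimal sketch, assuming the commonly cited ~143-byte ecryptfs plaintext filename limit:

```python
#!/usr/bin/env python3
"""Find filenames too long for an ecryptfs-backed encrypted shared folder.

ecryptfs stores the encrypted name inside the underlying filesystem's
255-byte limit, which leaves roughly 143 bytes for the plaintext name.
MAX_NAME_BYTES is an assumption; adjust it to whatever your DSM version
actually enforces.
"""
import os
import sys

MAX_NAME_BYTES = 143  # approximate ecryptfs plaintext filename limit

def long_names(root):
    """Yield paths whose final component exceeds the limit in UTF-8 bytes."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            if len(name.encode("utf-8")) > MAX_NAME_BYTES:
                yield os.path.join(dirpath, name)

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for path in long_names(root):
        print("too long:", path)
```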
Ah, I forgot about that. I had to take the key out of the NAS too, to a different device. That made no sense at all. And almost all of the implementations of the key server you need cost thousands of dollars in licenses.
Edit: what they deploy on their NAS is an old version of a testing implementation of the KMIP protocol. PyKMIP: https://github.com/OpenKMIP/PyKMIP
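For what it's worth, the client side of KMIP is small enough to sketch with that same PyKMIP library; the hostname, port, and certificate paths below are placeholder assumptions, not Synology's actual configuration:

```python
"""Minimal KMIP round trip with PyKMIP (pip install pykmip), a sketch.

Hostname and certificate paths are placeholders; point them at your own
KMIP server (a PyKMIP server, a hardware appliance, or another NAS).
"""
from kmip.core import enums
from kmip.pie.client import ProxyKmipClient

client = ProxyKmipClient(
    hostname="kmip.example.internal",  # assumption: your KMIP server
    port=5696,                         # standard KMIP port
    cert="/etc/pki/client.crt",        # client TLS certificate (placeholder)
    key="/etc/pki/client.key",
    ca="/etc/pki/ca.crt",
)

with client:
    # Create a 256-bit AES key on the server; only its UID comes back.
    uid = client.create(enums.CryptographicAlgorithm.AES, 256)
    # Later (e.g. at boot, to unlock a volume) fetch the key material.
    key = client.get(uid)
    print(uid, len(key.value) * 8, "bit key")
```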
I understand Synology's design approach. In enterprise environments, physical security, especially when systems are housed in ISO 27001-certified data centers, is relatively straightforward to achieve.
The primary value of disk/volume encryption is actually for scenarios like end-of-life replacement, RMA, failure and disposal - even if someone later reconstructs the disk sectors, the bits remain unreadable. This is one layer of defense in depth, not a substitute for physical security.
Synology also supports KMIP, which I see addressing two situations:
1. Data center key governance and media mobility - Multiple hosts (including spares) can use KMIP for centralized key management, improving the mobility of drives within the data center and reducing the operational cost of moving drives between machines. When decommissioning hardware, keys can be revoked directly in KMIP with an audit trail.
2. Edge/branch sites with weaker physical controls - By using KMIP, keys are kept in the more secure data center rather than on the edge device itself. The edge hardware stores no keys, so if an entire machine is stolen, it cannot be unlocked, preserving confidentiality.
You can move out the key from the device using KMIP. I have an implementation that uses a Go-based service to store it in Nitrohsm. I'll clean it up and post a release announcement on Reddit...
But don't you love it when companies invent their own security instead of using battle-tested open-source systems?
This breaks both the 'store key locally' and the KMIP setup.
And for their file-based encryption you cannot change the password. You need to create a new folder with a new password and copy all the files over.
> but in what world would the drives, and not the NAS itself, be stolen?
Not to defend Synology, but popping a drive out of the NAS so that it won't be noticed (or will be noticed much later) is a much easier way to steal data than carrying off the whole NAS. I assume they're guarding against the kind of scenario where an employee steals drives rather than ski-masked thieves breaching the office and making off with the NAS.
My main issue with their system is how closed it is.
I got an issue where mine would randomly start writing to disk like crazy and maxing CPU usage, to the point I was bothered by the noise. I'd stop all containers and leave it as close to idle as I could manage; it was still spiking.
There was no way I could learn what was causing it.
I would like to assume it was a disk maintenance process or something, but for all I know it could be mining bitcoin and I’d be none the wiser. It went on for some weeks then stopped.
Ever since they added the "universal search" thingy, their NASes do that anytime they encounter a decently large video file. Even if you turn off search indexing, media indexing, media thumbnails... it still kills itself processing those files, with no throttling.
May or may not be what you encountered, but I had a customer caught by this and found out the hard way that you can't stop it. My issue is not the processing, it's the lack of throttling: it's crazy how the entire NAS gets taken down for something like ten minutes (and that was on a racked Xeon model), with no Samba, no NFS, nothing answering anymore.
Mine is in the basement for this reason. When it’s still and quiet after midnight I can still hear it grinding away. God I hate the sound.
At a customer I ended up having to help one department running two Synology boxes as a side project. I came in with low expectations, and still was thoroughly disappointed.
- one device died, was EOL at that point, and newer ones no longer can read the disks
- stupid limits on array size. Depending on your setup, adding disks can mean "copy everything off, delete the arrays, and then create new ones". Also, want one 200TB array with your disks? Depending on the model you'll have to create multiple arrays instead, with somewhat to way lower total capacity
- syncing a share to another instance is broken, with pretty much no useful debug information. Already the setup is stupid (it doesn't let you select which array it goes on on the target machine), and then it seems to change the access permissions of the sync user on the target box (i.e., you can do one sync, after that you'll need to reset the access permissions). I wanted to avoid writing my own sync script, but it seems I'll have to in the end (see the sketch after this list)
- stupid disk compatibility warnings (which currently you can disable when you have SSH access)
- WireGuard only via third-party addons. It's 2025. I didn't even check before whether these things can do WireGuard - it didn't occur to me that a device sold nowadays might not be able to do that.
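For the sync script mentioned in the list above, a minimal sketch of the obvious replacement: rsync over SSH driven from cron. Paths and the target host are placeholder assumptions:

```python
"""Sync a share to another box with rsync over SSH, a sketch.

SRC, DEST, and the SSH host are placeholders for your own setup. Run it
from cron; rsync only transfers changed data, and --delete mirrors
removals. Test with --dry-run added to the command first.
"""
import subprocess
import sys

SRC = "/volume1/share/"                     # trailing slash: sync contents
DEST = "backup-nas:/volume1/share-mirror/"  # assumption: SSH-reachable host

def sync():
    cmd = [
        "rsync",
        "--archive",          # preserve permissions, times, symlinks
        "--delete",           # mirror deletions to the target
        "--partial",          # keep partly-transferred files on interruption
        "--itemize-changes",  # per-file debug output, unlike the built-in sync
        SRC,
        DEST,
    ]
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    sys.exit(sync())
```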
While debugging I also noticed that pretty much every software component is from the stone age.
My DS918+ has multichannel SMB and possibly also parallel NFS. It only works if you have multiple NICs connected.
Other than that, I completely agree. Their tech stack is horribly outdated, and while I understand their reasoning for not upgrading, there's a limit to how long you can do that. Their reasoning is that they know the software that's currently running, warts and all, and can better guarantee stability across millions of devices with fewer moving parts.
I have a DS923+. The extremely old software versions you mentioned were always weird to me, but everything has worked fine so far. What I'm not happy about is the vendor lock-in, and the abysmal virtualization / transcoding performance. I want a NAS that comes with a similar ease of use as DSM, but can double as a __very lightweight__ virtualization platform for my local test deployments and as a media PC that I can rely on. What would you suggest?
I'd suggest separate systems for NAS and media serving.
I've a Ryzen Embedded system with lots of RAM as my NAS box and a small Intel N-series based system as my Plex server that pulls media off the NAS box.
How about a mini PC + USB x-bay enclosure? I'm thinking about it. I have 4 Synology NASes, mostly as long-term offline storage. No problems with them in this role so far.
I regret not pushing a bit more for deploying a custom storage solution with TrueNAS (or something similar) instead of Synology. All the TrueNAS devices I have are mine, not from my clients.
That's confusing to me, given the preceding displeasure with Synology.
They already had one Synology device, they don't have any IT employees on site, and I'd need to take a flight to go to their offices, so I thought that using another Synology device would be better for maintenance. They (and I) were also worried about the noise: it's a small office, and they needed at least 8x3.5" drives, and most of the decent solutions I found for 8 or more drives were big and noisy. The Jonsbo N5 appeared a bit later; it looks like a good candidate today.
Now I've found that all their applications are half done, they don't upgrade or fix them regularly, security is a mess, and everything on the backend is super old...
As I understand it, they forked years ago when btrfs was very much not ready to be used for production NAS storage. Their value prop was they took it and added lots of their own special patches that they claimed made it highly dependable.
Over time their advantage has eroded as upstream has caught up, to the point that it looks ridiculously out of date today.
I don't know if this is the reason, but supposedly their btrfs fork contains a custom integration with mdraid/lvm so that when btrfs detects a bad block, it signals lvm to do a repair. This is their solution to avoid using btrfs raid5/6 which is still marked unstable.
The year is 2025. Delivering a good product is not considered profitable enough anymore. If a company or product is beloved by customers then that means it doesn't squeeze them to the max. This is clearly money left on the table that someone will sooner or later extract. High-end brands are not exempt from this.
Easily explained: when times are tough, delivering growth naturally is hard. Squeezing the customer is the lowest hanging fruit.
Sure, long-term reputation is severely damaged, but why would decision makers care? Product owners' interests are not aligned with the interests of the company itself. Squeeze the customer, get your minuscule growth, call it "unlocking value", get your bonus, slap it onto your resume and move on to the next company. Repeat until retirement.
When times are tough, accept less growth (or sometimes none) so that when times get good again or someone builds a competitor, all your customers don't leave you.
Serving the needs of customers (practically, the quality of the product) sits low on the list of importance. Sales strategy, marketing, PR, organizational culture, company values, ..., basically the self-serving measures all come before it.
My 10 year old NAS is a testament to how much money they have left on the table; they could 3x revenue and profits by simply breaking it every few years.
I extended the lifecycle of my 2013 vintage x64 QNAP (which lost support status around 2018 or 2019) by installing Ubuntu directly. The QNAP “firmware” was just an internal USB flash drive (DOM) that lived on a header that contained QNAP’s custom distro. There was a fully-featured standard UEFI that allows booting from the SATA devices.
I learned a lot in the process, but most important is that the special sauce NAS makers purport is usually years behind current versions.
The NAS finally bit the dust last year because of a design defect associated with Bay Trail systems that’s not limited to QNAP.
https://forum.doozan.com/list.php
Our organization joined this trend some years ago. The original founders (who founded it ca. 30-35 years ago) passed 60 and cashed out, selling the company to an investor. Small fish, <100 employees, but in a niche of engineering app development with long-time clients, very long-time clients. Since then, we are a self-declared sales-oriented organization: company meetings are about success stories of billing more for the same service, monthly cash-flow analysis (target vs. actual), new marketing materials disseminated broadly, sales campaigns, organizational culture, teamwork, HR. Every other meeting has a technical development footnote, all AI (fits right in like designer bags at a pig farm). No QA, none.
Is NAS a growth market at all anymore? My somewhat unexamined opinion is that most folks can and probably do just store everything in the cloud.
I would not be surprised to find out that Synology is seeing a smaller market year over year and becoming desperate to find new revenue per person who is shopping for a NAS today.
I’m in the latter group but Synology has locked themselves out of the market with this choice.
Uploading terabytes of content to the consumer cloud just isn’t practical, financially.
Isn't the conventional wisdom "at least 2 backups, one offsite"? My lab gets by with 2 copies for most of our data: one on our Synology NAS and one mirrored to Box.
With the size of data we're dealing with, loading everything from cloud all the time would slow analyses down to a crawl. The Synology is networked with 10G Ethernet to most of our workstations.
> Delivering a good product is not considered profitable enough
Leaving products and commerce coupled is not considered good practice anymore. It's recommended in some places that you outsource so extremely that your outsourced labor renders services to receiving outsourced labor. And that's not considered insane.
Yes, but that doesn’t stop companies from putting a disproportionate amount of effort into squeezing it out, instead of directing that effort towards developing better products.
I have used Synology NASes for a good 15y now, and the one I'm on will likely be my last (DS920).
I have watched the software evolve from "quite good" to "very good" to "lets reimplement everything ourselves and close it off as much as possible".
It's sad because back in the day, at least for me, the brand was the perfect UX in many regards: small form factor and low power, price-accessible 4/5 bay NASes, a couple CPU tiers, upgradable hardware, regular software updates and a huge collection of software features.
For me they were the go-to choice for NAS because of the good web UI, the ease of setup and the reliable operation that covered 99% of prosumer use cases. They would just chug along forever, auto-updating themselves, never skipping a beat. Whenever I wanted to do special things with it via SSH I could, but the environment has become increasingly hostile, to the point where I need to spend hours wondering how the heck the thing operates without bursting into flames.
I'm hoping that by the time I need to change my DS920 another good company like they were will have emerged, because building your own solution comes with operational maintenance and I want the thing to Just Work®.
What are the current options? I've been looking but I haven't found any. I have a DS923+ and it works fine, but I can already see what you are talking about.
I've purchased 3 QNAP NAS products over the course of a decade or so, and remain happy to recommend them. Their reputation was damaged by a raft of ransomware attacks in 2021 and 2022, but since then they've been better about improving overall system security and forcing basic security hygiene on users (who often don't know any better).
I just got a Ugreen 2800 to replace a homemade NAS, I had a Synology for a few years but it got slower and slower as the software changed so I ditched it maybe 10 years ago?
One of the things that sold me on the Ugreen was that it is basically just a garden-variety N100 box, upgradeable RAM, supports SATA and M.2, etc.
According to this, installing your own OS doesn't invalidate the warranty, so if I decide their software is lousy I can install Debian: https://wiki.debian.org/InstallingDebianOn/Ugreen
There are a few previous-generation units still available on the market. This article reminded me that I'd meant to track down a DS1522+ before it's too late, and I found a couple on Amazon and other sites.
A big part of the appeal of Synology was that you could just forget about it. I have a little one in the corner that's just been sitting there serving files out over SMB for years now. It doesn't need to do anything more and I don't need to think about it.
A lot of the alternatives being proposed are not so easy to maintain. A full general purpose OS install doesn't really take care of itself. And I don't have (and don't want) a 19-inch rack at home. Ever.
So what's the set-up-and-forget-until-it-gets-kicked-over option?
I came to Synology after years of managing regular Linux (Debian) servers, then Unraid, and then Synology.
Synology was the most expensive thing I’ve used, but I also _never_ think about it. The same could not be said for previous setups.
I want a stupid-easy NAS, plug-and-play, hot-swappable bays. I’m not interested in having to shut down a tower and open it up to swap/add drives.
I have 2x 12-bay Synologys and I haven’t found an equivalent product yet (open to options).
> I’m not interested in having to shut down a tower and open it up to swap/add drives.
How often do you actually do this? In 15 years of running my own NAS boxes, I've so far had to do it once. I, of course, chose slow, middle-of-the-range disks.
I have a similar lack of interest in opening up a tower to swap drives. QNAP has some nice JBOD enclosures. If you don't want any 2.5" drives their biggest enclosures are 8-bays, so you'd need an ATX tower with three available PCIe slots to run all the SFF cables for 3x8 drives. You would need to manage your own software stack with Unraid or w/e.
There are also 3D-printable cases where you buy a SATA backplane and screw it in.
It doesn't solve your software problem (though maybe TrueNAS might work?).
So much this. I left another comment that touched on this.
I want a small reliable box that I just put in the corner and I can forget about for months at a time, as long as it provides me the services I configured it for. I access my NAS UI maybe once every 3 months.
I know exactly how to roll my own NAS (and I'm already rolling my own router), but I just don't want to deal with operating it.
Synology still scores very high on this single metric.
Most of the other commenters do not seem to understand the difference between "low maintenance", "low maintenance by someone very skilled in the art", and "no maintenance".
Things that maintain themselves are amazing and I want more of them in my life. Anything that requires shell commands is out out out. That is for younger people.
I'm not sure I understand. Even a custom Arch install with samba, zfs, NFS, etc would be a "single setup, works forever" deal. It's not like what you configure is going to magically break if you don't look at it.
And security could be an issue, but it's not like Synology is any better there with their old as dirt dependencies.
Snark aside, TrueNAS is probably your best bet. Maybe Unraid? Still, with all of these, it's not like they require constant attention to operate.
A lot of commenters don't seem to understand how much of a pain in the ass rolling your own NAS is. And then dealing with drive failures and expanding the storage pool, which is dead simple with Synology, but is completely hair raising (if not impossible) with other solutions.
There's really nothing that comes close to the hardware + software package Synology offers.
I was looking for alternatives, but nothing else came close to the Syno Photos+Drive+Surveillance+Active Backup package you get with the NAS.
There are alternatives to each, sure, but they mostly need massively more powerful hardware to run a pile of Docker containers, and they end up being alpha quality.
I wish I'd spent some time looking them up before I bought 2 new drives for my old Synology. The new QNAP is so much better. I'd have switched just for the UI already.
UnRaid. I'm currently evaluating it on my old 2014 motherboard.
The WebUI is responsive, though it can be a bit brickish around the edges, requiring you to dive into the logs if something doesn't work; in my case it turned out to be bad RAM on my host preventing KVM from booting. Once it's up and working it sails.
GPU passthrough in a Windows VM is proving incredibly smooth, especially when using Moonshine on FreeBSD.
The Docker ecosystem is a nice addition and the community seems fair. Being able to throw in all my old SSD drives (granted, the basic license only allows six) is nifty and saves them from gathering dust.
It being based off Slackware is pleasing. It is closed source, but so is Synology, and at $100 for a fully unlocked, feature-rich NAS OS - totally.
https://unraid.net/community/apps
> A full general purpose OS install doesn't really take care of itself.
A Debian stable mostly does except on upgrades, and that's rare and painless enough. Even with a Synology you still need to make sure you have proper monitoring in case the hard drives start failing.
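On the monitoring point, smartmontools plus a few lines of cron-able script covers it. A sketch, assuming smartctl is installed and the alert action is adapted to whatever you actually watch:

```python
"""Nightly SMART health check, a sketch (requires smartmontools).

The /dev/sd? glob and the plain-print "alert" are assumptions; swap the
print for mail/ntfy/whatever you actually monitor. Run as root from cron.
"""
import glob
import subprocess

def healthy(dev):
    # smartctl -H prints the drive's overall health self-assessment;
    # ATA drives report "PASSED" when the drive considers itself fine.
    result = subprocess.run(
        ["smartctl", "-H", dev],
        capture_output=True, text=True,
    )
    return "PASSED" in result.stdout

if __name__ == "__main__":
    for dev in sorted(glob.glob("/dev/sd?")):
        if not healthy(dev):
            print(f"ALERT: {dev} failed its SMART health check")
```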
Yes, it is a pain versus having a NAS, but at least I don't have to deal with this kind of stuff.
That's right. I bought a DS1019+ in 2019 and it's been running constantly for 6 years (except for a handful of power outages and house moves). The power adapter did fail last year and I had to buy a dodgy (albeit well-reviewed) third-party replacement.
Are you sure they survive for the time period you intend them to? When I was a teenager, I thought the DVDs and BluRays I burned would last forever - 15 years later I am very unhappy to find that some of them have started to crack and flake - and it's a pain to keep checking them. Nothing like the guarantees a NAS + cloud backup could provide.
I’m a fan of optical storage and its durability (with reasonable care.)
But the problem is when you need to recover and have 20 Blu-ray Discs with important data scattered about, it takes days.
Or when there is a specific piece of data you want/need and you only have a vague idea of where it is/was in history. Maybe if those ultra-capacity discs had taken hold, but it looks like the era of optical is ending.
Starters:
Fractal Define - mid tower - 8 official 3.5" bays, with plenty of open space for more.
Jonsbo cases are the most NAS-like.
OS:
Easy button: FreeNAS. Maybe the newer TrueNAS Core rework. Fine as long as you don't need the latest and greatest in features, and can live with what is, at this point, probably a bunch of unfixed security issues.
Otherwise it's TrueNAS Scale - just avoid the docker/VM system. It's a complete cluster.
I dearly wish Cockpit Project was up to par for this.
For one, a Synology box won't get into the habit of restarting for ten minutes daily because Windows Update managed to break itself and keeps retrying the same update.
But it's true that you could probably leave a desktop on "NAS duty" for years unattended without anything really major happening, especially if it's only accessible on a local network.
Debian and unattended upgrades might need a tweak if you want it to actually reboot by itself, but I think the option is there.
`ssh://pi@raspberrypi.local:raspberry` with "while true; do ls /dev/ | grep ^sd | xargs mount; done" in rc.local, running an outdated 10-year-old Linux kernel booting from ROM with the write-enable pin tied to ground, is all that's needed. There's probably sshfs for everything, so protocol support for a dozen things isn't a must.
I mean, I have one for handling an HDD with a busted power circuit that causes system resets at regular intervals (likely brush sparks from a power-steering motor went back up through USB and killed it). It's almost wrong that there isn't a pre-made solution for this.
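For what it's worth, a slightly less crude version of that rc.local loop is easy to sketch; the filesystem label and mountpoint are assumptions:

```python
"""Remount a flaky USB disk whenever it drops, a sketch.

LABEL and MOUNTPOINT are assumptions. Mounting by filesystem label
survives the disk re-enumerating as a different /dev/sdX after each
reset. Run as root (e.g. from rc.local or a systemd unit).
"""
import os
import subprocess
import time

LABEL = "flakydisk"        # filesystem label of the troublesome drive
MOUNTPOINT = "/mnt/flaky"  # where it should live

while True:
    if not os.path.ismount(MOUNTPOINT):
        # -L mounts by label, so it doesn't matter which sdX it came back as
        subprocess.run(["mount", "-L", LABEL, MOUNTPOINT])
    time.sleep(5)
```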
But self-building a NAS is still a problem, and I'm also talking about this [1] article from the same blog:
There are NO low power NAS boards. I'm talking about something with an ARM CPU, no video, no audio, lots of memory (or SODIMM slot) and 10+ SATA ports.
Sure, anyone can buy a self-powered USB3 hub and add 7 external HDDs to a Raspberry Pi, but that level of performance is really, really low, not to mention the USB random disconnects. And no, port replicators aren't much better.
[1] https://lowendbox.com/blog/are-you-recyling-old-hardware-for...
That would be nice, but Synology doesn't offer that either, no?
The closest thing available now would probably be a Radxa ROCK 5 ITX+, a motherboard with a Rockchip SoC and two M.2 slots, into which you could put their six-port SATA cards. No idea what that whole setup will draw, though.
EDIT: I have to complain about the article you linked. It's certainly true that one should account for power consumption, not just purchase cost, but some crucial mistakes make the article more harmful on the whole.
The author cites 84 W power consumption for an i5-4690, and 10 W for a J4125 CPU, but those figures are the TDP. For all we know, those CPUs could idle at around the same wattage, and from my experience they likely do.
Having done some measuring myself, I'd say the largest source of power draw in an idle NAS will be the PSU, motherboard, and disks. With any remotely recent Intel CPU, it idles so efficiently as to be negligible in a PC.
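To make the TDP-vs-idle distinction concrete, here's the arithmetic as a sketch; the electricity price is an assumption:

```python
"""What continuous power draw costs per year, a sketch.

The 0.30 EUR/kWh price is an assumption; plug in your own tariff.
"""
PRICE_PER_KWH = 0.30  # EUR, assumption

def annual_cost_eur(watts):
    kwh_per_year = watts * 24 * 365 / 1000  # W -> kWh over a year
    return kwh_per_year * PRICE_PER_KWH

# 84 W is the i5-4690's TDP; if it really idled there, the bill would show it.
for watts in (10, 20, 84):
    print(f"{watts:3d} W continuous ~ {annual_cost_eur(watts):6.2f} EUR/year")
```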
> That would be nice, but Synology doesn't offer that either, no?
I have a Synology DS920+ 4-bay that averaged 20W total including 2 spinning drives with sleep disabled. I agonized about going with the closed product, and in many ways regret it. But at the time there was nothing I could find that came close, even without the drives. And that's before factoring my time administering the DIY option, or that it would be bigger and less ruggedized.
I went as far as finding the German low-power motherboard forum spreadsheet and contemplating trying to source some of the years-old SKUs there. You've got to believe us when we say that before the N100s arrived, it was a wasteland for low-power options.
In many ways it still is, although these N100 boards with many SATA ports are a sea change. Once you set out to define requirements, you pretty quickly get to every ZFS explainer saying that you are a fool to even attempt using less than 32 GB of ECC memory...
There are some new NAS boxes hitting the market (UGreen being one of the brands that are cheaper, but also Minisforum) which have solid hardware but aren't locked down at all. They're just x86 boxes with bog-standard hardware so you can just run whatever OS you want, and they support that use case.
To save some clicks:
https://nas.ugreen.com/
https://www.minisforum.com/pages/n5_pro
- Max 48GB*2 DDR5 ECC
- 8-core PRO 8845HS
- 25W with nothing, doing nothing, realistically 50W
- 25G combined network
- 5 M.2 (3x2 and 2x1 lane) and 6 HDDs
- Oculink
https://aoostar.com/blogs/news/the-aoostar-wtr-max11bay-is-a...
Why do you need a bunch of SATA ports? Just get a cheap SAS2 PCIe card on eBay.
There are definitely low-power ARM boards with PCIe lanes. Typically it's NVMe, but you can adapt that to 4x PCIe 3.0, which is a lot of bandwidth for HDDs. Not sure why you need a lot of memory for a NAS though, but they do have boards with 32GB of memory.
What's wrong with this?
https://www.amazon.com/Radxa-5B-Connector-Computer-32GB/dp/B...
And connect a card like this to the NVMe PCIe slot, which you can connect 8 SATA HDDs to with SATA breakout cables:
https://www.ebay.com/itm/155007176276
If you need more than 8 HDDs you can get a SAS2 expander to connect to the SAS2 card and then you could easily connect 24 HDDs with a 6 port SAS2 expander and breakout cables.
Or if you put this small board and card into a server case that has a SAS2 backplane with expander built in, then you can just connect all the disks that way.
Another option, not ARM, but still low power and neat.
https://www.lattepanda.com/lattepanda-sigma
This has Thunderbolt 4, which you can connect to a PCIe slot like this:
https://www.dfrobot.com/product-2832.html
They have a lot of neat stuff; you can get the tiny LattePanda Mu and dock it in this:
https://www.lattepanda.com/lattepanda-mu
https://www.dfrobot.com/product-2822.html
Why don't you look at Topton's N100 boards with 6x SATA, 2.5Gb LAN, and a PCIe slot for extra SATA ports, plus a Jonsbo N3 NAS case? For $300 you'd have a way better NAS than anything Synology offers.
There are PCIe boards that let you hook up 4 SATA ports to the PCIe 3.0 lane on a Raspberry Pi 5. The drives will be a large part of the power draw, so if you want low power, going for 2 drives is probably better. That probably gets you into the 20-30 watt range.
For 10+ SATA ports you might as well get an x86 motherboard, as it's going to draw lots of power anyway.
Unless you plan to power down most of the drives most of the time
I do. Read my other comments.
It's better now. I've been annoyed like you, but the RPi 5 comes in a 16GB variant and has a PCIe port that can be extended to 5x SATA or 2x M.2. It's not blazing, but probably an improvement over my old Celeron J1800 with 8GB of RAM.
The Intel N100 etc. series of machines seems popular with builders, even if the RAM restrictions drive me nuts.
I think the major issue is actually cases: there are tons of small, cheap AMD machines from manufacturers like BeeLink that trounce most NUC setups for performance, but like the NUCs, as soon as there are disk enclosures the price shoots away.
Why? There is no evidence that ARM is the only power efficient CPU. i5, i3 and n100 are all power efficient.
> no video, no audio
Why? Disable onboard video if you care that much.
> lots of memory (or SODIMM slot) and 10+ SATA ports
This eats power, conflicting with the rest of your requests.
> Sure, anyone can buy a self-powered USB3 hub and add 7 external HDDs to a raspbery, but that level of performance is really really low, not to mention the USB random disconnects. And no, port replicators aren't much better.
No, that's not what you do for a power efficient NAS. You build an i3, i5 or n100, turn off all unneeded peripherals, and configure bios as needed to your level of desired power consumption. under 10W is achievable.
> Why? There is no evidence that ARM is the only power efficient CPU. i5, i3 and n100 are all power efficient.
They are, but the motherboard is not, or at least not as much as an ARM board.
> Why? Disable onboard video if you care that much.
And it would boot... how? AFAIK, no UEFI system is capable of booting headless and very very few BIOS systems were.
> No, that's not what you do for a power efficient NAS. You build an i3, i5 or n100, turn off all unneeded peripherals, and configure bios as needed to your level of desired power consumption. under 10W is achievable.
I very much doubt that. An N100 maybe, just maybe, could go lower than 20W if the power supply is very, very efficient, but I haven't seen any such system with 10+ SATA ports. The commonly suggested solution here, adding a server SAS/SATA controller, would double or triple the idle power.
Second-hand Supermicro boards with 6, 8, even 12 SATA ports, 8-16 GB of ECC RAM, a preinstalled CPU, and often a passive radiator are pretty accessible on eBay or suchlike. Of course they are old, but if they did not break after 5-7 years in a server rack, most likely they're not going to break for 10 more years in a less demanding environment at home.
I have some integrated quad-core Celeron board thing. It draws 15 watts. I added a PCIe SATA card which gave me 6 extra ports. I am sure you can buy better ones.
I used a Fractal Node Case that has 6 drive bays. Installed TrueNAS Scale on an SSD. Swapping drives is a pain as I have to take the computer apart. But that is infrequent. So it is fine.
Not for a NAS. Speed is NVMe's benefit, but your network isn't fast enough to take advantage of it, which means you're paying through the nose for very low capacity. 24TB SATA drives are a way better deal for a NAS.
If you need a lot of (not so fast) storage, 3.5" drives are still by far the best TB per €. For a lot of NAS use cases (backups, video/movie/music storage etc.) their performance is completely fine.
Plus, we're most likely talking about Gigabit networking here, so unless your workload consists of very parallel random access, this is going to be the limiting factor anyway.
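To put rough numbers on that, a sketch; all drive throughput figures are ballpark sequential assumptions:

```python
"""Where the bottleneck sits on Gigabit Ethernet, a sketch.

All drive throughput figures are ballpark sequential numbers (assumptions).
"""
GBE_MB_S = 1000 / 8  # Gigabit Ethernet: ~125 MB/s before protocol overhead

drives = {
    "3.5 inch HDD": 250,   # MB/s, outer tracks
    "SATA SSD": 550,
    "NVMe SSD": 3500,
}

# Even a single HDD can saturate the link, so NVMe buys nothing here.
for name, mb_s in drives.items():
    print(f"{name}: {mb_s} MB/s, {mb_s / GBE_MB_S:.1f}x a saturated GbE link")
```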
Low power and reliability is why I want to just use my Mac Mini M4 + DAS as a storage solution for 15 people. I am not sold on it, because the Mac Mini has too much life in it to be solely devoted to this use case.
If it can't run Linux, it's not going to make a good storage server on the software side of things.
I recently got one as a home server. It’s ludicrously power efficient and so powerful. But it’s a battle getting it to behave, with OS fights at every stage.
It took me a week of fighting to get it to reliably power up, connect to network shares and then start some containers.
How could this be hard?
Synology are bad at technical restrictions. That doesn't help most people, and it's not any sort of defense, but anything they strongly attempt to impose here is going to fail. It took me an evening to break the protection they imposed on another layer, and a chunk of that evening was me and a bottle of mezcal just writing INSERT statements into sqlite; we are really not talking about extreme competence.
But! That doesn't matter, most users are never going to be able to do that themselves, and DMCA protections potentially prevent anyone sharing knowledge of how to do so without putting themselves at risk. The truth is that vendors can, under US law, threaten anyone who tells someone how to make the device they bought work properly with federal offences. Buy something else instead.
I don't really get the point of hacking a synology to break this kind of protection. I understand why you'd take one so that you get everything setup for you, but if you're gonna invest time jailbreaking and hacking it, wouldn't you be better off using an old PC with your own linux/software setup as a server?
Actually easier to remove the license restrictions around their RTSP backup software than it is to set up an equivalent thing myself
(Edit: I have a very particular set of skills. Having put some time into making this work with tools I could put together myself and failing, I found that my Synology had a tool that did it perfectly and refused to do so for the number of cameras I had. I fixed that.)
Is it clear what the actual restrictions are here? I have a couple of diskstations and like them and was about to buy another. How does this actually affect me as a practical matter?
It depends on the model you buy. On 2025 models forward, you cannot add unsupported disks to a storage pool. If you bring an existing pool forward from another machine, it will let you continue to use the old disks, but any replacement ones would need to be Synology-branded.
I recently moved all of my NAS needs to a UNAS Pro and just have an old Intel Bean Canyon NUC in the closet running apps on top of it. Portainer is a reasonable Docker frontend, though not perfect -- but more importantly, the storage is just separated from the NAS entirely. As long as your NAS can serve files to a secondary server, the sky is the limit with what you have actually accessing the files and doing things with them.
The particularly jarring thing in this article is the SMB concurrency limits. Those effectively gate your scalability in terms of storage. Even more than forcing their own drives to be used, the concurrent user limit is a clear enterprise upsell: charge people to get a higher limit. The byproduct, of course, is that elaborate home lab connections or setups will also be hit by this.
How’s the UNAS? I’m firmly in the UI ecosystem for networking and security and looked at it earlier this year (I think they released the offering only in the past 12-24 months). But my synology 1221 is fairly new so I have at least 5 (probably more) years left of useful life in it.
UI isn’t without their own faults, but allowing their UniFi OS to run on grey boxes has improved my opinion of them further.
I ran my own NAS for over two decades in some old 4U I got for cheap, using whatever discarded consumer hardware I got for free, and I never got the point of Synology. A colleague who has one said it's compact. Well, this year I bought one of those gaming cube cases (with space for 10 drives - what do people do with them in gaming PCs? OK, only 8 spaces are actually drawers with grommets, but physically you can fit 10) and retired my 4U.
Seriously, it takes an hour to set up your own NAS, and you can mix any drives, set up any encryption you want, run a seedbox, etc. I totally understand convenience, but this is not an email server you're setting up here, it's just a NAS.
I did something similar years ago. A couple of drives in an old beige tower case. Set up the sharing and whatnot. Not exactly 'hard', but it was one more thing. Time consuming. Once you have done it a few times the novelty wears off and it's more of a chore to mess around with the thing. NAS boxes like that 'just work': you plug some drives in, set it up, done. However, one comment in here puts it perfectly: the software on Syno is wildly out of date. It has been for 15 years. Ease of use is now outweighed by having something recent software-wise. The Syno guys are literally leaving a near 20-30% perf uplift on the table. For 'reasons'. Those reasons are wearing very thin. It will mean I need a different backup solution for my computers. One that handles full-disk incrementals and stores Windows and Linux on the remote drive, not something I 'run once in a while', and preferably open source.
20 years ago it was a chore, but nowadays it's faster than baking a cake: 10 minutes prep time (configure the OS and add drives), 10 minutes bake time (installation) - or 10+ hours bake time if you count building the array.
But let's assume you don't have a clue and have to follow some tutorial and do some reading, and it takes you 2 hours. That's amortised across a decade. Especially now, when easy distro upgrades are basically unattended, so you can use the same setup for a decade and stay up to date.
I'm interested in starting out like this; I have a bunch of 2.5" SSDs I'm not using. Do you have any tips on what cube to get? Are you concerned about power usage at all, especially if this is always on?
Any of those cube gaming ones I think are great. I got a dual-chamber one which makes shuffling drives and cabling easy. Can't remember the name, but it was 90 pounds - way more than I paid for the old 4U, although inflation from the 90s probably means that was more expensive in real terms. Most of the power is used to spin rust, so I'm not sure it's worth worrying about the hardware's power use. Just use whatever old PC you can get for free - ask colleagues and family, people throw out working PCs all the time. It's a NAS, not a rendering farm; if it boots it's good enough IMHO.