- Specs are too limited for my needs (storage capacity for backup / home NAS purposes; compute power for local AI work; throughput for local high-speed network traffic shaping; etc.)
- Can't upgrade over time (right now I'm averaging 15 years for my boxes, with incremental upgrades like storage, RAID adapters, memory, CPU, etc., and I don't need to go through the days-long hassle of reformatting, reinstalling and reconfiguring OSes, services and software).
- Less supported over time (whereas with my current boxes I can still download driver updates in some cases, and find solutions if I run into something unexpected, since the vendor is still in business and supporting the legacy model).
Full-sized machines aren't difficult to build, and I've had great luck with second-hand enterprise-targeted parts (e.g., for a long time years back, used Mellanox InfiniBand cards were dirt cheap on eBay because universities were upgrading to later generations; they were an order of magnitude faster than NICs available at competing price points at the time, and as a bonus had lower latency). Older Areca RAID cards were great for SATA drives, easily upgradeable to new models, and I still have a few kicking around in production today.
Meanwhile, neighbors have thrown out piles of e-waste and wasted time after their commodity junk failed unexpectedly.
You can also run a single storage box and just access it over the network (10GbE, Thunderbolt, etc.). One big box of spinning rust and tons of cheap compute.
Most folks are running Proxmox, and your OS installs are automated. Use Ansible. I like Docker Swarm on top of a fleet of cattle VMs on Proxmox.
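For the cattle part, spinning a VM up from a template is a couple of calls against the Proxmox API. Here's a minimal sketch using the third-party proxmoxer library; the host, node name, VM IDs and credentials are all made-up placeholders, not anything from this thread:

```python
# Sketch only: clone a throwaway "cattle" VM from a template via the
# Proxmox REST API, using the proxmoxer library. Host, node, IDs and
# credentials below are placeholder assumptions.
from proxmoxer import ProxmoxAPI

pve = ProxmoxAPI("pve1.lan", user="root@pam",
                 password="...", verify_ssl=False)

# POST /nodes/{node}/qemu/{vmid}/clone
pve.nodes("pve1").qemu(9000).clone.post(
    newid=201,          # VMID for the new clone
    name="cattle-01",   # label it'll show in the Proxmox UI
    full=1,             # full clone instead of a linked clone
)
pve.nodes("pve1").qemu(201).status.start.post()  # boot it
```

From there an Ansible play can pick the fresh VM up and join it to the swarm.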
The ZimaBoard runs pfSense and an nginx reverse proxy, then all six of the mini-PCs run Proxmox. Four mini-PCs run k8s clusters (Talos) and the other two run home services and selected one-offs (Home Assistant, Plex, BookStack, build tools, Gitea, origin servers for a subset of projects).
It was a lot easier to set up than I had expected. It was still a massive PITA, though. I got what I wanted out of it work-wise, and it's a nice little novelty.
I've been thinking about ditching most of it for a while; I like the idea in the article about breaking it up: move one under the TV, one into the office, one under the stairs, and I'm tempted to sell the remaining three plus the ZimaBoard. I'd keep running Proxmox on them, but I wouldn't link them up. The key thing that needs to happen for this to make sense is using something like Cloudflare to route domains.
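The Cloudflare piece could be as simple as one DNS record per box. A rough sketch against the v4 API; the zone ID, token, hostname and IP are placeholders, not real values:

```python
# Rough sketch: point one hostname at one of the split-up boxes by
# creating a proxied A record via Cloudflare's v4 API. ZONE_ID, TOKEN,
# the hostname and the IP are all placeholders.
import requests

ZONE_ID = "..."  # from the Cloudflare dashboard
TOKEN = "..."    # API token scoped to DNS edits

resp = requests.post(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/dns_records",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "type": "A",
        "name": "plex.example.com",   # service on the box under the TV
        "content": "203.0.113.10",    # that box's public IP
        "proxied": True,              # route through Cloudflare's edge
    },
    timeout=10,
)
resp.raise_for_status()
```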
The part I never sorted properly was storage. The cluster has 3TB of storage, but getting that storage into k8s for proper dynamic allocation, without giving random nodes CPU perf issues, was a too-long-for-one-session task, which meant it never got finished. I was tempted to add a NAS, but most NASes are horrid.
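For what it's worth, the usual shape of the dynamic-allocation bit is a StorageClass backed by some provisioner, with workloads making claims against it. A minimal sketch with the official Kubernetes Python client, assuming a class named "nfs-client" (e.g. from nfs-subdir-external-provisioner) already exists; all names and sizes are placeholders:

```python
# Minimal sketch: request dynamically provisioned storage by creating a
# PVC against an assumed StorageClass ("nfs-client"). Claim name,
# namespace and size are placeholders.
from kubernetes import client, config

config.load_kube_config()  # uses ~/.kube/config

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="media-claim"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],
        storage_class_name="nfs-client",  # assumed provisioner-backed class
        resources=client.V1ResourceRequirements(
            requests={"storage": "500Gi"}
        ),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```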
Ceph RBDs are pretty easy and can offer good resilience, but definitely have some performance issues in a standard homelab.
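"Easy" as in: once the cluster is up, an image is a couple of calls. A sketch with the python-rados / python-rbd bindings; the pool and image names are made up:

```python
# Sketch: create a 10 GiB RBD image in an existing Ceph cluster using
# the python-rados / python-rbd bindings. Pool and image names are
# placeholders.
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("rbd")  # pool name
try:
    rbd.RBD().create(ioctx, "homelab-vol", 10 * 1024**3)  # size in bytes
finally:
    ioctx.close()
    cluster.shutdown()
```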
Something dumb like SMB/NFS can actually work quite well if your workload doesn't mind it.
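And dumb really does mean dumb; on the client side it's just a mount (server, export and mountpoint below are made up):

```python
# Trivial sketch: mount an NFS export from the storage box on a compute
# node. Server, export path and mountpoint are placeholders; needs root.
import subprocess

subprocess.run(
    ["mount", "-t", "nfs", "storage.lan:/tank/media", "/mnt/media"],
    check=True,  # put it in /etc/fstab to survive reboots
)
```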
Rclone volumes work quite well for some cases not served by the more obvious solutions, but you inherit the general FUSE limitations.
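For example, mounting a remote with write caching papers over some (not all) of the FUSE issues; the remote and mountpoint here are placeholders:

```python
# Sketch: mount an rclone remote via FUSE with local write caching.
# "gdrive:backups" and the mountpoint are placeholders.
import subprocess

subprocess.run(
    [
        "rclone", "mount", "gdrive:backups", "/mnt/backups",
        "--vfs-cache-mode", "writes",  # buffer writes locally first
        "--daemon",                    # detach after mounting
    ],
    check=True,
)
```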