> ignore current warnings - I’m using a MacBook Pro charger + cable and still got the warning that I need a 5V/5A PSU.
You need to be careful with this one.
The USB spec goes up to 15W (3A) for its 5V PD profiles, and the standard way to get 25W would be to use the 9V profile. I assume the Pi 5 lacks the necessary hardware to convert a 9V input to 5V, and, instead, the Pi 5 and its official power supply support a custom, out-of-spec 25W (5A) mode.
Using an Apple charger gets you the standard 15W mode, and, on 15W, the Pi can only offer 600mA for accessories, which may or may not be enough to power your NVMe. Using the 25W supply, it can offer 1.6A instead, which gives you plenty more headroom.
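To put the two modes in perspective, here's a quick back-of-envelope sketch. The per-mode peripheral limits (600mA and 1.6A) are firmware-imposed figures from the comment above, not something the Pi computes:

```python
# Rough USB power-budget math for the Pi 5. The "usb_limit_a" values are
# the firmware-set peripheral current limits quoted above, not derived here.
MODES = {
    "15W (5V/3A, standard PD)": {"volts": 5, "amps": 3, "usb_limit_a": 0.6},
    "25W (5V/5A, official PSU)": {"volts": 5, "amps": 5, "usb_limit_a": 1.6},
}

for name, m in MODES.items():
    total_w = m["volts"] * m["amps"]
    usb_w = m["volts"] * m["usb_limit_a"]
    print(f"{name}: {total_w} W total, {usb_w:.1f} W available for accessories")
```

So the 25W supply more than doubles the power budget available to an attached NVMe drive.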
5V/5A is not a custom profile. It is part of the USB-PD standard, but as an optional profile that can only be provided if you ensure the cable used is safe to handle 5 amps of current. It's why the official Raspberry Pi 5 PSU has a non-removable cable.
That's good to know, thank you. It's using the official charger in the rack, but I used the charger I've had on my desk while setting it up. I added a note to the article.
I just learned about the whole homelab thing a week ago; it's a much deeper rabbit hole than I expected. I'm planning to set up Proxmox today for the first time, in fact, and retire my Ubuntu Server setup running on a NUC that's been serving me well for the last couple of years.
I hadn't heard about mealie yet, but sounds like a great one to install.
I've set up half a dozen different home labs over the years but never used anywhere near the compute or disk capacity I had. It was more about learning things, I guess. I laughed when he mentioned the number of cores he has available.
I used to have a large server serving a couple important things.
I was able to put everything on a fanless zotac box with a 2.5" sata SSD, and it has served well for many years. (and QUITE a bit less electricity, even online 24/7)
If you want to go down another, related rabbit hole, check out the DataHoarder subreddit. But don't blame me if you end up buying terabytes of storage over the next few months :)
Data hoarding is a bit more involved than just a homelab. You don't want your data hoard to go down or go missing while you're labbing new tech and protocols.
I can vouch for Mealie. My wife and I run it locally for family recipes and to pull down recipes from websites. I have a DNS ad blocker running, but most recipe sites are still a mess to navigate on mobile.
You can also distill recipes down. I find a lot of good recipes online that have a lot of hand-holding within the steps which I can just eliminate.
As others have said, Mealie is an excellent app for any homelab. My wife and I use the meal planning feature and connect it to our Home Assistant calendar that is displayed on a wall-mounted tablet. The ingredient parsing update is amazing and being able to scale recipes up/down is such a time saver.
I've had a ton of fun with CasaOS in the past few months. I don't mind managing docker-compose text files, but CasaOS comes with a simple UI and an "App Store" that makes the process really simple and doesn't overly-complicate things when you want to customize something about a container.
I have Proxmox running on top of a clean Debian install on my NUC. I wanted to allow Plex to use hardware decoding, and it got a bit funny trying to do that with Plex running in a VM, so it runs on the host and I use VMs for other stuff.
I have an Intel (12th Gen i5-12450H) mini PC and at first had issues getting the GPU firmware loaded and working in Debian 12. However, upgrading to Debian 13 (trixie) and doing an apt update and upgrade resolved the issue, and I was able to pass the onboard Intel GPU through Docker to a Jellyfin container just fine. I believe the issue is related to older Linux kernels and GPU firmware compatibility. Perhaps that's your issue.
My most recent learning - DDR4 ECC UDIMMs are comically expensive. To the point where I considered just replacing the entire platform with something RDIMM rather than swapping to ECC sticks.
>No space left on device.
>In other words, you can lock yourself out of PBS. That’s… a design.
Run PBS in an LXC with the base on a ZFS dataset with dedup & compression turned off. If it bombs, you can increase the disk size in Proxmox & reboot it. Unlike VMs, you don't need to do anything inside the container to resize the FS, so this generally works as a fix.
>PiHole
AGH (AdGuard Home) is worth considering because it has built-in DoH
>Raspberry Pi 5, ARM64 Proxmox
Interesting. I'm leaning more towards k8s for integrating pis meaningfully
You seem knowledgeable so you may already know, but it's worth looking at the x86 mini PCs. Performance per watt has gotten pretty close on the newer low power CPUs (e.g. N150, unsure what AMD's line for that is), and performance per $ spent on hardware is way higher. I'm seeing 8GB Pi 5s with a power supply and no SD card for $100; you can get an N150 mini PC with 16GB of RAM and 500GB SSD pre-installed for like $160. Double the RAM, double the CPU performance, and comes with an SSD.
Imo, Raspberry Pis haven't been cost competitive general compute devices for a while now unless you want GPIO pins.
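Using the prices quoted above (street prices vary, so treat these as rough figures, not a benchmark), a quick cost-per-GB-of-RAM comparison:

```python
# Back-of-envelope $/GB-of-RAM comparison using the commenter's quoted prices.
# The N150 box also includes a 500GB SSD, which this simple metric ignores.
pi5 = {"price_usd": 100, "ram_gb": 8}     # Pi 5 kit with PSU, no SD card
n150 = {"price_usd": 160, "ram_gb": 16}   # N150 mini PC with SSD

for name, sys in [("Pi 5 kit", pi5), ("N150 mini PC", n150)]:
    print(f"{name}: ${sys['price_usd'] / sys['ram_gb']:.2f} per GB of RAM")
```

Even before counting the bundled SSD and the faster CPU, the mini PC comes out ahead per GB.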
The first thing I thought when I read this article was how Raspberry Pis just make this kind of thing more difficult and annoying compared to a regular PC, new (e.g. a cheap mini PC) or used (e.g. a used business workstation or just a plain desktop PC).
And if you want GPIO pins, I'd imagine that for a lot of those applications you'd be better served by an ESP32, and that a Raspberry Pi is essentially overkill for many of those use cases.
The Venn diagram for where the pi makes sense seems smaller than ever these days.
Used Intel 8th gen based mini PCs seem like a pretty good value. 100-150 bucks for a pc from a somewhat reputable brand (lenovo, dell, hp) with slightly better multi core than N150 and ~6W idle if you manage to get it to stay in C10. Some of them have a low profile pcie slot, like M720q and M920q. Also the CPU is socketed so you could technically upgrade it to e.g. i9-9900K, at least the M920q is known to take one as long as you use a powerful enough PSU. Few of them (at least M920q) also support coreboot due to an Intel Boot Guard vuln which could be fun, I'm planning to look into whether it could be ported to my M720q as well.
Yeah, I have a collection of mini PCs - they are indeed great. This build was more NAS focused: 9x SATA SSDs and 6x NVMe... mini PCs just don't have the connectivity for that sort of thing
>Imo, Raspberry Pis haven't been cost competitive general compute devices for a while now unless you want GPIO pins.
I have a bunch of Raspberry Pi 4Bs that I'll use for a k8s HA control plane, but yeah, outside of that they're not ideal. Especially with the fragility of an SD card instead of NVMe (unless you buy the silly HAT thing).
> My most recent learning - DDR4 ECC UDIMMs are comically expensive. To the point where I considered just replacing the entire platform with something RDIMM rather than swapping to ECC sticks.
DDR4 anything is becoming very expensive right now because manufacturers have been switching over to DDR5.
Don’t do K8s on Pis. The Pis will spend the majority of their horsepower running etcd, CNI of choice, other essential services (MetalLB, envoy, etc). You’ll be left with a minimal percentage of resources for the pods that actually do things you need outside the cluster.
And don’t get me started on if you intend to run any storage solutions like Rook-Ceph on cluster.
Maybe I just got lucky, but a year or so ago I managed to find Kingston 32GB DDR4 ECC UDIMMs on Amazon for a price that was more or less identical to normal non-ECC RAM. Running a Ryzen system with 128GB of memory now.
I went in thinking that maybe there's something to learn for my grand total of 1 ThinkCentre M910q "homelab", but this author's setup is in another league, I'm sure closer to (or surpassing) the needs of a small/medium company!
You'd be delighted (or terrified) to know that I just added an old gaming computer in a 4U case to the cluster, so I can play with PCI/GPU passthrough.
The Dell is essentially the main machine that runs everything we actually use - the other hardware is either used as redundancy or for experiments (or both). I got the Pi from a work thing and this has been a fun use case. Not that I necessarily recommend it...
I don't understand some home labs. You see a beefy but old rack server with next to no single-thread performance (relatively), usually severely underutilized, running Proxmox (it's a lab, after all), and an RPi doing something for some reason?
Those setups are always pure "home-lab", because they're too small or MacGyvered together for anything but the smallest businesses... where they'd be overkill anyway.
Sometimes it's people running 2-3 node k8s cluster to run a few static workloads. You're not going to learn much about k8s from that, but you will waste CPU cycles on running the infra.
Running stuff on an underpowered Raspberry Pi is a good way to sniff test whether an infrastructure or software setup is sane. Powerful computers can hide horrible decisions for a long time, while less powerful devices make it immediately obvious if you need to switch to a more efficient configuration.
I think this is one of the main reasons why Raspberry Pi has such a strong representation in homelabs, including my own.
I know, that's why I have my own "lab". I just don't get why most other labs are so cookie-cutter: Proxmox + Home Assistant + (Unifi controller) + Pi-hole, and there is always an RPi somewhere next to a chunky server.
You overestimate the complexity. Even if you don't use k8s installer/distribution and do it from scratch - you are not going to learn much about operating/using k8s.
I see people like you on /r/homelab all the time. "What's the point of this?" "k8s is not for you". Shut the fuck up man. It's called homelab for a reason. People can design it however they want, even if it's underutilized. And, you can learn a lot by running k8s cluster at home.
Start with k3s, configure ingress/services/deployment, install ingress controller for cluster wide routing, install service meshes (istio, cilium), write a controller, mess around with gateway API. The possibilities are endless. Stop bitching and maybe have a real argument instead of shitting on homelabbers
I find horizontal scaling with many smaller cores and lots of memory more impactful for virtualization workloads than heavy single core performance (which, fwiw, is pretty decent on these Xeon Golds).
The biggest bottleneck is I/O performance: running full VMs has a lot of disk overhead, and I rely on SAS drives rather than SSDs, but I cannot justify the expense of upgrading to SSDs, not to mention NVMe.
> Those setups are always pure "home-lab", because they're too small or MacGyvered together for anything but the smallest businesses... where they'd be overkill anyway.
That is a core part of the hobby. You do some things very enterprise-y and over-engineered (such as redundant PSUs and UPSes), while simultaneously using old hard drives that rely on SMART monitoring and pure chance to keep working (to pick 2 random examples).
I also re-use old hardware that piles up around the house constantly, such as the Pi. I commented elsewhere that I just slapped an old gaming PC into a 4U case since I want to play/tinker with/learn from GPU passthrough. I would not do this for a business, but I'm happy to spend $200 on a case and rails and stomach an additional ~60W of idle power draw to do so. I don't even know what exactly I'll be running on it yet. But I _do_ know that I know embarrassingly little about how GPUs, X11, VNC, ... actually work, and that I have an unused GTX 1080.
Some of this is simply a build-vs-buy thing (where I get actual usage out of it and have something subjectively better than an off-the-shelf product); the rest is pure tinkering. Hacking, if you will. I know a website that usually likes stuff like that.
> You're not going to learn much about k8s from that
It's possible you and I learn things very differently then (and I mean this a lot less snarky than it sounds). I built Raft from scratch in Scala 3 and that told me a lot about Raft and Scala 3, despite being utterly pointless as a product (it's on my website if you care to read it). I have the same experience with everything home lab / Linux / networking - I always learn something new. And I work for a networking company...
> It's possible you and I learn things very differently then (and I mean this a lot less snarky than it sounds). I built Raft from scratch in Scala 3 and that told me a lot about Raft and Scala 3, despite being utterly pointless as a product (it's on my website if you care to read it). I have the same experience with everything home lab / Linux / networking - I always learn something new. And I work for a networking company...
Building k8s from scratch, you're going to learn how to build k8s from scratch, not how to operate and/or use k8s. Maybe you will learn some configuration management tool along the way, unless your plan is to just copy-paste commands from some website into a terminal.
> find horizontal scaling with many smaller cores and lots of memory more impactful for virtualization workloads than heavy single core performance (which, fwiw, is pretty decent on these Xeon Golds).
Yeah, if you run a VM for everything that should be a systemd service, it scales well that way.
I second the shout out for Mealie, it's very useful. Importing from URLs works very well, and it gives you a centralised place for all your recipes, without ads or filler content and safe from linkrot.
Primarily, docker isn't isolation. Where isolation is important, VMs are just better.
Outside of that:
Docker & k8s are great for sharing resources, VMs allow you to explicitly not share resources.
VMs can be simpler to backup, restore, migrate.
Some software only runs in VMs.
Passing through displays, USB devices, PCI devices, network interfaces etc. often works better with a VM than with Docker.
For my setup, I have a handful of VMs and dozens of containers. I have a proxmox cluster with the VMs, and some of the VMs are Talos nodes, which is my K8s cluster, which has my containers. Separately I have a zimaboard with the pfsense & reverse proxy for my cluster, and another machine with pfsense as my home router.
My primary networking is done on dedicated boxes for isolation (not performance).
My VMs run: plex, home assistant, my backup orchestrator, and a few windows test hosts. This is because:
- The windows test hosts don't containerise well; I'd containerise them if I could.
- plex has a dedicated network port and storage device, which is simpler to set up this way.
- Home Assistant uses a specific USB port & device, which is simpler to set up this way.
- I don't want plex, home assistant, or the backup orchestrator to be affected by issues relating to my other services / k8s. These are the services where small transient or temporary issues would impact the whole household.
Also note, I don't use the proxmox container support (I use talos) for two reasons. 1 - I prefer k8s to manage services. 2 - the isolation boundary is better.
> Primarily, docker isn't isolation. Where isolation is important, VMs are just better.
Better how? What isolation are we talking about, home-lab? Multi-tenant environments for every family member?
> Some software only runs in VMs.
Like OS kernels and software not compiled for host OS?
> Passing through displays, USB devices, PCI devices, network interfaces etc. often works better with a VM than with Docker.
Insane take because we're talking about binding something from /dev/ to a namespace, which is much easier and faster than any VM pass-through even if your CPU has features for that pass-through.
> plex has a dedicated network port and storage device, which is simpler to set up this way.
Same, but my plex is just a systemd unit and my *arrs are in nspawn container also on its own port (only because I want to be able to access them without authentication on the overlay network).
> I don't want plex, home assistant, or the backup orchestrator to be affected by issues relating to my other services / k8s.
Hosting Plex in k8s is objectively wrong, so you're right there. I don't see what adding Proxmox into the mix gets you over running those services as systemd units. If they run on the same node, you're not getting any fault tolerance, just adding another thing that can go wrong (Proxmox).
Maybe my use case is abnormal, but I allocate the majority of my resources to a primary VM where I run everything, including containers, etc. But by running Proxmox I can now back up my entire server and even transfer it across the network. If I ever have some software to try out, I can do it in a new VM rather than on my main host. I can also 'reboot' my 'server' without actually rebooting the real computer, which meant less fan noise and interruption back when I used an actual rack-mounted server at home.
Proxmox is essentially a clustered hypervisor (a KVM wrapper, really). It has some overlap with K8s (via containers), but is simpler for what I do and has some very nice features, namely around backups, redundancy/HA, hardware passthrough, and the fact that it has a usable GUI.
I also use K8s at work, so this is a nice contrast to use something else for my home lab experiments. And tbh, I often find that if I want something done (or something breaks), muscle-memory-Linux-things come back to me a lot quicker than some obscure K8s incantation, but I suppose that's just my personal bias.
Several of my VMs (which are very different than containers, obviously - even though I believe VMs on K8s _can_ be a thing...) run (multiple) docker containers.
For my home archive NAS boxes, Proxmox is just a Debian distro with selective (mostly virtualization) things more up to date, and has ZFS and a web UI out of the box.
I disable the high availability stuff I don’t use that otherwise just grinds away at disks because of all the syncing it does.
It has quirks to work through, but at this point for me dealing with it is fairly simple, repeatable and most importantly, low effort/mental overhead enough for my few machines without having to go full orchestration, or worse, NixOS.
"Why would I need virtualization when I have Kubernetes"... sounds like someone who has never had to update the K8s control plane and had everything go completely wrong. If it happens to you, you will be begging for an HVM with real snapshots.
Personally: Proxmox/VMs are great if you'd like to separate physical HW. In my case, a virtualized TrueNAS means I can give it a whole SATA controller and keep it as an isolated storage machine.
Whatever uses that storage usually runs in Docker inside an LXC container.
If I need something more isolated (think public-facing Cloudflare), that's a separate Docker container on another network, routed through another OPNsense VM.
Desktop: a VM where I passed through a whole GPU and a USB hub.
Best part - it all runs on a fairly low power HW (<20W idle NAS plus whatever the harddrives take - generally ~5W / HDD).
I have a couple of machines that I can spin up if I need the power and performance, but it was amazing to me that the always on services that I'm actually using would run on an N95 mini PC.
YMMV, I'm not saying it's enough for every use case. The CPU will transcode my 1080p media using QSV at ~500 FPS. I don't have enough users to saturate that using Jellyfin.
Edit: did some searching - you probably mean the N95 Intel CPU as the basis for a mini PC, not an actual system called N95. That thing is 30% faster than my server's CPU, which has <10% occupancy, and about half of that comes from the one Windows VM that I need for a legacy application I should really be getting rid of. It's also very recent; instead of buying a new product, you can use way older CPUs, from hardware people are throwing away, that use similarly little power.
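For a rough sense of what ~500 FPS of QSV transcode throughput buys, assuming ~30 fps playback (a common frame rate; actual media and codecs vary):

```python
# Estimate how many simultaneous real-time 1080p transcodes ~500 FPS supports.
# 30 fps playback is an assumption; the ~500 FPS figure is from the comment above.
transcode_fps = 500
playback_fps = 30
concurrent_streams = transcode_fps // playback_fps
print(f"~{concurrent_streams} simultaneous real-time 1080p transcodes")
```

That's roughly 16 concurrent viewers before the iGPU is saturated, which explains why a small household never comes close.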
In my book, that’s a homelab, it's just a small one (an efficient one?...)
The Proxmox Backup Server is the killer feature for me. Incremental and encrypted backups with seamless restoration for LXC and VMs has been amazing.
I also wanted to back up my big honking zpool of media, but it isn't economical to store 10+ TB offsite when the data isn't really that critical.
I'm not even using the features beyond the recipes yet, but I'm already very happy that I can migrate my recipes over there from Google Docs
On the plus side I have a lot of non-ECC DDR4 sticks that I'm dumping into the expensive market rn
Technitium has all the bells and whistles along with being cross platform.
https://technitium.com/dns/
VMs can add a lot of complexity that you don't really need or want to manage.
And (perhaps unadmitted) lots of people bought Pis and then searched for use cases for them.
I'd bet you're in the minority. People use a Pi because it lets them assemble a cluster for under 200 bucks.
That is my argument: you're not learning much if your cluster is 2-3 machines and a few RPis.
> Stop bitching and maybe have a real argument instead of shitting on homelabbers
Why so rude? I have a homelab myself; it's just for running things, not LARPing as a sysadmin.
I don’t get why people use VMs for stuff when there’s docker.
Thanks!
Especially useful if you want multiple of those, and also helpful if you don't want one of them anymore.
300 watts is a lot. ...I didn't use to pay attention, but power costs keep rising.
I recently opted for a i7 F (65w base) over an i7 K (125w base) even with the 15% performance hit.
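A rough sketch of what a sustained 60 W difference costs over a year of 24/7 operation. The $0.30/kWh rate is an assumption for illustration, and TDP is only a loose proxy for actual draw (idle consumption of the two chips is much closer than their TDPs suggest):

```python
# Yearly energy cost of a 60 W difference in sustained power draw.
# Rate is an assumed figure; real draw rarely sits at TDP around the clock.
delta_watts = 125 - 65          # K-series TDP minus F-series TDP
hours_per_year = 24 * 365
kwh_per_year = delta_watts * hours_per_year / 1000
rate_usd_per_kwh = 0.30         # assumption; varies widely by region
print(f"{kwh_per_year:.0f} kWh/year -> ${kwh_per_year * rate_usd_per_kwh:.0f}/year")
```

Even as a worst-case bound, that's a real line item for an always-on box, which makes the 15% performance trade-off easier to accept.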
Did you do the original encoding yourself (to ensure everything is optimized for your rig)?