redundantly · a month ago
I like Proxmox a lot, but I wish it had an equivalent to VMware's VMFS. The last time I tried, there wasn't a way to use shared storage (i.e., iSCSI block devices) across multiple nodes and have a failover of VMs that use that storage. And by failover I mean moving a VM to another host and booting it there, not even keeping the VM running.
aaronius · a month ago
That should have been possible for a while: get the block storage to the node (FC, or configure iSCSI), configure multipathing in most setups, and then configure thick LVM on top and mark it as shared. One nice thing this release brings is that you can finally also take snapshots on such shared storage.
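
Roughly, the flow looks like this (portal address, VG and storage names are placeholders), wrapped in Python so it's easy to rerun per node:

    import subprocess

    def run(*cmd):
        # print and execute each step so failures are obvious
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # On every node: discover and log in to the iSCSI target (portal IP is a placeholder)
    run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", "10.0.0.10")
    run("iscsiadm", "-m", "node", "--login")

    # On ONE node only, after multipath is set up: create the VG on the multipath device
    run("vgcreate", "san-vg", "/dev/mapper/mpatha")

    # Register it cluster-wide as thick LVM and mark it shared
    run("pvesm", "add", "lvm", "san-lvm",
        "--vgname", "san-vg", "--shared", "1", "--content", "images")
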
redundantly · a month ago
I tried that, but had two problems:

When migrating a VM from one host to another, it would require cloning the LVM volume rather than just importing the volume group on the other node and starting the VM up.

I have existing VMware guests that I'd like to migrate over in bulk. This would be easy enough to do by converting the VMDK files, but using LVM means creating an LVM group for each VM and importing the contents of the VMDK into the LV.
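
For the bulk-import part, this is roughly what I'd script (VMIDs, paths and the storage name are made up, and it assumes the VMDKs are reachable from the node and the target VMs already exist):

    import subprocess

    # map of new Proxmox VMIDs to exported VMware disks (purely illustrative)
    disks = {
        120: "/mnt/vmware-export/web01/web01.vmdk",
        121: "/mnt/vmware-export/db01/db01.vmdk",
    }

    for vmid, vmdk in disks.items():
        # assumes the target VM shell already exists (qm create <vmid> ...)
        # convert + import the VMDK into the shared storage; on LVM it becomes a raw LV
        subprocess.run(["qm", "importdisk", str(vmid), vmdk, "san-lvm"], check=True)
        # attach the imported disk (importdisk typically leaves it unattached as vm-<vmid>-disk-0)
        subprocess.run(["qm", "set", str(vmid), "--scsi0", f"san-lvm:vm-{vmid}-disk-0"], check=True)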

SlavikCA · a month ago
Proxmox has built-in support for Ceph, which is promoted as a VMFS equivalent.

I don't have much experience with it, so I can't tell if it's really on the same level.

thyristan · a month ago
Proxmox with Ceph can do failover when a node fails. You can configure a VM as high-availability to automatically make it boot on a leftover node after a crash: https://pve.proxmox.com/wiki/High_Availability . When you add ProxLB, you can also automatically load-balance those VMs.
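
As a rough sketch of the CLI side (group name, node names and VMID below are placeholders):

    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    # define which nodes the VM is allowed to fail over to
    run("ha-manager", "groupadd", "prod", "--nodes", "pve1,pve2,pve3")

    # register the VM as an HA resource so it gets restarted on a surviving node
    run("ha-manager", "add", "vm:100", "--state", "started", "--group", "prod",
        "--max_restart", "2", "--max_relocate", "2")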

One advantage Ceph has over VMware is that you don't need specially approved hardware to run it. Just use any old disks/SSDs/controllers; no special, extra-expensive vSAN hardware.

But I cannot give you a full comparison, because I don't know all of VMware that well.

woleium · a month ago
Yes, you can do this with Ceph on commodity hardware (or even on your compute nodes, if you are brave), or, if you have a bit of cash, with something like a NetApp doing NFS/iSCSI/NVMe-oF.

Use any of these with the built-in HA manager in Proxmox.

redundantly · a month ago
As far as I understand it, Ceph allows you to create distributed storage by using the hardware across your hosts.

Can it be used to format a single shared block device that is accessed by multiple hosts like VMFS does? My understanding is this isn't possible.

keeperofdakeys · a month ago
Unfortunately clustered storage is just a hard problem, and there is a lack of good implementations. OCFS2 and GFS2 exist, but IIRC there are challenges for using them for VM storage, especially for snapshots. Proxmox 9 added a new feature to use multiple QCOW2 files as a volume chain, which may improve this, but for now that's only used for LVM. (Making Proxmox 9 much more viable on a shared iSCSI/FC LUN).

If your requirements are flexible, Proxmox does have one nice alternative though: local ZFS + scheduled replication. This feature performs a ZFS snapshot + ZFS send every few minutes, giving you snapshots on your other nodes. These snapshots can be used for manual HA, auto HA, and even for fast live migration. Not great for databases, but a decent alternative for homelabs and small businesses.
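
Setting it up is basically one command per VM. A rough sketch (VMID, target node and schedule are placeholders):

    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    # replicate VM 100's local ZFS disks to node pve2 every 15 minutes
    run("pvesr", "create-local-job", "100-0", "pve2", "--schedule", "*/15")

    # check replication state
    run("pvesr", "status")

    # a live migration then only has to send the delta since the last snapshot
    run("qm", "migrate", "100", "pve2", "--online", "--with-local-disks")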

pdntspa · a month ago
That, and configuring mount points for read-write access on the host is incredibly confusing and needlessly painful
avtar · a month ago
Proxmox kept coming up in so many homelab-type conversations that I tried 8.x on a mini PC. The impression I got was that the project probably provides the most value in a clustered environment, or even on a single node if someone prefers using a web UI. What didn't seem very clear was an out-of-the-box way to declare VM and container configurations [1] that could then be version-controlled. Common approaches seemed to involve writing scripts or reaching for other tools like Ansible, whereas something like LXD/Incus makes this easier [2] by default. Or maybe I'm missing some details?

[1] https://forum.proxmox.com/threads/default-settings-of-contai...

[2] https://linuxcontainers.org/incus/docs/main/howto/instances_...

m463 · a month ago
I have similar feelings.

I really wish proxmox had nicer container support.

If a Proxmox container config could specify a Dockerfile as an option, I think Proxmox would be 1000% more useful (and successful).

Instead, with LXC and its config files, I feel like I have to put on a sysadmin hat to get a container going. Seems like picking up an adding machine to do my taxes.

(Also, LXC does have a way to specify a container declaratively, but Proxmox doesn't use it.)

Instead, I have written scripts to automate some of this, which helps.

There is also cloud-init, but I found it sort of unfriendly and never went anywhere with it.

cyberpunk · a month ago
There are various terraform providers for proxmox.
whalesalad · a month ago
Yeah, this is the way. You end up treating Proxmox like it is AWS and asserting your desired state against it.
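
To illustrate the idea without Terraform itself: the same assert-desired-state pattern works against the API directly. A rough Python sketch using the proxmoxer client (host, token, and VM parameters are all made up):

    from proxmoxer import ProxmoxAPI

    # connect with an API token (host and credentials are placeholders)
    pve = ProxmoxAPI("pve1.example.com", user="root@pam",
                     token_name="iac", token_value="xxxx-xxxx", verify_ssl=False)

    desired = {"vmid": 100, "name": "web01", "memory": 2048, "cores": 2,
               "net0": "virtio,bridge=vmbr0"}

    # assert desired state: create the VM only if it doesn't exist yet
    existing = {vm["vmid"] for vm in pve.cluster.resources.get(type="vm")}
    if desired["vmid"] not in existing:
        pve.nodes("pve1").qemu.post(**desired)

The Terraform providers handle the diffing, updates and teardown properly; this only shows the create-if-missing half.
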
rcarmo · a month ago
cloud-init support is sorely missed.
argulane · a month ago
Proxmox has had cloud-init support for a while and we have been using it for several years in production.
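
For anyone who hasn't used it: the per-VM setup is just a few settings (VMID, storage name and network values below are placeholders), e.g. via the CLI:

    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    VMID = "100"  # assumes the VM's disk is a cloud-init-ready image

    # attach a cloud-init "CD-ROM" that Proxmox regenerates from the VM config
    run("qm", "set", VMID, "--ide2", "local-lvm:cloudinit")

    # user, SSH key and static IP are injected at first boot via cloud-init
    run("qm", "set", VMID,
        "--ciuser", "admin",
        "--sshkeys", "/root/.ssh/id_ed25519.pub",
        "--ipconfig0", "ip=192.168.1.50/24,gw=192.168.1.1")
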
riedel · a month ago
We are really happy with Proxmox for our 4-machine cluster in the group. We evaluated many things; they were either too light or too heavy for our users and/or our group of hobbyist admins. A while back we also set up a backup server. The forum is also a great resource. I just failed to contribute a pull request via their git email workflow and am now stuck with a non-upstreamed patch to the LDAP sync (BTW, the code there is IMHO not the best part of PVE). In general, while the system works great as a monolith, extending it is IMHO really not easily possible. We have some kludges all over the place (mostly using the really good API) that could be better integrated, e.g. with the UI. At least I did not find a way to, e.g., add a new auth provider easily.
woleium · a month ago
Can’t it use PAM? So many options for providers there.
riedel · a month ago
It was mostly about syncing groups with Proxmox. It worked after patching the LDAP provider to support our schema; the comment was more about the extensibility problem when doing this. Actually, now that you say it, I wonder how PAM could work: I've only ever used it for providing shell access, and we typically do not have any local users on the machines. I've never used PAM in a way that doesn't grant local execution privileges (and avoiding those is the whole point of a VM host).
BLKNSLVR · a month ago
For my homelab I switched from ESXi to Proxmox a few years ago because the consumer-level hardware I mostly used didn't have Intel network cards and ESXi didn't support the Realtek network devices that were ubiquitous in consumer gear at the time.

Love Proxmox, it's done everything I needed of it.

I don't use it anywhere near its potential, but I can vouch for the robustness of its backup process. I've had to restore more than a handful of VMs for various reasons, and they've all been rock-solid upon restoration. I would like to use its high-availability features, but haven't needed them and don't really have the time to tinker so much these days.

BodyCulture · a month ago
Seems like it still has no official support for any kind of disk encryption, so you are on your own if you fiddle it in somehow, and things may break. Such a beautiful, peaceful world where disk encryption is not needed!
stormking · a month ago
Proxmox supports ZFS and ZFS has disk encryption.
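
A rough sketch of how that's typically wired up (dataset and storage names are made up); note that you have to load the key yourself after a reboot:

    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    # create an encrypted dataset on the existing pool (prompts for a passphrase)
    run("zfs", "create", "-o", "encryption=on", "-o", "keyformat=passphrase",
        "rpool/encrypted")

    # expose it to Proxmox as ZFS storage for VM disks and containers
    run("pvesm", "add", "zfspool", "crypt-zfs",
        "--pool", "rpool/encrypted", "--content", "images,rootdir")

    # after a reboot: zfs load-key rpool/encrypted && zfs mount -a
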
cromka · 18 days ago
Even OpenZFS people advise against using their encryption at this point.
aaronius · a month ago
Don't enable it, though, if you rely on the guest replication feature of PVE. See https://bugzilla.proxmox.com/show_bug.cgi?id=2350 for why.
rcarmo · a month ago
In the hypervisor? Because I have plenty of VMs with LUKS and BitLocker.
PeterStuer · a month ago
I have only recently moved to Proxmox, as the Hyper-V licensing became too oppressive for hobby / one-person project use.

Can someone tell me whether Proxmox upgrades are usually smooth sailing, or should I prepare for this being an endeavour?

guerby · a month ago
Proxmox ships a tool that verifies whether everything is ready for the upgrade (e.g. pve8to9), and the wiki documentation is extensive and kept up to date.
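
Concretely, the per-node flow is roughly this (just the gist; the wiki documents the exact repository changes):

    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    # dry-run checker: warns about anything that would break the 8 -> 9 upgrade
    run("pve8to9", "--full")

    # after fixing the warnings and switching the APT repos from bookworm to trixie
    # (see the upgrade wiki for the exact repo changes), the upgrade itself is:
    run("apt", "update")
    run("apt", "dist-upgrade", "-y")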

At work we started with 6.x a few years ago, upgraded to 7.x a bit after its release, then did the same with 8.x, without issue.

We'll wait a reasonable while before upgrading to 9.x, but I don't expect any issues.

Note: same with the integrated Ceph updates; we went from Reef to Squid a few weeks ago with no issues.

thyristan · a month ago
Never had a problem with them. Just put each node in maintenance, migrate the VMs to another node, update, move the VMs back. Repeat until all nodes are updated.
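
Roughly, per node (node names and the VMID are placeholders; the maintenance command needs a reasonably recent PVE):

    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    # put the node into maintenance so the HA stack migrates its HA guests away
    run("ha-manager", "crm-command", "node-maintenance", "enable", "pve2")

    # non-HA guests can be moved by hand (or with a small loop over the VM list)
    run("qm", "migrate", "100", "pve1", "--online")

    # ... update the node, reboot if needed, then let guests come back
    run("ha-manager", "crm-command", "node-maintenance", "disable", "pve2")
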
zamadatix · a month ago
The "update" step is a bit of a "draw the rest of the owl" in the case of major version updates like this 8.x -> 9.x release. It also depends how many features you're using in that cluster as to how complicated the owl is to draw.

That said, I just made it out alright in my home lab without too much hullabaloo.

woleium · a month ago
If you are using hardware passthrough, e.g. for Nvidia cards, you have to update your VMs as well, but other than that it's been pretty painless in my experience (over 15 years).
keeperofdakeys · a month ago
Usually smooth. But if you're running a production workload, definitely do your prep work: working and tested backups, upgrade one node at a time and test, read the release notes, wait a week after major releases, etc. If you don't have a second node, I highly recommend one; Proxmox can do ZFS replication for fast live migrations without shared storage.
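
For the working-backups part, even something this small before each node's turn goes a long way (VMIDs and the backup storage name are placeholders):

    import subprocess

    # snapshot-mode backups of the important guests to a backup storage
    for vmid in ("100", "101"):
        subprocess.run(["vzdump", vmid,
                        "--storage", "backup-store",
                        "--mode", "snapshot",
                        "--compress", "zstd"], check=True)
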
sschueller · a month ago
The official release of Debian Trixie is not until the 9th...
piperswe · a month ago
Trixie is under a heavy freeze right now; just about all that's changing between now and the 9th are critical bug fixes. Yeah, it's not ideal for Proxmox to release an OS based on Trixie this early, but nothing's really going to change in the next few days on the Debian side except for final release ISOs being uploaded.
zozbot234 · a month ago
They might still drop packages between now and the stable release, whereas an official Debian release generally won't drop packages unless they've become totally unusable to begin with.
znpy · a month ago
Debian repositories get frozen months in advance of a release, and pretty much only security patches are imported after that. Maybe some package gets rebuilt, or stuff like that. No breaking changes.

I wouldn't expect many changes, if any at all, between today (Aug 5th) and the expected release date (Aug 9th).

cowmix · a month ago
Yeah, it’s wild how many projects—especially container-based ones—have already jumped to Debian Trixie as their “stable” base, even though it’s still technically in testing. I got burned when linuxserver.io’s docker-webtop suddenly switched to Trixie and broke a bunch of my builds that were based on Bookworm.

As you said, Debian 13 officially lands on August 9, so it’s close—but in my (admittedly limited) experience, the testing branch still feels pretty rough. I ran into way more dependency chaos—and a bunch of missing or deprecated packages—than I expected.

If you’re relying on container images that have already moved to Trixie, heads up: it’s not quite seamless yet. Might be safer to stick with Bookworm a bit longer, or at least test thoroughly before making the jump.

Pet_Ant · a month ago
Yeah, but what is the rush? I mean 1) what if something critical changes, and 2) I could easily see some setting somewhere being at "-rc" which causes a bug later.

Frankly, not waiting half a week is a bright orange flag to me.

guerby · a month ago
Proxmox 8 was also released just before Debian 12 bookworm.
nativeit · a month ago
Still use/love Proxmox daily. Congrats to the team on the latest release!