Nutanix is popular with traditional larger enterprise VMware type customers, Proxmox is popular with the smaller or homelabber refugees. Exceptions exist to each of course.
That people consolidated their business atop VMware's hypervisor, got screwed by Broadcom, and as a result are moving everything to Nutanix (from whom they need to buy the hypervisor, the compute stack, the storage stack, etc.) is insane to me.
You can have a public company that invests in private companies, as opposed to investing in publicly listed companies (like $BRK/Buffett does (in addition to PE stuff)).
Talking to midmarket and enterprise customers, I find nobody is taking Proxmox seriously quite yet, I think due to concerns around support availability and long-term viability. Hyper-V and Azure Local come up a lot in these conversations if you run a lot of Windows (Healthcare in the US is nearly entirely Windows based). Have some folks kicking tires on OpenShift, which is a HEAVY lift and not much less expensive than modern Broadcom licenses.
My personal dark horse favorite right now is HPE VM Essentials. HPE has a terrible track record of being awesome at enterprise software, but their support org is solid and the solution checks a heck of a lot of boxes, including broad support for non-HPE servers, storage, and networking. The solution is priced to move and I expect HPE smells blood in these waters; they've clearly been pouring development resources into the product over the past year.
I've used it professionally back in the 0.9 days (2008), and it was already quite useful and very stable (all advertised features worked).
17 years looks pretty good to me; Proxmox will not go away (neither the product nor the company).
>(Healthcare in the US is nearly entirely Windows based).
This wasn't my experience in over a decade in the industry.
It's Windows dominant, but our environment was typically around a 70/30 split of Windows/Linux servers.
Cerner shops in particular are going to have a larger Linux footprint. Radiology, biomed, interface engines, and med records also tended to have quite a bit of *nix infrastructure.
One thing that can be said is that containerization has basically zero penetration with any vendors in the space. Pretty much everyone is still doing a pets-not-cattle model in the industry.
HPE VM Essentials and Proxmox are essentially UIs/wrappers on top of kvm/virsh/libvirt for the virtualization side.
You can grow out of either by just moving to the underlying stack directly, or you can avoid both for the virtualization part if you don't care about the VMware-like GUI and you're an automation-focused company.
If we could do it 20 years ago, once VT-x landed, for production Oracle EBS instances at a smaller but publicly traded company with an IT team of 4, almost any midmarket enterprise could do it today, especially with modern tools.
It is culture, web-UI requirements, and FUD that cause issues, not the underlying products, which are stable today but hidden from view.
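To make the point concrete, here's a minimal sketch of driving that underlying stack directly with stock libvirt tooling, no Proxmox or VM Essentials on top. VM name, sizes, and ISO path are placeholders; it assumes a Debian-ish host with qemu-kvm, libvirt-daemon-system, and virtinst installed.

```shell
# Create a VM straight against libvirt, the same operation the GUIs wrap:
virt-install \
  --name dev01 \
  --memory 4096 --vcpus 2 \
  --disk size=20 \
  --cdrom /var/lib/libvirt/images/debian-12.iso \
  --os-variant debian12 \
  --network network=default

virsh list --all                            # inventory, like the GUI's tree view
virsh snapshot-create-as dev01 pre-upgrade  # point-in-time snapshot
virsh shutdown dev01                        # clean ACPI shutdown
virsh start dev01
```

Everything the web UIs expose (clone, migrate, snapshot) has a virsh equivalent, which is why an automation-focused shop can script against libvirt directly.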
So with support for OCI container images, does this mean I can run Docker images as LXCs natively in Proxmox? I guess it's an entirely manual process: no mature orchestration like Portainer or even docker-compose, no easy upgrades, manually setting up bind mounts, etc. It would be a nice first step.
Also hoping that this work continues and tooling is made available. I suppose eventually someone could even make a wrapper around it that implements Docker's remote API
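A hypothetical sketch of what that manual flow might look like. skopeo and its `oci-archive:` transport are real; whether `pct create` accepts the archive exactly like this depends on how the new OCI support is wired up, so treat the `pct` lines (VMID, flags, template path) as assumptions.

```shell
# Pull a Docker Hub image into an OCI archive on the Proxmox host (skopeo is real):
skopeo copy docker://docker.io/library/nginx:latest \
  oci-archive:/var/lib/vz/template/cache/nginx.tar

# Hypothetical: hand the archive to pct like a regular container template.
pct create 200 local:vztmpl/nginx.tar --hostname nginx-ct --memory 512 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 200
```

Bind mounts, upgrades (re-pull and recreate), and anything compose-like would indeed be on you, which is the manual process the comment describes.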
The only thing missing, the one making Proxmox difficult in traditional environments, is a replacement for VMware's VMFS (a cluster-aware VM filesystem).
Lots and lots of organizations already have SAN/storage fabric networks presenting block storage over the network which was heavily used for VMware environments.
You could use NFS if your arrays support it, but MPIO block storage via iSCSI is ubiquitous in my experience.
Not really, that works if you want to have converged storage in your hypervisors, but most large VMware deployments I've seen use external storage from remote arrays.
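For an existing SAN, the usual Proxmox answer is to expose the LUN to every node and layer a *shared* LVM volume group on it, registered cluster-wide. A sketch with placeholder portal/target/VG names for your array:

```shell
# Register the iSCSI target so every node sees the LUN:
pvesm add iscsi san01 --portal 10.0.0.10 --target iqn.2001-05.com.example:array1

# Create a volume group on the LUN once (from any node), then register it
# as shared storage so the cluster coordinates access:
pvesm add lvm san01-lvm --vgname vg_san01 --shared 1
```

The catch, and the VMFS gap the parent comment is pointing at: shared thick LVM gives you live migration over the SAN, but not the thin provisioning and snapshot semantics a cluster filesystem like VMFS provides on the same block storage.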
Watching hypervisors slowly improve over the last few years has been amazing. They aren't quite to the point that I will install them under any new hardware I buy and then put my daily driver OS on top, but they are very close. I think a strong focus on creating 'the OS under your OS' experience seamless could open up a lot more here.
I'm not sure I would want my daily driver to be a hypervisor... What's controlling audio? Do I really need audio kernel extensions on my hypervisor? Who's in charge when I shut the lid on my laptop?
But the moment you stop trying to do everything locally Proxmox, as it is today, is a dream.
It's easy enough to spin up a VM, throw a client's docker/podman + other insanity onto it, and have a running dev instance in minutes. It's easy enough to work remotely in your favorite IDE/dev env. Do I need to "try something wild"? Clone it... build a new one... back it up and restore if it doesn't work...
Do I need to emulate production at a finer-grained level than what Docker can provide? Easy enough to build something that looks like production on my Proxmox box.
And when I'm done with all that work... my daily driver laptop and desktop remain free of cruft and clutter.
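The "try something wild" loop above maps onto a handful of Proxmox CLI commands. A sketch, with VMIDs, storage name, and dump path as placeholders for your setup:

```shell
# Full clone of the known-good dev VM to experiment on:
qm clone 100 101 --name scratch --full

# Back it up first, so the experiment is throwaway:
vzdump 101 --storage local --mode snapshot

# ...experiment inside VM 101...

# Didn't work out? Destroy it, or restore the backup over it:
qm stop 101 && qm destroy 101
qmrestore /var/lib/vz/dump/vzdump-qemu-101-2025_01_01-00_00_00.vma.zst 101
```

Meanwhile the laptop that drove all of this never accumulated any of the cruft.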
VMware has been so good and reasonably priced for so long that there hasn't been a competitive market in the enterprise virtualization space for the past two decades. In a way, I think Broadcom's moves here might be healthy for the enterprise datacenter longer term, it has created the opportunity for others to step in and broadened the ecosystem significantly.
As my main desktop computers I've been using Fedora and Windows (for gaming only), virtualised on top of a single Proxmox host with 2 GPUs passed through, for more than 10 years... Upgraded all the way to the latest versions (guests and host) without ever having to reinstall from scratch. I upgraded the hardware a few times (just cloned the disks), and since the desktops are virtualised, Windows always worked fine without complaining about new hardware drivers (the only thing to change was the GPU driver).
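For anyone curious what that passthrough setup roughly involves, here's the usual shape of it on a Proxmox host. The PCI IDs, device address, and VMID are placeholders for your own GPU and VM:

```shell
# 1. Enable the IOMMU on the host kernel command line (GRUB_CMDLINE_LINUX_DEFAULT):
#      intel_iommu=on iommu=pt        # or amd_iommu=on for AMD

# 2. Bind the GPU (and its HDMI audio function) to vfio-pci instead of the
#    host driver, using the vendor:device IDs from `lspci -nn`:
echo "options vfio-pci ids=10de:2484,10de:228b" > /etc/modprobe.d/vfio.conf
update-initramfs -u

# 3. Hand the whole GPU to the VM as a PCIe device, with UEFI firmware:
qm set 100 --machine q35 --bios ovmf
qm set 100 --hostpci0 0000:01:00,pcie=1,x-vga=1
```

Because the guest only ever sees the virtual chipset plus the passed-through GPU, swapping the underlying host hardware doesn't disturb its drivers, which is exactly the longevity described above.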
Another benefit is block-level backups of the VMs (with either qcow2 disk files or ZFS block storage, both of which support snapshots and easy incremental backups of changed blocks only).
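On the ZFS side, the changed-blocks-only part is just an incremental send between two snapshots. A sketch, with dataset names and the backup host assumed:

```shell
# Snapshot the VM's zvol, then again after a day of changes:
zfs snapshot rpool/data/vm-100-disk-0@mon
# ...VM runs, blocks change...
zfs snapshot rpool/data/vm-100-disk-0@tue

# Send only the blocks that changed between @mon and @tue to a backup box:
zfs send -i @mon rpool/data/vm-100-disk-0@tue | \
  ssh backup zfs receive tank/vm-100-disk-0
```

Since ZFS tracks block births natively, the incremental stream is cheap to compute regardless of disk size.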
Proxmox is great for this, although maybe not on a laptop unless you're ready to do a lot of tweaks for sleep, etc.
I have a PC where I installed Proxmox on bare metal and put a daily-use desktop OS on top. It works surprisingly well, the trickiest part was making sure the desktop OS took control of video/audio/peripherals.
Yup my primary Windows machine is a VM and after passing through all the relevant peripherals (GPU, USB) it’s pretty seamless and you’d never know.
Cool part is I needed a more powerful Linux shell than my regular servers (NUCs, etc.) for a one off project, so I spun up a VM on it and instantly had more than enough compute.
For many folk's workflows, I'd wager that hypervisors are there and ready. I had a nice time setting up xcp-ng before deciding microk8s fits my needs more betterer; they're just plum good, well documented, and blazing fast.
I think the possibilities are huge in this area. I'd love to see more 'manager' layers that build on top of any 'cloud' system, even a local one, to give you a standard stack that is easy to move. Imagine something that lives at the hypervisor level (that you trust and that was mature) taking control of your various cloud accounts to merge them and make it easy to migrate from one provider to another. I know that is the promise of Terraform, but we all want a good, consistent interface to play with and then build the automation tools on top of. Maybe that's a good direction for Proxmox: integrating with cloud providers in a seamless way. Anyway, a lot of promise in this area no matter the direction it takes.
15-20 years ago this wouldn't have been a company. It would have been a strong but informal open collaboration where smart and just people, funded by various entities around the world, kept it running.
Then the opportunity to get rich by offering an open source product combined with closed source extras+support was invented. I don't like this new world.
Edit: Somewhere along the line, we also lost the concept of having a sysadmin/developer person working at, say, a municipality contributing like 20% of their time towards maintenance of such projects. Invaluable when keeping things running.
Funny enough, Proxmox VE is 17 years old. I want to say it was ballpark 13-14 years ago I was using it to replace ESXi to get features (HA/Live migration) that only came with expensive licensing. 15-20 years ago there were definitely companies doing exactly this.
Somehow, their web developer managed to break scrolling on Safari, so I am unable to navigate the linked site. If anyone else was looking for a list of what has changed in recent releases, it can be found at https://pve.proxmox.com/wiki/Roadmap
(Perhaps if you're a Microsoft shop you're looking at Hyper-V?)
> KKR & Co. Inc., also known as Kohlberg Kravis Roberts & Co., is an American global private equity and investment company.
* https://en.wikipedia.org/wiki/KKR_%26_Co.
https://youtu.be/4-u4x9L6k1s?t=21
>no mature orchestration
Seems to borrow the LXC tooling... which has a decent command-line tool at least. You could in theory automate against that.
Presumably it'll mature.
And how does Ceph/RBD work over Fibre Channel SANs? (Speaking as someone who is running Proxmox-Ceph (and at another gig did OpenStack-Ceph).)
I thought it might need gpu virtualization?
do you do it with passthrough?
I learned stuff like this years ago with upgrades to debian/ubuntu/etc - upgrading a distribution is a mess, and I've learned not to trust it.
Remember: Not all commercial users are FAANG rich. Counties/local municipalities count as commercial users, as an example.
Adventures in upgrading Proxmox - https://news.ycombinator.com/item?id=45981666 - Nov 2025 (10 comments)