In the next ~6 months I’m going to entirely overhaul my setup. Today I have a NUC6i3 running Home Assistant OS, and a NUC8i7 running OpenMediaVault with all the usual suspects via Docker.
I want to upgrade hardware significantly, partially because I’d like to bring in some local LLM. Nothing crazy, 1-8B models hitting 50tps would make me happy. But even that is going to mean a beefy machine compared to today, which will be nice for everything else too of course.
I’m still all over the place on hardware, part of what I’m trying to decide is whether to go with a single machine for everything or keep them separate.
Idea 1 is a beefy machine and Proxmox with HA in a VM, OMV or TrueNAS in another, and maybe a 3rd straight Debian to separate all the Docker stuff. But I don’t know if I want to add the complexity.
Idea 2 would be a beefy machine running straight OMV/TrueNAS with most stuff on it, and then just move HA over to the existing i7 for more breathing room (mostly for Frigate, which I guess could also be split out to another machine).
I hear a lot of great things about Proxmox, but I’m not sold that it’s worth the new complexity for me. And keeping HA (which is “critical” compared to everything else) separated feels like a smart choice. But keeping it on aging hardware diminishes that anyway, so I don’t know.
Just wanting to hear various opinions I guess.
The one factor that no one seems to have mentioned yet that is key for many of us is LEARNING …
It’s a great way to learn virtualization and containerization
I use it exclusively to run Linux containers, it makes it very convenient to backup and restore as well as replicate environments.
We are now migrating our lab at work away from VMware
Do you need clusters that can fail over from one machine to another? If yes, Proxmox is good. If no, there are less complex options.
I did it purely so I could fully back up my server VM and move it to new hardware when I wanted to upgrade. I just have to install Proxmox, attach the NAS, and pull the VM backup. And just like that everything is back to running just as it was before the upgrade! Now just faster and more energy efficient!
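The move they're describing boils down to a couple of commands. A sketch, assuming VM ID 100 and an NFS storage named "nas" already defined in /etc/pve/storage.cfg on both hosts (the VMID, storage name, and path are examples, not from the post):

```shell
# On the old host: full backup of VM 100 to the NAS-backed storage
vzdump 100 --storage nas --mode snapshot --compress zstd

# On the new host (fresh Proxmox install with the same "nas" storage attached):
# restore the dump under the same VMID and boot it
qmrestore /mnt/pve/nas/dump/vzdump-qemu-100-<timestamp>.vma.zst 100
qm start 100
```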
I will always recommend Proxmox, not just because it’s really easy to add more stuff, but because it’s really safe to tinker with. You take a snapshot, start messing around, and if you break something you just revert to the snapshot
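The snapshot-then-tinker loop being described is just a few commands. A sketch for a VM (ID 100 and the snapshot name are placeholders; LXCs use `pct` with the same subcommands):

```shell
qm snapshot 100 pre-tinker     # take a snapshot before messing around
# ... tinker, break things ...
qm rollback 100 pre-tinker     # broke it? revert to the snapshot
qm delsnapshot 100 pre-tinker  # happy with the result? drop the snapshot
```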
This. Even if you were going to run a bare metal server it’s almost always nicer to install Proxmox and just have a single VM
This is how I run my OPNsense router. Snapshots are great and rebooting is SO much faster!
Uh. OPNsense on bare metal can also do snapshots, if you set it up correctly…
I shy away from VMs because I prefer having a pool of resources on a machine that can be used as needed instead of being pre-allocated. Pre-allocating CPU and RAM, and doing PCI passthrough for GPUs, wastes already limited resources and is extra effort. Yes, the best practice for production k8s is setting resource requests and limits, but it’s not something I want to bother with when I only have one server.
I use Proxmox for work and Hyper-V at home. Looking forward to retiring my old Hyper-V host and replacing it with Proxmox, because Hyper-V is a pain.
Virtualization really helps with reliability. In particular, by allowing you to quickly take snapshots before doing anything destructive and by streamlining backup and recovery.
I use PVE professionally. I could spend some time bitching about how it handles SSH keys and the fragile corosync cluster management. I could complain about the sloppy release cycle and the way they move fast and break shit. Or all the janky shit they’ve slapped together in PBS. I could go on.
But I actually pay for a license for my homelab. And ya, it is THE thing at work now.
I’ve often heard it said that Proxmox isn’t a great option. But it’s the best one.
If you do try it, don’t bother asking questions here.
Go to the source. https://forum.proxmox.com/

Please elaborate. How does it handle SSH keys? And what is fragile regarding corosync?
SSH key management in PVE is handled in a set of secondary files, while the original Debian files are replaced with symlinks. Well, that’s still Debian. And in some circumstances the symlinks get borked or replaced with the original SSH files, the keys get out of sync, and one machine in the cluster can’t talk to another. The really irritating thing is that the tool meant to fix it (pvecm updatecerts) doesn’t work. I’ve got an elaborate set of procedures to gather the certs from the hosts and fix the files when it breaks, but it sucks badly enough that I’ve got two clusters I’m putting off fixing.
Corosync is what the cluster runs on. It backs a shared file system (pmxcfs) that immediately replicates any changes to all members, which is essentially anything under /etc/pve/. Corosync is very latency-sensitive. I believe they ask for 10ms of lag or less between hosts, so it can’t work over a WAN connection. Shit like VM restores or live migrations between hosts can flood it out. Looks fukin awful when it goes down. Your whole cluster goes kaput.
All corosync does is push around this set of config files, so a dedicated NIC is overkill, but in busy environments you might wind up resorting to that anyway. You can put corosync on its own network, but you obviously need a network for that. And you can establish throttles on various types of host file-transfer activity, but that’s a balancing act I’ve only gotten right in our colos, where we only have 1Gb networks. I have my systems provisioned on a dedicated corosync VLAN and also use a secondary IP on a different physical interface, but corosync is too dumb to fall back to the secondary if the primary is still “up”, regardless of whether it’s actually communicating, so I get calls on my day off about “the cluster is down!!!1” when people restore backups.
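For reference, the redundant-link setup being described lives in the corosync config: corosync 3 (kronosnet) supports multiple links per node, declared as extra ringX_addr entries. A hypothetical excerpt (node names and addresses are made up):

```
# /etc/pve/corosync.conf (excerpt)
nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1    # dedicated corosync VLAN
    ring1_addr: 192.168.20.1  # second link on a different physical NIC
  }
}
```

The complaint above is about exactly this layout: the second link exists, but failover only triggers when the primary link is fully down, not when it is nominally "up" but not passing traffic.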
Thanks for your answer.
I’ve been using Proxmox since version 2.1 in my home lab, and since 2020 in production at work. We have not had issues with the SSH files yet. Also corosync is working fine, although it shares its 10G network with Ceph.
In all that time I was not aware of how the certs are handled, despite having had two official Proxmox trainings. Ouch.
Cool.
Here. SSH key issues. There was a huge forum war.
https://forum.proxmox.com/threads/ssh-keys-in-a-proxmox-cluster-resolving-replication-host-key-verification-failed-errors.138102/
But it’s still a thing. That still needs to be fixed by a human. Today that’s me.

Regarding Ceph and corosync on the same network … well, I’m just getting started with that now. I do have them on different VLANs, but it’s the same set of 10Gb NICs. I’m hoping that if it gets really lousy, my netadmin can prioritize the corosync VLAN. I’ll burn that bridge when I come to it.
EDIT … The linked forum post above leads to the SSH key answer, but it’s convoluted.
Here’s what I put in my own wiki.

Get the right key from each server:

cat ~/.ssh/id_rsa.pub

Make sure they match in here; fix em if they don’t:

/etc/pve/priv/authorized_keys

There’s a couple of symlinks to fix too, but this should get it.
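A hypothetical shell version of those wiki steps, to be run on each node. The paths are parameters (the real ones would be ~/.ssh/id_rsa.pub and /etc/pve/priv/authorized_keys) so it can be tried safely first:

```shell
#!/usr/bin/env bash
# Ensure a node's root pubkey is present (exactly once) in the
# cluster-wide authorized_keys file. Sketch only; the symlink
# repair mentioned above is not covered here.
ensure_key() {
  local pubkey_file="$1" auth_file="$2" key
  key="$(cat "$pubkey_file")"
  # grep -x matches the whole line, -F treats the key literally;
  # append only if the exact key is not already there
  if ! grep -qxF "$key" "$auth_file" 2>/dev/null; then
    echo "$key" >> "$auth_file"
  fi
}
```

Usage would be `ensure_key ~/.ssh/id_rsa.pub /etc/pve/priv/authorized_keys`; running it twice won’t duplicate the key.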
In my opinion, Proxmox is worth it for two reasons:
- Easy high-availability setup and control
- Proxmox Backup Server
Those two are what drove me to switch from KVM, and I don’t regret it at all. PBS truly is a fantastic piece of software.
Don’t add a layer of abstraction until you need it, or until you have the free time to learn it well enough that it won’t cause you problems while you experiment.
Don’t use Proxmox, use incus. It’s way easier to run and doesn’t give a care about your storage.
I like Incus a lot, but it’s not as easy to create complex virtual networks as it is with Proxmox, which is frustrating in educational/learning environments.
No backup utility like PBS though, that’s why I haven’t switched.
Like I said, Incus doesn’t care about your storage.
I’ve never used PBS; I’ve always just rolled my own. I currently keep 7 daily, 4 weekly, and 4 monthly. My data mounts are all NFSv4.
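A roll-your-own retention scheme like that 7/4/4 setup usually boils down to a keep-last-N prune per tier. A minimal sketch, assuming backups live in per-tier directories and are named so lexical sort matches chronological order (e.g. backup-2024-05-01.tar.gz); the paths are made up:

```shell
#!/usr/bin/env bash
# Keep the newest $keep files in $dir, delete the rest.
prune() {
  local dir="$1" keep="$2"
  # List newest-first, skip the first $keep lines, remove what's left
  ls -1 "$dir" | sort -r | tail -n +"$((keep + 1))" | while read -r f; do
    rm -f "$dir/$f"
  done
}

# Example tiers (hypothetical paths):
# prune /backups/daily   7
# prune /backups/weekly  4
# prune /backups/monthly 4
```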
Edit: isn’t it possible to use PBS with non-Proxmox systems?
Yeah it sounds nice but too much time investment for me.
I can install PBS client on any system but it requires manual setup and scheduling which I don’t want to do. When used with Proxmox that’s all handled for me.
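The manual setup being avoided there is roughly this. A sketch, assuming a PBS user, host, and datastore that are all made up for the example:

```shell
# One-off backup of the root filesystem with the standalone client
export PBS_REPOSITORY='backup@pbs@pbs.example.lan:store1'
proxmox-backup-client backup root.pxar:/

# Scheduling is on you, e.g. a root cron entry:
# 0 3 * * * proxmox-backup-client backup root.pxar:/ --repository backup@pbs@pbs.example.lan:store1
```

With Proxmox itself, that schedule and the pruning policy are handled in the datacenter backup jobs instead.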
Also, I don’t think Proxmox cares about storage either; I just use ZFS, which is completely standard under the hood.
I need to update my hardware and thought about switching to Proxmox because of all the good things I hear about it. I’m currently on Unraid, but this thing still runs, and it’s the same installation from 7 years ago. It has had zero downtime. Multiple drives, VMs, and Docker containers. Easy to use and rock solid.
Not sure what you’re doing with OMV that couldn’t be done in Proxmox, so feel free to elaborate there.
Almost all my servers are Proxmox (some just Debian, though a few more work-specific solutions are lurking about). For Docker I’d do an LXC, by the way; I wouldn’t bother with a full VM.
My (excessive) setup is all Proxmox, set up as a high-availability cluster. HA runs in a VM, and my USB devices are passed through (technically it’s USB-over-IP extension, so the USB devices for various VMs stay attached even if I have to shut a server down).
It’s where Jellyfin, Audiobookshelf, homepage.dev, a bajillion stupid containers I mostly don’t need, DNS, monitoring and analytics, Mealie (recipe server), various websites I host, etc., etc., all live. Nothing is by itself on a box except my workstations, but for non-Linux use I have VMs I remote into (mostly industry-specific software and random crap like an XP VM to drive an old piece of hardware).
It’s great if you need what it offers. Otherwise, it’s simpler to set up something like Ubuntu Server.
I use Proxmox to run my email service, https://port87.com/, because I can have high-availability services that can move around the different Proxmox hosts. It’s great for production stuff.
I also use it to run my seedbox, because getting graphics in the browser through Proxmox is really easy.
For everything else (my Jellyfin, Nextcloud, etc), I have a server that runs Ubuntu Server and use a docker compose stack for each service.
I had never heard of Port87 before, how do you like it? And I assume you pay no monthly fee by hosting your own domain?
I meant that I made it. :) It’s my own email service, and I run it on Proxmox. So, take this with a grain of salt knowing that I wrote and run it, but I think it’s the best email service by far. I wrote an article about how it works really well for me here:
https://sciactive.com/2023/07/17/the-best-email-for-those-who-struggle-with-organization/
Feel free to sign up for free and try it out. :D
I like Proxmox too; I’m quite happy that I dove in with it. Just one word of warning: if you mount a drive volume in a container, destroy the container, and restore it from a backup, it wipes out the mounted drive. I, uh, lost a bunch of data that way. Not super important data, but still.
I’m still glad I went with Proxmox though. It makes spinning something up a breeze, and I also went with HA in a VM, another Debian VM for Docker, and a bunch of random LXCs.
If you can replicate it, you should really file a bug report so that the next guy doesn’t lose data.
It tells you it will happen when you use the restore backup feature.
Is this separate from a bind mount? Cause that doesn’t happen with bind mounts.
Yeah, not a bind mount. There was a warning, but I was restoring a ton of LXCs and clicked through the warning too fast. My fault, I’m not super sore about it, just warning others as a service to prevent what happened to me!
Fair enough!
I’m running Proxmox and hate it. I still recommend it for what you are trying to do. I think it would work quite nicely. Three of my four nodes have llama.cpp VMs hosting OpenAI-compatible LLM endpoints (llama-server) and I run Claude Code against that using a simple translation proxy.
Proxmox is very opinionated on certain aspects and I much prefer bare metal k8s for my needs.