Finally managed to get my hands on 2x 1TB NVMes. Budgets are tight these days … :-) They are Crucial P310s … hope they are reliable, although I suspect nowhere near Samsung stuff.
I have a little Proxmox installation running a VM on a 256GB NVMe, which as you can imagine is tight. Is there a way of cloning this install onto one of the new NVMes?
Reason why I have 2x new NVMe is that I want to eventually get myself to Proxmox HA, so that the two machines (two little Optiplex 5070, one of which has the 256GB install) provide me with redundancy.
First thing is to clone the 256GB install to the larger NVMe. Would it be an idea to go this way:
a) install a new 1TB NVMe in the spare Optiplex
b) install Proxmox on this new machine
c) find a way to replicate the whole 256GB install onto the second machine (need to read the docs to see if/how this can happen)
d) once the second machine is up and running as a clone, take down the current 256GB machine and fit it with the other 1TB NVMe
e) repeat the same process the other way around.
Do you think this will work or am I going to hit a wall? Is there a simpler way of doing this?
I like @pgo_lemmy’s answer best, but instead of rebuilding the original system (assuming you did the default ZFS installation), you can add the bigger device as part of a mirror, let it resilver, install the boot loader, and then detach the smaller device from the mirror. It can then grow to the bigger size once the smaller device is removed, and the only downtime you’d have is from physically installing the bigger device. Check the PVE wiki and you should find some details on this method.
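Roughly, the mirror swap looks like the sketch below. This assumes the default ZFS-on-root layout (pool named `rpool`, ZFS data on partition 3, ESP on partition 2) and that the old and new disks show up as `/dev/nvme0n1` and `/dev/nvme1n1`; those device names are examples, so double-check yours before running anything:

```shell
# Copy the partition table from the old disk to the new one, then
# randomize the new disk's partition GUIDs:
sgdisk /dev/nvme0n1 -R /dev/nvme1n1
sgdisk -G /dev/nvme1n1

# Attach the new disk's ZFS partition as a mirror of the old one:
zpool attach rpool /dev/nvme0n1p3 /dev/nvme1n1p3

# Make the new disk bootable:
proxmox-boot-tool format /dev/nvme1n1p2
proxmox-boot-tool init /dev/nvme1n1p2

# Wait until "zpool status rpool" shows the resilver has finished,
# then drop the small disk from the mirror:
zpool detach rpool /dev/nvme0n1p3

# The copied partition table is still sized for the 256GB disk, so
# grow partition 3 to fill the 1TB disk and tell ZFS to use the space:
parted /dev/nvme1n1 resizepart 3 100%
zpool set autoexpand=on rpool
zpool online -e rpool /dev/nvme1n1p3
```

Worth noting the pool only grows to the size of the underlying partition, hence the `parted resizepart` step before expanding.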
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
- HA: Home Assistant automation software ~ High Availability
- LXC: Linux Containers
- NVMe: Non-Volatile Memory Express interface for mass storage
- ZFS: Solaris/Linux filesystem focusing on data integrity
[Thread #299 for this comm, first seen 17th May 2026, 14:40]
> I have a little Proxmox installation running a VM on a 256GB NVMe, which as you can imagine is tight
Not as tight as I was imagining a 256 MB installation to be.
I know nothing about Proxmox, but since it’s quiet in here: I imagine cloning the original drive and then expanding the install to take over the whole new drive is the easier thing to do. It’s a fairly standard process and generally nondestructive, because you can just put the old drive back in if something breaks.
So, I would probably go e) unless you really want to set everything up again from scratch (which is sometimes nice to do)
Add a second node using the new drive, move all vm to the new node, decommission old node, rebuild the old node with the new drive.
You can get away with a disk clone, but in my opinion a VM move is the proper way to go.
Adding a new node, you start with a clean install: any quirk you have on the old hardware will finally be washed away (or will bite you back and get properly documented), you have a quick way back should anything go sideways (the clone provides a quick way back too, but I like this way much more ^^), and you get some hands-on multi-node experience that will be useful for the HA setup.
Agreed, and it helps that with Proxmox the cluster is a first-class feature: every install is a cluster, even if it’s only a single node. That removes a lot of potential pain points from operations like this.
That depends on what level of HA you want to end up with.
If you want proper HA, you’ll want to plan on adding a (small, like a Raspberry Pi) third node for quorum. If you are already taking backups and you just want “I can restore on the second system” then it’s slightly simpler, but mostly the same process:
- Set up the new node and add it to the cluster
- Migrate all VMs and LXCs to the new node
- Remove and upgrade the other node
- Add the rebuilt node back to the cluster
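The steps above map onto Proxmox’s own CLI tools roughly as below; the node name `pve-new` and guest IDs 100/101 are made-up examples, so substitute your own:

```shell
# On the new node, join the existing cluster:
pvecm add <IP-of-existing-node>

# From the old node, move guests across:
qm migrate 100 pve-new --online    # a VM; add --with-local-disks if its disks are on local storage
pct migrate 101 pve-new --restart  # an LXC container

# Once the old node is empty, power it off and remove it from the
# cluster (run this on a node that stays):
pvecm delnode <old-node-name>

# For proper HA quorum with only two full nodes, the third vote can
# come from a small QDevice (e.g. a Raspberry Pi running corosync-qnetd):
pvecm qdevice setup <IP-of-qdevice>
```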
If you’re planning on proper HA, I’d strongly advise putting the Proxmox installation on a second, small drive in each node and leaving your 1TB drives as data only.
This article half-explains one option for a two node setup (zfs replication), which is functional but not ideal. If you want to get your feet wet with Ceph then I can give you some pointers.
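If you do go the ZFS replication route from that article, the scheduling is handled by the built-in `pvesr` tool. A hypothetical example, assuming a guest with ID 100, a second node named `pve2`, and ZFS storage with the same name on both nodes:

```shell
# Replicate guest 100 to node pve2 every 15 minutes
# (job IDs take the form <guest-id>-<number>):
pvesr create-local-job 100-0 pve2 --schedule "*/15"

# Check the state of all replication jobs on this node:
pvesr status
```

Keep in mind this is asynchronous: on failover you can lose up to one replication interval of changes, which is part of why it’s functional but not ideal compared to Ceph.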