1 Introduction

Proxmox VE is based on Debian, therefore the disk image (ISO file) provided by Proxmox includes a complete Debian system (« stretch » for version 5.x) as well as all necessary Proxmox VE packages. The installer guides you through the setup, allowing you to partition the local disk(s), apply basic system configuration (e.g. timezone, language, network) and install all required packages.

1.1 Components

1.2 Using the Proxmox VE Installer

You can download the ISO from https://www.proxmox.com/en/downloads. It includes the following:

  • Complete operating system (Debian Linux, 64-bit)
  • The Proxmox VE installer, which partitions the local disk(s) with ext4, ext3, xfs or ZFS and installs the operating system.
  • Proxmox VE kernel (Linux) with LXC and KVM support
  • Complete toolset for administering virtual machines, containers and all necessary resources
  • Web based management interface for using the toolset

1.3 Install Proxmox VE

Start installation with ISO file

You normally select Install Proxmox VE to start the installation.

After that you are prompted to select the target hard disk(s). The Options button lets you select the target file system, which defaults to ext4. The installer uses LVM if you select ext3, ext4 or xfs as the file system, and offers an additional option to restrict the LVM space.

You can also use ZFS as the file system. ZFS supports several software RAID levels, which is especially useful if you do not have a hardware RAID controller. The Options button lets you select the ZFS RAID level and the disks to use, as well as set further ZFS options.

The next page asks for basic configuration options such as your location, the time zone and the keyboard layout. The location is used to select a nearby download server to speed up updates. The installer can usually auto-detect these settings, so you only need to change them in the rare case that auto-detection fails, or when you want to use a keyboard layout not commonly used in your country.

You then need to specify an email address and the superuser (root) password. The password must have at least 10 characters; include lowercase and uppercase letters, numbers and symbols. A generator such as pwgen can help (for example, pwgen 10 prints random 10-character passwords).

The last step is the network configuration. Please note that you can use either IPv4 or IPv6 here, but not both. If you want to configure a dual stack node, you can easily do that after installation.
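For example, a dual-stack setup added after installation could look like the following fragment of /etc/network/interfaces (a minimal sketch; the interface name, addresses and gateways are placeholders you must adapt to your network):

```
auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

iface vmbr0 inet6 static
        address 2001:db8::10/64
        gateway 2001:db8::1
```

Apply the change with ifreload -a (ifupdown2) or a reboot.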

If you press Next now, the installation starts to format the disks and copies the packages to the target. Please wait until this is finished, then remove the installation media and restart your system. Further configuration is done via the Proxmox web interface. Just point your browser to the IP address given during installation (https://ipaddress:8006).


1.4 Qemu-guest-agent

The qemu-guest-agent is a helper daemon which is installed in the guest. It is used to exchange information between the host and the guest, and to execute commands in the guest.

In Proxmox VE, the qemu-guest-agent is mainly used for two things:

  • To properly shut down the guest, instead of relying on ACPI commands or Windows policies
  • To freeze the guest file system when making a backup (on Windows, this uses the volume shadow copy service VSS)

1.4.1 Installation

You have to enable the guest agent for each VM: in the GUI, set it to « Yes » under Options (see screenshot):
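Alternatively, the option can be enabled from the host CLI with qm; a sketch, where 100 is a placeholder VM ID:

```shell
# Enable the QEMU guest agent option for VM 100 (run on the PVE host)
qm set 100 --agent enabled=1
```

As noted below, the change only takes effect after the VM is stopped and started again.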

On Linux you simply have to install the qemu-guest-agent package; please refer to the documentation of your system.
We show here the commands for Debian/Ubuntu and Red Hat based systems:

on Debian/Ubuntu based systems (with apt-get) run:
apt-get install qemu-guest-agent

and on Redhat based systems (with yum):
yum install qemu-guest-agent

You need to stop/start the VM afterwards; a simple reboot is not enough!
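Once the VM has been stopped and started again, you can check from the host that the agent responds; a sketch, assuming VM ID 100:

```shell
# Ping the guest agent of VM 100 (run on the PVE host);
# succeeds once the agent is running inside the guest
qm agent 100 ping
```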

On Windows, you first have to download the virtio-win driver ISO (see Windows VirtIO Drivers).
Then install the virtio-serial driver:

  • Attach the ISO to your Windows VM (virtio-*.iso)
  • Go to the Windows Device Manager
  • Look for "PCI Simple Communications Controller"
  • Right-click -> Update Driver and select the mounted ISO in DRIVE:\vioserial\<OSVERSION>\, where <OSVERSION> is your Windows version (e.g. 2k12R2 for Windows 2012 R2)

After that, you have to install the qemu-guest-agent:

  • Go to the mounted ISO in the Explorer
  • Execute the installer with a double click (either qemu-ga-x64.msi (64-bit) or qemu-ga-x86.msi (32-bit))

After that the qemu-guest-agent should be up and running. You can validate this in the list of Windows Services, or in a PowerShell with:

PS C:\Users\Administrator> Get-Service QEMU-GA

Status   Name               DisplayName
------   ----               -----------
Running  QEMU-GA            QEMU Guest Agent

2 Usage

2.1 Connect to the Web Interface

  1. Open the System Bible to get the credentials, cf. the section above
  2. Click on https://ipaddress:8006

Proxmox – Web Interface

2.2 LUOTM of Proxmox

  1. Before starting the LUOTM, do a TA
  2. (if belonging to a cluster) Move all VMs off the host before starting the upgrade
  3. To execute the LUOTM, connect to the Proxmox management interface and click on Updates

Proxmox – upgrade
  1. Click on Upgrade

Proxmox – upgrade
  1. Then click Yes; when the update process is complete, click Reboot

Proxmox – upgrade
  1. (if belonging to a cluster) Move the VMs back onto the updated host

2.3 Renaming a PVE node

Proxmox VE uses the hostname as the node's name, so changing it works similarly to changing the host name. This must be done on an empty node.

To rename a standalone PVE host, you need to edit:

   /etc/hostname
   /etc/hosts

Replace the entries with your intended hostname.

You will now need to restart the pve-cluster service in order to create the new hostname in the system.

   service pve-cluster restart

This should now create a new host entry in Proxmox and a new folder under

   /etc/pve/nodes/[newhostname]/
You now need to copy your old OpenVZ data from the old folder to the new one.

   mv /etc/pve/nodes/[oldhostname]/openvz/* /etc/pve/nodes/[newhostname]/openvz/


Example:

   mv /etc/pve/nodes/proxmox-portable/openvz/* /etc/pve/nodes/proxmox-Xen/openvz/

Change the node name in the storage configuration:

   vim /etc/pve/storage.cfg

We advise restarting the Proxmox host in order to finish the renaming.

2.4 Enable 10GbE interfaces

10GbE interfaces (using the Intel ixgbe driver with third-party SFP+ modules) are not recognized after installation of Proxmox VE. You have to execute the following commands and then reboot:

echo "options ixgbe allow_unsupported_sfp=1" > /etc/modprobe.d/ixgbe-options.conf
depmod -a
update-initramfs -u
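After the reboot you can verify that the option file is in place and that the interfaces appear; a sketch (reloading the driver briefly drops the network on ixgbe ports, so only do it when they are not in use):

```shell
# Confirm the modprobe option file is in place
cat /etc/modprobe.d/ixgbe-options.conf

# Optionally reload the driver instead of rebooting
modprobe -r ixgbe && modprobe ixgbe

# Check that the 10GbE interfaces now show up
ip -br link
```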

2.5 User management on the CLI

The tool pveum can be used to manage users. Run pveum help to learn more about the syntax of this tool.

  • Example: Change the mail address of the root@pam user:
# pveum user modify root@pam -email operations.unix.reporting@vtx-telecom.ch
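Other pveum subcommands follow the same pattern; for instance, listing the existing users (a sketch, to be run on the PVE host):

```shell
# List all Proxmox VE users and their attributes
pveum user list
```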

2.6 Enable HA migration between servers

cd /etc/pve/
vim datacenter.cfg

Add the following line:

ha: shutdown_policy=migrate

Then restart the service:

service pve-cluster restart

2.7 How to: Regenerate Self-Signed SSL/TLS certificate for Proxmox VE

Log in to a terminal via the web GUI -> Shell, via SSH, or directly on the host.

Use the following commands to regenerate the self-signed SSL/TLS certificate for the Proxmox VE host:

pvecm updatecerts --force
systemctl restart pveproxy
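To check which certificate pveproxy serves after the restart, you can inspect it with openssl. The sketch below demonstrates the x509 inspection on a throwaway self-signed certificate (placeholder CN, files in /tmp) so the commands can be tried anywhere; on the PVE host itself you would feed it the live certificate via s_client instead:

```shell
# Generate a throwaway self-signed certificate and inspect it
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
    -subj "/CN=pve.example" -keyout /tmp/demo.key -out /tmp/demo.crt
openssl x509 -in /tmp/demo.crt -noout -subject -enddate

# On the PVE host, inspect the live certificate instead:
#   echo | openssl s_client -connect localhost:8006 2>/dev/null | openssl x509 -noout -dates
```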

2.8 How to enable thin provisioning on ZFS storage

Reference: https://pve.proxmox.com/wiki/Storage:_ZFS

WARNING: Enabling thin provisioning allows you to overcommit your disk space. Please ensure that proper monitoring of the disk space is in place before doing so. If your ZFS pool runs full, all VMs will experience I/O errors, which may lead to data corruption/loss!

Steps (GUI):

  1. Log in to the web interface and click on the datacenter
  2. Click on the storage menu
  3. Click on the ZFS storage « local-zfs » and click « Edit »
  4. Check the box « Thin provisioning » as follows:

Alternative steps (CLI):

  1. Edit the file /etc/pve/storage.cfg
  2. Inside the local-zfs section add « sparse », like this:
zfspool: local-zfs
        pool local-zfs
        content images,rootdir
        mountpoint /local-zfs
        sparse 1
  • Set the ZFS refreservation to zero on each VM disk on each host, example:
zfs set refreservation=none local-zfs/vm-100-disk-0
  • Verify disk sizes and usage:
zfs list

Note: If the « USED » column still shows the full disk size as used, then please read the section below.
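If you have many VM disks, clearing the refreservation can be done in a loop; a sketch, assuming the disks follow the vm-<id>-disk-<n> naming (review the zfs list output before running it):

```shell
# Clear the refreservation on every VM disk dataset in the pool
zfs list -H -o name -r local-zfs | grep -E '/vm-[0-9]+-disk-' | \
while read -r ds; do
    zfs set refreservation=none "$ds"
done
```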

2.9 How to shrink the disk image

Situation: Thin provisioning (see above) was not enabled and you have already created a couple of VMs on the ZFS storage. You have set the ZFS refreservation to zero as explained above but you still have VM disks that use their full disk size on the storage. Now you want to convert those disks to thin provisioned disks.

Prerequisites: In order to convert VMs to thin provisioning you need to have a 2nd storage pool available (example: NFS pool) with enough disk space to host the VM temporarily. Also, that 2nd storage pool needs to have the « Disk image » feature enabled.

2.9.1 Method 1 (VM is powered off)


  1. Enable thin provisioning on your ZFS storage as explained in the section above
  2. Log in to the web GUI and select the VM you want to migrate
  3. Click on the « Hardware » menu
  4. Click on the hard disk
  5. Edit the disk and enable the « Discard » option
  6. In the menu bar above, select « Move disk »
  7. In the drop-down « Target Storage », select the 2nd storage (example: NFS)
  8. Keep the format « QEMU image » (only possible for NFS storage)
  9. Check the box « Delete source »
  10. Confirm by clicking on « Move disk »
  11. Watch the progress and wait until the disk has been moved
  12. Move the hard disk back to the original ZFS storage and delete the source
  13. Verify that everything is OK and then delete the backup file on the NFS storage again

2.9.2 Method 2 (VM is running or method 1 didn’t work)


  1. Enable thin provisioning on your ZFS storage as explained in the section above
  2. Log in to the web GUI and select the VM you want to migrate
  3. Click on the « Hardware » menu
  4. Click on the hard disk
  5. Edit the disk and enable the « Discard » option
  6. Shut down the VM from the GUI and restart it
  7. Log in to the VM and trim the filesystem:
    1. On Linux: fstrim -av
    2. On FreeBSD: fsck_ufs -Ey /dev/gpt/rootfs (partition name may be different)
  8. In the GUI, go back to the same place again (select the hard disk)
  9. In the menu bar above, select « Move disk »
  10. In the drop-down « Target Storage », select the 2nd storage (example: NFS)
  11. Keep the format « QEMU image » (only possible for NFS storage)
  12. Check the box « Delete source »
  13. Confirm by clicking on « Move disk »
  14. Watch the progress and wait until the disk has been moved
  15. Once the migration has completed, shut down the VM from the GUI
  16. In the host shell, enter the NFS directory, example: /mnt/pve/n4a1
  17. Enter the directory of the VM, example: cd images/105
  18. Make a backup of the VM disk: cp -p vm-105-disk-0.qcow2 vm-105-disk-0.qcow2.backup
  19. Shrink the disk image: qemu-img convert -O qcow2 vm-105-disk-0.qcow2.backup vm-105-disk-0.qcow2
  20. Power on the VM and verify everything is OK
  21. Move the hard disk back to the original ZFS storage and delete the source
  22. Verify that everything is OK and then delete the backup file on the NFS storage again

Example before shrinking:

# zfs list
NAME                      USED  AVAIL     REFER  MOUNTPOINT
local-zfs                 565G  8.94G       96K  /local-zfs
local-zfs/vm-105-disk-0  61.9G  67.6G     3.25G  -

Example after shrinking:

# zfs list
NAME                      USED  AVAIL     REFER  MOUNTPOINT
local-zfs                 195G   379G       96K  /local-zfs
local-zfs/vm-105-disk-0  3.01G   379G     3.01G  -
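The difference between apparent size and actual allocation shown in the examples above can be reproduced with a plain sparse file; a small self-contained demonstration (the file path is a throwaway):

```shell
# A sparse file has a large apparent size but almost no allocated blocks,
# which is exactly what a thin-provisioned disk looks like on the storage.
truncate -s 1G /tmp/sparse-demo.img
du -h --apparent-size /tmp/sparse-demo.img   # apparent size: 1.0G
du -h /tmp/sparse-demo.img                   # actual allocation: ~0
rm /tmp/sparse-demo.img
```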

3 Troubleshooting

3.1 VM has no access to disks after host crash

Situation: A Proxmox node has crashed or unexpectedly rebooted and replication has failed. The VMs were migrated to another node but they can’t be started due to the error: TASK ERROR: timeout: no zvol device link for ‘vm-114-disk-0’ found after 300 sec

Cause: The VM config files were moved to another node but due to the failed replication the disks are not available on that node.


  • Check which node holds the disk for the affected VM:
bus-ind-proxmox-opnsense-02:~ # zfs list
NAME                       USED  AVAIL     REFER  MOUNTPOINT
rpool                     58.1G   663G      104K  /rpool
rpool/ROOT                2.31G   663G       96K  /rpool/ROOT
rpool/ROOT/pve-1          2.31G   663G     2.31G  /
rpool/data                55.5G   663G       96K  /rpool/data
rpool/data/vm-113-disk-0  3.66G   663G     3.66G  -
rpool/data/vm-114-disk-0  3.69G   663G     3.69G  -
rpool/data/vm-115-disk-0  3.60G   663G     3.59G  -

Run this command on all nodes in the Proxmox cluster.

In this example, the node bus-ind-proxmox-opnsense-02 holds the disks for VM 114, but in the GUI we see that the VM is on bus-ind-proxmox-opnsense-01.

  • Next, move the VM configuration file to the node that holds the disks:
mv /etc/pve/nodes/bus-ind-proxmox-opnsense-01/qemu-server/114.conf /etc/pve/nodes/bus-ind-proxmox-opnsense-02/qemu-server/
  • Now you should be able to start the VM again.

3.2 Replication fails due to SSH key mismatch

Situation: A replication task has failed with an error:

2021-07-15 01:52:06 root@ Permission denied (publickey,password).
2021-07-15 01:52:06 ERROR: migration aborted (duration 00:00:00): Can't connect to destination address using public key
TASK ERROR: migration aborted

Cause: The SSH host keys are not properly configured.


  • For every node in the cluster, read the file /root/.ssh/id_rsa.pub and make sure those keys are in /etc/pve/priv/authorized_keys
  • Also check that /root/.ssh/authorized_keys2 is a symlink to /etc/pve/priv/authorized_keys
  • Note: The directory /etc/pve/priv/ is synchronized across the nodes, so you’ll only have to edit it on one host
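The membership check itself is a one-liner with grep; a sketch using the real PVE paths (the logic works on any two files):

```shell
# Check whether this node's public key is present (as a full line)
# in the cluster-wide authorized_keys file
pub=/root/.ssh/id_rsa.pub
auth=/etc/pve/priv/authorized_keys
if grep -qxF "$(cat "$pub")" "$auth"; then
    echo "key present"
else
    echo "key missing - append it with: cat $pub >> $auth"
fi
```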

3.3 Repository error « Unauthorized » [IP: 443]

This problem appears when you want to upgrade the servers and the error « Unauthorized » is displayed.
You have to log in to the Proxmox shop at https://shop.proxmox.ch; the credentials are in syspass under « shop proxmox ».
Then go to Services => My subscription => click on the server that has the problem and click on Reissue.

Proxmox – key_error