Posts tagged proxmox

proxmox and lxc containers

posted on 2017-04-28 20:48

Here is the bare minimum for creating containers from the shell; at least that's the intention of this post.

check storage, where to save it

pvesm help  ## show help
pvesm help <command>  ## show help for specific <command>
pvesm status  ## show storages and free space
pvesm list <storage>  ## show current contents

handle image(s) from which to create containers

pveam help  ## show help
pveam help <command>  ## show help for specific <command>
pveam update  ## update template list
pveam available  ## show downloadable images
pveam available -section system  ## you will want this for blank os containers without applications
pveam download <image>  ## get <image>
pveam list <storage>  ## show templates on <storage>
pveam remove <image>  ## delete

provisioning containers

pct help  ## show help
pct help <command>  ## show help for specific <command>
pct list  ## show existing containers on the current HV (other HVs' containers will not be shown)
pveam list <storage> ## show available image templates

pct create ... ## don't do this, use the gui. seriously.
pct list  ## get ids
pct destroy <cid>  ## delete container <cid>

pct config <cid>  ## get current config for <cid>, helps if you really want to shell-script containers

handling containers

At first you will want the GUI to learn your way around. Later on you will read pct help and pct help <command> quite often; the CLI API is pretty decent.

You might also like pct help set and pct set ... for changing container settings.
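As an illustration, a few pct set calls (the container id 101 and all values here are made up; check pct help set for the full option list):

```shell
pct set 101 -memory 1024                          # RAM in MB
pct set 101 -cores 2
pct set 101 -net0 name=eth0,bridge=vmbr0,ip=dhcp  # reconfigure the first NIC
pct config 101                                    # verify the new settings
```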

For now this is sufficient to get up and running.

proxmox: what is an EFI disk?

posted on 2017-01-07 21:22

Recent Proxmox versions let you create an EFI disk. But what exactly is it?

cat'ing or strings'ing the file which represents it on disk is a first try, but sadly does not help much, besides producing output that looks a little like firmware.

According to the proxmox wiki:

BIOS and UEFI In order to properly emulate a computer, QEMU needs to use a firmware. By default QEMU uses SeaBIOS for this, which is an open-source, x86 BIOS implementation. SeaBIOS is a good choice for most standard setups.

There are, however, some scenarios in which a BIOS is not a good firmware to boot from, e.g. if you want to do VGA passthrough. [5] In such cases, you should rather use OVMF, which is an open-source UEFI implementation. [6]

If you want to use OVMF, there are several things to consider:

In order to save things like the boot order, there needs to be an EFI Disk. This disk will be included in backups and snapshots, and there can only be one.

You can create such a disk ...

A long story short:

UEFI, like BIOS, is the onboard firmware on your motherboard that lets you boot anything at all. Both use non-volatile storage on the motherboard: the BIOS stores its settings there, and UEFI probably does just the same, but additionally stores the locations of the start files it uses to boot the operating systems.

OVMF (the UEFI implementation that proxmox uses to emulate a UEFI) cannot use any kind of NVRAM by itself; it simply seems to lack one. By default, it looks for the default start file at /boot/efi/EFI/BOOT/BOOTX64.EFI.

If, however, as after a default debian install, there is no start file to be found there (debian uses /boot/efi/EFI/debian/grubx64.efi), it cannot boot.

Two solutions are possible: Copy grubx64.efi (or whatever it is called) to the BOOTX64.EFI path, if you otherwise use only default settings.

Or use the EFI disk, which should not be so mysterious anymore, and QEMU will simply store the information there. This has the added benefit that several start files can be stored for booting, in case you have several installations within the same VM. But it's easier to create several VMs for that anyway.

stop proxmox nagware

posted on 2017-01-05 05:07

This is said to fix the proxmox 'no valid license' dialog box which appears when you log in to the web interface without a valid subscription:

find /usr/share/pve-manager -name '*.js' -exec sed -i 's/PVE.Utils.checked_command(function\s*()\s*{\s*\(.*\)\s*}\s*)\s*;\s*/\1/g' {} \;
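Before running it on the real files, the substitution can be rehearsed on a scratch file (the file name here is made up; the real targets live under /usr/share/pve-manager; assumes GNU sed):

```shell
# create a sample file containing a wrapped call like the one in pvemanagerlib.js
cat > /tmp/sample.js <<'EOF'
PVE.Utils.checked_command(function () { orig_cmd(); });
EOF
# apply the same substitution the find one-liner uses
sed -i 's/PVE.Utils.checked_command(function\s*()\s*{\s*\(.*\)\s*}\s*)\s*;\s*/\1/g' /tmp/sample.js
# the wrapper should be gone, leaving only the inner call
cat /tmp/sample.js
```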

TODO: I haven't tested it so far; the post will be updated once I can tell more.

proxmox delete and recreate cluster

posted on 2016-12-21 22:48

In case you have the questionable idea of renaming a hypervisor of your proxmox cluster, you are going to feel some pain. (It won't work and you will get scared whether you fucked your system landscape up or not. Been there, done that.)

The only viable and reproducible approach I found was removing all cluster configurations from all HV's, rebooting them, then recreating the cluster on one HV and readding all the others again.

read this, or continue at your own peril

To sum it up, some notes up front:

  • tested with proxmox 4.4
  • do all the next steps on all hosts
  • rebooting all HVs afterwards is necessary (maybe not all of them, but at least the first node had to be rebooted)
  • the sqlite output, if any is shown, should only appear once
  • working SSH between your HVs is necessary; errors or warnings will prevent you from readding nodes after the cluster recreation
  • backup /etc/pve before doing anything, as you will lose all your vm configurations in the process, but these can be copied back afterwards.
  • no guarantee that this post will cover everything

howto remove all cluster configs

Let's go: (this is completely paste-able)

# backup
cp -va /etc/pve /root

systemctl stop pvestatd.service
systemctl stop pvedaemon.service
systemctl stop pve-cluster.service
systemctl stop corosync
systemctl stop pve-cluster
pmxcfs -l
rm /etc/pve/corosync.conf
rm /etc/corosync/*
rm /var/lib/corosync/*
rm -rf /etc/pve/nodes/*
sqlite3 /var/lib/pve-cluster/config.db "select * from tree where name='corosync.conf'"
sqlite3 /var/lib/pve-cluster/config.db "delete from tree where name='corosync.conf'"
sqlite3 /var/lib/pve-cluster/config.db "select * from tree where name='corosync.conf'"

Check for error messages, then:


Recreate the cluster on the first HV: (or whichever one you see fit)

pvecm create CLUSTER-NAME

Then readd all other HVs to your newly created cluster. From each of them, do:

# test ssh
ssh IP-OF-FIRST-HV

# if that works, add the node; else see below for troubleshooting
pvecm add IP-OF-FIRST-HV
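Once readding is done, it might be worth verifying cluster health from any node (standard pvecm subcommands; output not shown here):

```shell
pvecm status   # quorum and membership info
pvecm nodes    # list of cluster members
```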

troubleshooting SSH issues

Adding nodes works best with keyauth (don't know whether I ever tried it without, to be honest, but I doubt it works). In case you have reinstalled a node or something, try connecting via ssh from the host in question to your 'first' HV.

Read the error message closely, as known hosts are stored in /etc/ssh/ssh_known_hosts, not ~/.ssh/known_hosts:

# in case you have trouble on a certain host
> /root/.ssh/known_hosts
> /etc/ssh/ssh_known_hosts
ssh-copy-id FIRST_HV

As said before, ssh errors or warnings won't let you add nodes to a cluster.

browser not working

Once you have completed the steps above, close all browser tabs you had opened to access your cluster. Simply refreshing them does not seem to work.

finishing touches (fix your vms before you become stressed out)

When looking at the web GUI, you might get scared, as all your virtual hosts seem to be missing. This happens with VMs, and I guess the same happens with containers, too.

In fact, we worked on the proxmox cluster filesystem, where proxmox stores a lot of its settings and which gets mounted at /etc/pve afterwards. It happens to be stored completely in /var/lib/pve-cluster/config.db as a sqlite3 database.

It holds all file contents (the actual characters that get written into the config files), the inode of each file to be created, the folder structure, and so on.

Once your cluster is running again, use diff / colordiff to spot the exact differences (i.e. colordiff -r /root/pve /etc/pve to compare the file contents). A simple find /root/pve -iname "*conf" might also do.

Copy the configs back to their original locations, and everything should be fine.

proxmox magic fix script

posted on 2016-12-05 14:52

From here, this link is often handed out in ##proxmox on freenode:


# on all nodes
magicfix() {
        service pve-cluster stop
        service pvedaemon stop
        service cman stop

        service pve-cluster start
        service cman start
        service pvedaemon start

        # this one could possibly restart VMs in 4.x (but doesn't in 3.x), so disable unless you think you need it
        #service pve-manager restart

        service pvestatd restart
        service pveproxy restart
        service pve-firewall restart
        service pvefw-logger restart
}

# again, after the above was done on all nodes (makes /etc/pve rw)
service pve-cluster restart

proxmox vzdump to stdout

posted on 2016-11-21 13:30

Pipe a vzdump directly to STDOUT:

vzdump <VMID> --dumpdir /tmp --mode snapshot --stdout 

The config will be dumped into /tmp, but the dump itself is not saved to disk, so it can easily be piped to nc.
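A sketch of the nc variant (the host name backuphost, port 6000 and VMID 100 are placeholders, and netcat flag syntax varies between implementations):

```shell
# on the receiving machine: listen and write the stream to a file
nc -l -p 6000 > vm100-backup.vma

# on the hypervisor: dump VM 100 straight over the wire
vzdump 100 --dumpdir /tmp --mode snapshot --stdout | nc backuphost 6000
```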

proxmox nat howto

posted on 2016-08-29 08:29

Network Address Translation combined with port forwarding lets you access a VM on a proxmox instance via the IP of the hypervisor itself. A second bridge is added to create the internal network, and the hypervisor is configured to forward packets destined for a certain port to the VM on the internal network. The added bridge is called vmbr1 here, and this was added to our networking config.

This is just an excerpt of the relevant part of the /etc/network/interfaces file there:

auto vmbr1
iface vmbr1 inet static
    bridge_ports none
    bridge_stp off
    bridge_fd 0

    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up iptables -t nat -A POSTROUTING -s -o eth0 -j MASQUERADE
    post-up iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2222 -j DNAT --to

    post-down iptables -t nat -D POSTROUTING -s -o eth0 -j MASQUERADE
    post-down iptables -t nat -D PREROUTING -i eth0 -p tcp --dport 2222 -j DNAT --to 
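The addresses are elided above; with made-up placeholders (internal subnet 10.10.10.0/24 on the bridge, bridge IP 10.10.10.1, VM at 10.10.10.2, forwarding hypervisor port 2222 to the VM's SSH port), the stanza might look like this:

```shell
auto vmbr1
iface vmbr1 inet static
    address 10.10.10.1
    netmask 255.255.255.0
    bridge_ports none
    bridge_stp off
    bridge_fd 0

    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o eth0 -j MASQUERADE
    post-up iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2222 -j DNAT --to 10.10.10.2:22

    post-down iptables -t nat -D POSTROUTING -s 10.10.10.0/24 -o eth0 -j MASQUERADE
    post-down iptables -t nat -D PREROUTING -i eth0 -p tcp --dport 2222 -j DNAT --to 10.10.10.2:22
```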

This is the network configuration for the VM in question:

auto eth0
iface eth0 inet static

As soon as the bridge on the HV is started, the masquerading and port forwarding rules are added; they are removed again when the interface goes down.

proxmox usb passthrough to VM

posted on 2016-08-17 09:35

This works while the VM is already running, no reboot needed.

Plug the USB device into your hypervisor.

lsusb to see if it's there:

root@server:~# lsusb
Bus 002 Device 002: ID 8087:8002 Intel Corp. 
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 002: ID 8087:800a Intel Corp. 
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 003: ID 0557:2419 ATEN International Co., Ltd 
Bus 003 Device 002: ID 0557:7000 ATEN International Co., Ltd Hub
Bus 003 Device 004: ID 1058:25a2 Western Digital Technologies, Inc. 
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Here the identifier is 1058:25a2.

Now use qm list to discern your VM's id.

root@server:~# qm list
      VMID NAME                 STATUS     MEM(MB)    BOOTDISK(GB) PID       
     10280 vm_01                running    25600           5580.00 482591    
     10281 vm_02                running    6144             500.00 764248

Enter console via qm monitor <vmid>:

root@server:~# qm monitor 10280
Entering Qemu Monitor for VM 10280 - type 'help' for help

There do info usbhost:

qm> info usbhost
  Bus 3, Addr 3, Port 12.1, Speed 1.5 Mb/s
    Class 00: USB device 0557:2419
  Bus 3, Addr 4, Port 1, Speed 480 Mb/s
    Class 00: USB device 1058:25a2, Elements 25A2

Via the device identifier we know it's bus 3, port 1. Now attach it to the virtual machine:

qm> device_add usb-host,hostbus=3,hostport=1

lsblk from within the VM now shows the plugged-in hard disk.
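If you want to be able to unplug the device again later, you can give it an id at device_add time and remove it with device_del (the id myusb is arbitrary):

```shell
qm> device_add usb-host,id=myusb,hostbus=3,hostport=1
qm> device_del myusb
```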

proxmox and VLANs

posted on 2016-07-15 13:07

This is a howto, with a sample configuration, for a proxmox setup using VLANs. No bonding is used.

  • network:
  • gateway ip:
  • proxmox ip:
  • VM ip:
  • vlan id: 222
  • physical NIC: eth0


The physical NIC is set to manual, as is the corresponding VLAN device. The same goes for the main bridge; only the specific VLAN bridge is of type inet static.

The main bridge uses the physical NIC; the VLAN bridge uses the VLAN adapter on top of the physical NIC.


auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto eth0.222
iface eth0.222 inet manual
    vlan-raw-device eth0

auto vmbr0
iface vmbr0 inet manual
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

auto vmbr0v222
iface vmbr0v222 inet static
    bridge_ports eth0.222
    bridge_stp off
    bridge_fd 0

Naming convention is ethX.VLAN for the physical NIC's VLAN adapter. For the bridge, do vmbrXvVLAN.

Set up more ethX.VLAN / vmbrXvVLAN pairs for more VLANs.


Set up the network as usual, as if no VLAN were in place:

auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug eth0
iface eth0 inet static

Also set the VLAN from within the proxmox interface for your VM's desired adapter. (Tab Hardware in the VM's menu, double-click the Network Device, select the main bridge, which is vmbr0 here, and add the VLAN id in the VLAN Tag field.)


You have to have set up trunking on the physical switch's switchport that your proxmox hardware is using.

If you omit this, no vlan tagging will take place and you will have no connectivity even if your proxmox network config is solid.

proxmox: restoring backups

posted on 2016-06-11 10:42

First, create a backup via the GUI, at the Backup tab of your VM.

Go to the backup save location and copy the backup to wherever you want to restore it. Maybe you need to fix SSH keys to copy the data directly, i.e. append the current host's public key from ~/.ssh/ to ~/.ssh/authorized_keys on the destination machine.

After ssh'ing to the destination hypervisor, do:

qmrestore vzdump-qemu-100-2016_06_06-19_38_36.vma.lzo 616 -storage local

This would restore your backed-up VM image automatically to a 'newly created' VM with ID 616 on your local storage.

proxmox: qemu-img convert

posted on 2016-06-11 10:33

In proxmox you sometimes want to convert images from one type to another.

available types

QCOW2 (KVM, Xen)    qcow2
QED   (KVM)         qed
raw                 raw
VDI   (VirtualBox)  vdi
VHD   (Hyper-V)     vpc
VMDK  (VMware)      vmdk
RBD   (ceph)        rbd


qemu-img convert -f raw -O qcow2 vm-100-disk-1.raw vm-100-disk-1.qcow2

-f specifies the source image format, -O the output format. Look at the manpage to guess why -f is called -f.

proxmox: mount NFS share in proxmox

posted on 2016-04-31 20:56


We have two servers:

  • is the proxmox instance
  • is the NFS server ip

create NFS share on NFS server

On the Server you want to create the NFS share:

Create the folder you want to export:

mkdir -p /srv/export/testmount

vim /etc/exports entry:
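An entry could look like this (the client network 192.0.2.0/24 and the option set are placeholders; see man 5 exports for the details):

```shell
/srv/export/testmount 192.0.2.0/24(rw,sync,no_subtree_check)
```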



# make the export available
exportfs -ar

# show current exports
showmount -e localhost
exportfs -v

If you have a firewall, allow all traffic from


  1. left frame, click on 'datacenter'
  2. tab 'storage', button 'add', choose 'NFS'
  3. id: "servername_of_nfs" or whatever you like
  4. server:
  5. export: /srv/export/testmount
  6. choose everything, if you do not want to filter just disk images, apply

If you then click on your newly added storage in the left frame below a hypervisor, you should be able to use all tabs. Otherwise you will get a 'connection timeout' error of sorts.

proxmox: unable to open database

posted on 2016-02-12 00:29:25


After a reboot a proxmox hypervisor did not come back up properly...

While this may sound like quite a dumb story, it can actually be fun if you like figuring out things. Except that you maybe don't. And the system where this broke was of course a production system where the customer is waiting for you to fix things.

Anyway: You do not really need to reinstall in case you read something like this:

Restarting pve cluster filesystem: pve-cluster[database] crit: found entry with duplicate name (inode = 0000000000000160, parent = 00000000000000F2, name = 'qemu-server')
[database] crit: DB load failed
[main] crit: memdb_open failed - unable to open database '/var/lib/pve-cluster/config.db'
[main] notice: exit proxmox configuration filesystem (-1)

Proxmox stores the configuration of /etc/pve in a sqlite database: /var/lib/pve-cluster/config.db.


So when this happens, there is simply a duplicate entry, which you can fix with regular sqlite foo. Due to the duplicate entry proxmox cannot read the sqlite database, and so it will not know how to create the folders and files to populate /etc/pve, as all the information about the folders and files in this directory tree is saved within sqlite.

See the example below; don't be fooled, the data column is just shown truncated. Either use .mode line, or look up the current settings via .show and change the column's .width so you can read the file content, but never mind for now.

Usually you change the contents of /etc/pve while it is mounted, and the state gets written to the database, too. (I have no idea exactly when, but nonetheless cannot be bothered to look this up in the proxmox sources.)

Without a clean database, proxmox can't mount /etc/pve, and neither can you fix it by copying files. (...)


Open the database and delete one entry if both are duplicates (but save the data in a text editor in case you need it later), or fix it some other way.

It's easy:

sqlite3 /var/lib/pve-cluster/config.db

sqlite> .databases
seq  name             file                                                      
---  ---------------  ----------------------------------------------------------
0    main             /var/lib/pve-cluster/config.db                            
sqlite> .tables

As you can see, there's only a single database in there called main (to be honest, I do not know whether sqlite can handle more), with a single table called tree. There you can simply use regular SQL UPDATE or DELETE statements.

sqlite> .headers on
sqlite> select * from tree;

inode       parent      version     writer      mtime       type        name         data      
----------  ----------  ----------  ----------  ----------  ----------  -----------  ----------
0           0           371         0           1455194525  8           __version__            
2           0           3           0           1410521743  8           user.cfg     user:root@
4           0           5           0           1410521743  8           datacenter.  keyboard: 
6           0           6           0           1410521891  4           priv                   
8           0           8           0           1410521891  4           nodes                  
9           8           9           0           1410521891  4           my_server              
10          9           10          0           1410521891  4           qemu-server            
11          9           11          0           1410521891  4           openvz                 
12          9           12          0           1410521891  4           priv                   
13          6           14          0           1410521892  8           authkey.key  -----BEGIN
15          0           16          0           1410521892  8   -----BEGIN
17          0           18          0           1410521892  8           pve-www.key  -----BEGIN
19          9           20          0           1410521892  8           pve-ssl.key  -----BEGIN
21          6           22          0           1410521892  8           pve-root-ca  -----BEGIN
23          0           24          0           1410521892  8           pve-root-ca  -----BEGIN
25          6           232         0           1455173387  8           pve-root-ca  03
28          9           31          0           1410521892  8           pve-ssl.pem  -----BEGIN
39          0           41          0           1410521892  8           vzdump.cron  # cluster 
137         6           137         0           1410903436  4           lock                   
224         8           224         0           1455173387  4           my_server_0            
225         224         225         0           1455173387  4           qemu-server            
225         224         225         0           1455173387  4           qemu-server            
226         224         226         0           1455173387  4           openvz                 
227         224         227         0           1455173387  4           priv                   
228         224         229         0           1455173387  8           pve-ssl.key  -----BEGIN
230         224         233         0           1455173387  8           pve-ssl.pem  -----BEGIN
260         225         261         0           1455173556  8           101.conf     bootdisk: 
360         6           368         0           1455194525  8           authorized_  # This fil
369         6           371         0           1455194525  8           known_hosts  |1|Z2FUpc+

sqlite> .mode insert
sqlite> select * from tree;

... in the rather longish output now search for the corresponding double line:

INSERT INTO table VALUES(225,224,225,0,1455173387,4,'qemu-server',NULL);

Then delete the double line and reinsert it, so it only appears once.

sqlite> delete from tree where inode=225;

Now, to reinsert, replace table with tree:

sqlite> INSERT INTO tree VALUES(225,224,225,0,1455173387,4,'qemu-server',NULL);

Of course, the described approach only works if you have duplicate entries. If something else is borked, you have to fix that instead, and not just delete an entry from the database.

But if you happen to know what you are doing, this should not pose a problem. Once proxmox can read the database again, you can change its contents by editing the mounted data at /etc/pve, which gets written back to the database. (I don't know the exact moment when this happens.)
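The delete-and-reinsert dance can be rehearsed on a scratch database first (the real file is /var/lib/pve-cluster/config.db; always work on a copy. The column list below just mimics the tree table for illustration, the real schema differs):

```shell
db=/tmp/config-test.db
rm -f "$db"
# scratch table with the same columns as the tree table shown above
sqlite3 "$db" "CREATE TABLE tree (inode INTEGER, parent INTEGER, version INTEGER,
  writer INTEGER, mtime INTEGER, type INTEGER, name TEXT, data BLOB);"
# simulate the corruption: the same row inserted twice
sqlite3 "$db" "INSERT INTO tree VALUES(225,224,225,0,1455173387,4,'qemu-server',NULL);"
sqlite3 "$db" "INSERT INTO tree VALUES(225,224,225,0,1455173387,4,'qemu-server',NULL);"
# the fix: drop both copies, reinsert the row once
sqlite3 "$db" "DELETE FROM tree WHERE inode=225;"
sqlite3 "$db" "INSERT INTO tree VALUES(225,224,225,0,1455173387,4,'qemu-server',NULL);"
sqlite3 "$db" "SELECT count(*) FROM tree WHERE inode=225;"  # prints 1
```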

proxmox: manual cluster migration

posted on 2015-07-13 18:10:53

  1. change drbd on current node to 'active', if needed
  2. service pve-cluster stop
  3. pmxcfs -l
  4. start vm again (qm start <vm-id>)

To rebuild the cluster again:

  • unmount /etc/pve
  • service pve-cluster start

Proxmox user account

posted on 2014-10-28 17:05:12

To create a user account in the proxmox web interface (manager interface of the hypervisor, reachable via https://<server-ip_or_fqdn>:8006) with proper rights management, do this:

  1. create the VM
  2. check that Storage View is NOT chosen in the dropdown menu in the upper left part
  3. click on Datacenter folder on the left tree menu
  4. open tab Users
  5. add user, use Proxmox VE authentication server as realm (This saves the user info in /etc/pve/priv/shadow.cfg, instead of maybe /etc/passwd when using Linux PAM standard authentication.)
  6. choose the VM in the left tree frame
  7. go to tab Permissions
  8. add permission, choosing user and PVEVMAdmin
  9. if the user should be able to create his own VM's, give him the PVEAdmin role

That should be about it.

proxmox partitioning

posted on 2014-09-05 07:12:04

When using the proxmox installer image, usually the partitioning settings are percentages of the disk size.

In cases where you want fixed sizes, e.g. for swap and '/', pass custom boot options when proxmox prompts you with 'boot:':

linux ext4 maxroot=10 swapsize=2

will give you:

  • ext4 as the filesystem (usually ext3 is used)
  • 10 GB /
  • 2 GB swap

A list of the available options (shamelessly stolen from here):

At install time, at the boot prompt you can specify "linux" followed by one or more optional parameters, see below:

Default partitioning uses ext3, with only ext4 as option. If you want ext4 just type: ext4

To define the total size of the hdd to be used (only Proxmox > 2.3): hdsize=X (where X is in GB, and affects the size of /dev/sda2; /dev/sda1 is small and only for boot). This way you can save free space on the HDD for further partitioning (i.e. for an additional PV and VG on the same hard disk that can be used for LVM storage type).

To define the amount of root partition's disk space: maxroot=X (where X is in GB).

To define the amount of swap partition's disk space: swapsize=X (where X is in GB. Default is the same size as installed RAM, with 4GB minimum and HDSIZE/8 as maximum)

To define the amount of logical volume "data" space (/var/lib/vz): maxvz=X (where X is in GB).

To define the amount of free space left in LVM VG 'pve': minfree=X (where X is in GB, 16GB is the default if storage available > 128GB, 1/8 otherwise)

Example: linux ext4 maxroot=25 swapsize=8 maxvz=400 minfree=32

Last notes, also from the link:

Follow the instructions as always. If you screwed up grub and can't boot from hdd anymore, boot from the installer cdrom and at "boot:" prompt just type: pveboot (NOT preceded by "linux"). This will boot from cdrom and mount LVM root and data partitions.

Proxmox firewall settings

posted on 2014-08-28 14:23:21

To run proxmox properly (including its web interface), these ports are needed in /etc/init.d/firewall:

bin=`which iptables`

$bin -A INPUT -i $I -p tcp -m conntrack --ctstate NEW,ESTABLISHED,RELATED --dport 22 -j ACCEPT
$bin -A INPUT -i $I -p tcp -m conntrack --ctstate NEW,ESTABLISHED,RELATED --dport 80 -j ACCEPT
$bin -A INPUT -i $I -p tcp -m conntrack --ctstate NEW,ESTABLISHED,RELATED --dport 443 -j ACCEPT
$bin -A INPUT -i $I -p tcp -m conntrack --ctstate NEW,ESTABLISHED,RELATED --dport 8006 -j ACCEPT
$bin -A INPUT -i $I -p tcp -m conntrack --ctstate NEW,ESTABLISHED,RELATED --dport 5900 -j ACCEPT
$bin -A INPUT -i $I -p tcp -m conntrack --ctstate NEW,ESTABLISHED,RELATED --dport 5901 -j ACCEPT
$bin -A INPUT -i $I -p tcp -m conntrack --ctstate NEW,ESTABLISHED,RELATED --dport 5902 -j ACCEPT
$bin -A INPUT -i $I -p tcp -m conntrack --ctstate NEW,ESTABLISHED,RELATED --dport 5903 -j ACCEPT
$bin -A INPUT -i $I -p tcp -m conntrack --ctstate NEW,ESTABLISHED,RELATED --dport 5904 -j ACCEPT
$bin -A INPUT -i $I -p tcp -m conntrack --ctstate NEW,ESTABLISHED,RELATED --dport 5905 -j ACCEPT

Proxmox no subscription repo

posted on 2014-08-25 18:05:50

Currently, the Proxmox installer is available in version 3.2. Nowadays the proxmox apt repositories default to the subscription ones (which cost money if you want to use them).

To change this... change the following file to contain these contents:


deb [arch=amd64] wheezy pve-no-subscription

Then apt-get update and apt-get upgrade.

Proxmox - No snapshots?

posted on 2014-07-30 11:38:20

Proxmox can create snapshots.

At least in theory.


  • only .qcow2 will work as the file format for creating them
  • .raw will not work
  • .vmdk will not work
  • converted images (to .qcow2) will also not work


Proxmox remove subscription message

posted on 2014-07-23 12:23:45

Nowadays proxmox will nag you with an annoying popup message window, since they want to sell subscriptions. (The one telling you 'You do not have a valid subscription for this server. Please visit to get a list of available options.'...) See here, in case you ponder buying something.

For all the others like us, connect to your proxmox instance (i.e. ssh), and do this:

$ cd /usr/share/pve-manager/ext4/
$ cp pvemanagerlib.js pvemanagerlib.js.bu   ## creating backups is good style

Then open pvemanagerlib.js in an editor of your choice, and edit line 519 (currently that's where the change has to be made, as of 7/2014):

The code in question looks like this:

 1      checked_command: function(orig_cmd) {
 2          PVE.Utils.API2Request({
 3              url: '/nodes/localhost/subscription',
 4              method: 'GET',
 5              //waitMsgTarget: me,
 6              failure: function(response, opts) {
 7                  Ext.Msg.alert(gettext('Error'), response.htmlStatus);
 8              },
 9              success: function(response, opts) {
10                  var data =;
12                  if (data.status !== 'Active') {
13            {
14                          title: gettext('No valid subscription'),
15                          icon: Ext.Msg.WARNING,
16                          msg: PVE.Utils.noSubKeyHtml,
17                          buttons: Ext.Msg.OK,
18                          callback: function(btn) {
19                              if (btn !== 'ok') {
20                                  return;
21                              }
22                              orig_cmd();
23                          }
24                      });
25                  } else {
26                      orig_cmd();
27                  }
28              }
29          });
30      },

Change line 12 in the above example from:

if (data.status !== 'Active') {

to:

if (false) {

and things are fixed.


Unless otherwise credited all material Creative Commons License by sjas