Proxmox / Linux - Live-Update Network Configuration with Zero-Downtime

In this short post I want to show you how easy it is to do a live config update of your network without network restarts or machine reboots. I did this just recently on production Proxmox machines (Debian 8 and 9, i.e. Proxmox 4.x and 5.x) without any interruption (zero-downtime).

This post is not meant to be a comprehensive guide, but it shows how simple this can be using the example of a Linux bond and a Proxmox-typical bridge device. The tool that makes all of this possible is ip from the iproute2 package.

What is our goal?

I had some Proxmox machines with multiple physical network interfaces that are teamed together in a so-called bond. There are several types of bonds; I use LACP bonds, active-backup bonds and balance-alb bonds most of the time, but this does not matter for this post.

My problem was that mostly older Proxmox machines, set up one or two years ago, did not have all VLAN interfaces available to the virtual machines, and I wanted to have this consistent again across my fleet. So the goal is to add those VLAN interfaces on top of the bond, a bridge (vmbr) on top of each VLAN interface, and make it available within a virtual machine. All of those steps without any interruption on the Proxmox host or the virtual machines!

What does the current setup look like?

My setup roughly looks like this on all machines.

auto lo
iface lo inet loopback

iface enp4s0 inet manual
bond-master bond0

iface enp5s0 inet manual
bond-master bond0

auto bond0
iface bond0 inet static
slaves enp4s0 enp5s0
bond-miimon 100
bond-mode 4
bond-downdelay 200
bond-updelay 200

auto bond0.100
iface bond0.100 inet manual
vlan_raw_device bond0

auto vmbr100
iface vmbr100 inet manual
bridge_ports bond0.100
bridge_stp off
bridge_fd 0

You can see at the top the loopback interface and the two physical network interfaces, which are only configured to be used as slaves for the main bond interface. Below that is the main bond interface, which listens on the network without a VLAN tag (untagged, i.e. VLAN ID 1, which is the default). It carries the bonding settings, the list of slave NICs and the IP configuration.

Below that is our first VLAN interface, called bond0.100, where 100 is the VLAN ID (tagged). It uses bond0 as the raw interface on which it listens for packets, and it is brought up after bond0 is up.

And below that is a bridge called vmbr100. I name the bridges after the VLAN IDs they represent, so with more VLANs added it's easier to remember. It uses the bond0.100 VLAN interface as a bridge port and sets two bridge options: bridge_stp off disables the Spanning Tree Protocol on the bridge, and bridge_fd 0 sets the bridge forwarding delay to zero.

This configuration, just with about eight more VLANs, has been in use for years on several machines and works quite well.
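Before making any changes, the live state of this stack can be inspected with ip and friends. These are read-only commands and safe to run anytime; the interface names follow the example above:

```shell
# Show the bond with protocol-specific details (-d), e.g. bonding mode
ip -d link show bond0

# Show the VLAN sub-interface and its VLAN ID
ip -d link show bond0.100

# List which ports are attached to which bridge
bridge link show

# The kernel's detailed view of the bond (slaves, link state, LACP info)
cat /proc/net/bonding/bond0
```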

Add one more VLAN interface and bridge for VM usage!

We now want to add one more VLAN interface for VLAN ID 200, plus the matching bridge. So here is the ip magic.

  1. Spawn a new VLAN interface on top of bond0 first with ip link add link bond0 name bond0.200 type vlan id 200
  2. Bring this interface up with ip link set bond0.200 up
  3. Add the bridge interface now with ip link add name vmbr200 type bridge forward_delay 0. With older Proxmox machines the forward_delay 0 config isn't used automatically but you can manually configure this afterwards with brctl setfd vmbr200 0.
  4. Now attach this bridge to the bond0.200 VLAN interface, so that traffic can flow with ip link set dev bond0.200 master vmbr200
  5. Now that everything is set up we can bring up the new bridge with ip link set vmbr200 up
  6. Before we reload the pveproxy service that serves the WebUI, where we want to see the new interfaces, we have to add them manually to the /etc/network/interfaces file, just like the initial setup shown above. Add them below the existing entries and save the file.
  7. Now reload the service with service pveproxy reload and login to the WebUI.
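Collected in order, steps 1 through 5 look like this as a small script (the VLAN ID 200 and the names bond0.200/vmbr200 follow the example above; all commands need root):

```shell
#!/bin/sh
set -e  # abort on the first failing command

# 1. Create the VLAN interface on top of bond0
ip link add link bond0 name bond0.200 type vlan id 200

# 2. Bring the VLAN interface up
ip link set bond0.200 up

# 3. Create the bridge with a forwarding delay of 0
ip link add name vmbr200 type bridge forward_delay 0

# 4. Attach the VLAN interface as a port of the new bridge
ip link set dev bond0.200 master vmbr200

# 5. Bring the bridge up
ip link set vmbr200 up
```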

You should now be able to see the new interfaces and should be able to add them to a running VM straight away.
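For step 6, the stanzas to append to /etc/network/interfaces simply mirror the existing VLAN 100 entries from the initial setup, with 200 substituted:

```
auto bond0.200
iface bond0.200 inet manual
vlan_raw_device bond0

auto vmbr200
iface vmbr200 inet manual
bridge_ports bond0.200
bridge_stp off
bridge_fd 0
```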

On two Debian VMs the new interface (virtio-net) immediately showed up as a new ethX interface. I was then able to configure it with ip as well, for example:

ip addr add <address>/<prefix> dev ethX
ip link set ethX up

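Because these ip changes are runtime-only until they are written to /etc/network/interfaces, they are just as easy to undo. If something goes wrong, the new interfaces can be removed again without touching the rest of the stack (assuming no VM is still using the bridge):

```shell
# Take the bridge down and delete it
ip link set vmbr200 down
ip link delete vmbr200 type bridge

# Then delete the VLAN interface on top of the bond
ip link delete bond0.200
```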

As you can see, it's fairly easy to do zero-downtime network changes when you know the tools that are out there to make your life easier, and ip is definitely one of the hidden gems for network admins.