The hypervisor is the most privileged layer in your infrastructure. A compromised hypervisor means every virtual machine and container it hosts is compromised. In multi-tenant environments — where different teams, customers, or security zones share physical hardware — hypervisor security is the foundation everything else depends on.
Proxmox VE is a powerful open-source virtualization platform built on Debian, KVM, and LXC. Its flexibility is also its risk surface. This guide covers hardening Proxmox for production multi-tenant environments: storage encryption, network segmentation, API security, backup protection, container isolation, and integrity monitoring.
## Storage Security: ZFS Encryption at Rest
Proxmox supports ZFS natively, and ZFS native encryption provides transparent encryption at rest without the performance overhead of full-disk encryption at the block layer.
### Creating Encrypted ZFS Datasets

```bash
# Generate the encryption key first
mkdir -p /etc/zfs/keys
dd if=/dev/urandom of=/etc/zfs/keys/vm-storage.key bs=32 count=1
chmod 600 /etc/zfs/keys/vm-storage.key

# Create an encrypted dataset for VM storage
zfs create -o encryption=aes-256-gcm \
    -o keylocation=file:///etc/zfs/keys/vm-storage.key \
    -o keyformat=raw \
    rpool/encrypted-vms

# Verify encryption is active
zfs get encryption,keystatus rpool/encrypted-vms
```
Protect the key file. If the key is stored on the same disk as the encrypted data, encryption only protects against physical disk theft (which is still valuable for decommissioned drives). For stronger protection, store keys on a separate device or use a key management server:
```ini
# Auto-load keys at boot (requires key file on local disk)
# Save as /etc/systemd/system/zfs-load-key.service
[Unit]
Description=Load ZFS encryption keys
Before=zfs-mount.service
After=zfs-import.target

[Service]
Type=oneshot
ExecStart=/sbin/zfs load-key -a
RemainAfterExit=yes

[Install]
WantedBy=zfs-mount.service
```
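Whatever key storage you choose, the key file itself deserves the same sanity checks you would apply to any credential. A minimal sketch of the checks worth scripting (paths here use a temporary directory for illustration; on a real host the key lives in a root-only location such as /etc/zfs/keys):

```shell
# Generate a 32-byte raw key and verify size and permissions before
# pointing a dataset at it: aes-256-gcm with keyformat=raw requires
# exactly 32 bytes.
KEYDIR="$(mktemp -d)"
KEYFILE="${KEYDIR}/vm-storage.key"

dd if=/dev/urandom of="${KEYFILE}" bs=32 count=1 status=none
chmod 600 "${KEYFILE}"

SIZE="$(stat -c %s "${KEYFILE}")"
PERMS="$(stat -c %a "${KEYFILE}")"
echo "key size: ${SIZE} bytes, mode: ${PERMS}"
```

A key of the wrong length fails at `zfs load-key` time, which is a bad moment to discover it; checking at generation time is cheap.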
### Separate Datasets per Tenant
For multi-tenant environments, create separate encrypted datasets per tenant with unique keys:
```bash
# Tenant A storage
zfs create -o encryption=aes-256-gcm \
    -o keylocation=file:///etc/zfs/keys/tenant-a.key \
    -o keyformat=raw \
    rpool/tenants/tenant-a

# Tenant B storage
zfs create -o encryption=aes-256-gcm \
    -o keylocation=file:///etc/zfs/keys/tenant-b.key \
    -o keyformat=raw \
    rpool/tenants/tenant-b
```
This ensures that if Tenant A’s data is subpoenaed or compromised, Tenant B’s data remains encrypted with a separate key.
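A minimal sketch of generating those per-tenant keys in one pass (temporary directory for illustration; the tenant names are placeholders):

```shell
# One unique 32-byte key per tenant; a shared key would defeat the
# per-tenant isolation described above.
KEYDIR="$(mktemp -d)"
for tenant in tenant-a tenant-b; do
    dd if=/dev/urandom of="${KEYDIR}/${tenant}.key" bs=32 count=1 status=none
    chmod 600 "${KEYDIR}/${tenant}.key"
done

# Sanity check: cmp exits non-zero when the keys differ, as they must
cmp -s "${KEYDIR}/tenant-a.key" "${KEYDIR}/tenant-b.key" && echo "DUPLICATE KEYS" || echo "keys are unique"
```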
## Network Segmentation Between Guests
Default Proxmox networking places all VMs on a shared Linux bridge. In multi-tenant environments this is unacceptable: guests share a broadcast domain, and techniques like ARP spoofing or MAC flooding let one tenant intercept another's traffic.
### VLAN-Based Segmentation
```
# /etc/network/interfaces

# Physical uplink
auto eno1
iface eno1 inet manual

# Main bridge — VLAN-aware
auto vmbr0
iface vmbr0 inet static
    address 10.0.0.1/24
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 100-200
```
Assign VLANs to VMs via their network device configuration:
```
# VM config: /etc/pve/qemu-server/101.conf
net0: virtio=AA:BB:CC:DD:EE:01,bridge=vmbr0,tag=100

# VM config: /etc/pve/qemu-server/201.conf
net0: virtio=AA:BB:CC:DD:EE:02,bridge=vmbr0,tag=200
```
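If VMIDs encode tenancy (a hypothetical convention here: VMIDs 100-199 on VLAN 100, 200-299 on VLAN 200), tag assignment can be scripted rather than edited by hand. This sketch only prints the `qm set` commands it would run:

```shell
# Map a VMID to its tenant VLAN under the assumed numbering convention
vlan_for_vmid() {
    echo $(( $1 / 100 * 100 ))
}

for vmid in 101 150 201; do
    echo "qm set ${vmid} --net0 virtio,bridge=vmbr0,tag=$(vlan_for_vmid "${vmid}")"
done
```

Printing the commands first makes the mapping auditable before anything touches live VM configs.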
### Proxmox Firewall
Enable the built-in firewall at the datacenter, host, and VM levels:
```
# Datacenter-level firewall: /etc/pve/firewall/cluster.fw
[OPTIONS]
enable: 1
policy_in: DROP
policy_out: ACCEPT
log_ratelimit: enable=1

[RULES]
# Allow SSH to Proxmox hosts from management network only
IN ACCEPT -source 10.0.1.0/24 -p tcp -dport 22
# Allow Proxmox web UI from management network
IN ACCEPT -source 10.0.1.0/24 -p tcp -dport 8006
# Allow VNC console/migration and SPICE proxy between nodes
IN ACCEPT -source 10.0.0.0/24 -p tcp -dport 5900:5999
IN ACCEPT -source 10.0.0.0/24 -p tcp -dport 3128
# Allow corosync cluster traffic
IN ACCEPT -source 10.0.0.0/24 -p udp -dport 5405:5412
```
Per-VM firewall rules enforce tenant isolation:
```
# /etc/pve/firewall/101.fw
[OPTIONS]
enable: 1
policy_in: DROP

[RULES]
# Block access to Proxmox management first; rules match top-down
OUT DROP -dest 10.0.0.1/32
# Tenant A VMs can only talk to each other (VLAN 100)
IN ACCEPT -source 10.100.0.0/24 -p tcp
# Allow outbound internet via gateway
OUT ACCEPT -dest 0.0.0.0/0 -p tcp -dport 80
OUT ACCEPT -dest 0.0.0.0/0 -p tcp -dport 443
```
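With many tenant VMs, hand-editing each `.fw` file invites drift. A sketch of generating them from one template, with the management-block rule placed first since pve-firewall matches rules top-down (written to a temporary directory here, and assuming the 10.&lt;vlan&gt;.0.0/24 subnet convention used above):

```shell
# Emit a per-VM firewall file from a single template
OUT="$(mktemp -d)"

gen_vm_fw() {
    local vmid="$1" vlan="$2"
    cat > "${OUT}/${vmid}.fw" <<EOF
[OPTIONS]
enable: 1
policy_in: DROP

[RULES]
OUT DROP -dest 10.0.0.1/32
IN ACCEPT -source 10.${vlan}.0.0/24 -p tcp
OUT ACCEPT -dest 0.0.0.0/0 -p tcp -dport 80
OUT ACCEPT -dest 0.0.0.0/0 -p tcp -dport 443
EOF
}

gen_vm_fw 101 100
gen_vm_fw 201 200
cat "${OUT}/101.fw"
```

On a real node the generated files would go to /etc/pve/firewall/; generating and diffing them first keeps rule changes reviewable.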
## API Access Controls
The Proxmox API is the control plane for your entire virtualization infrastructure. Secure it aggressively.
### API Token Scoping
Create scoped API tokens instead of using root credentials:
```bash
# Create a dedicated user for automation
pveum user add automation@pve --comment "CI/CD automation account"

# Create a role with minimal permissions
pveum role add VMOperator --privs "VM.Audit,VM.PowerMgmt,VM.Console"

# Assign the role on a resource pool (ACL paths cannot express VMID
# ranges; group the tenant's VMs into a pool instead)
pveum pool add tenant-a-vms
pveum acl modify /pool/tenant-a-vms --users automation@pve --roles VMOperator

# Create an API token (save the secret — it is shown only once)
pveum user token add automation@pve ci-deploy --privsep 1
```
The `--privsep 1` flag enables privilege separation: the token carries its own ACLs, and its effective permissions are the intersection of the user's permissions and the token's ACLs. The token can never exceed the user's own access.
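Clients authenticate with such a token via the `PVEAPIToken` authorization header, in the form `PVEAPIToken=<user>@<realm>!<tokenid>=<secret>`. This sketch only assembles and prints the request; the hostname and secret are placeholders:

```shell
# Build the API token authorization header
TOKEN_ID='automation@pve!ci-deploy'
TOKEN_SECRET='00000000-0000-0000-0000-000000000000'   # placeholder secret
AUTH_HEADER="Authorization: PVEAPIToken=${TOKEN_ID}=${TOKEN_SECRET}"

# Print the curl invocation rather than calling a live API
echo "curl -H '${AUTH_HEADER}' https://pve.example.com:8006/api2/json/nodes"
```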
### Restrict API Network Access
Limit API access to management networks using the Proxmox firewall or iptables:
```bash
# Only allow API access from management VLAN
iptables -A INPUT -s 10.0.1.0/24 -p tcp --dport 8006 -j ACCEPT
iptables -A INPUT -p tcp --dport 8006 -j DROP

# Also restrict SSH
iptables -A INPUT -s 10.0.1.0/24 -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
```
### TLS Configuration
Harden the Proxmox web interface TLS settings:
```
# /etc/default/pveproxy
ALLOW_FROM="10.0.1.0/24"
DENY_FROM="all"
POLICY="allow"

# TLS settings
CIPHERS="ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384"
HONOR_CIPHER_ORDER="yes"
```
## Backup Encryption
Unencrypted backups are a common data exposure vector. Proxmox Backup Server supports client-side encryption:
```bash
# Create an encryption key for backups
proxmox-backup-client key create /etc/pve/priv/backup-encryption.key

# Encryption is a property of the PBS storage entry, not a vzdump flag:
# attach the key to the storage definition
pvesm set pbs-encrypted --encryption-key /etc/pve/priv/backup-encryption.key

# Backup job targeting the encrypted storage
# (in /etc/pve/vzdump.cron or via the UI)
vzdump 100 101 102 \
    --storage pbs-encrypted \
    --mode snapshot \
    --compress zstd

# Verify a backup can be decrypted by restoring an archive
# (archive name as shown in the snapshot's file list)
proxmox-backup-client restore \
    "vm/100/2026-01-15T02:00:00Z" drive-scsi0.img.fidx /tmp/restore-test.img \
    --repository backup-server:backup-store \
    --keyfile /etc/pve/priv/backup-encryption.key
```
Store the encryption key separately from the backup storage. If backups and keys are in the same location, encryption adds no security value. Use a password manager or hardware security module for key storage.
### Backup Retention and Verification
```
# /etc/vzdump.conf
tmpdir: /var/tmp
storage: pbs-encrypted
mode: snapshot
compress: zstd
pigz: 4
# Retention: keep 3 daily, 2 weekly, 1 monthly (replaces the deprecated
# maxfiles option; can also be configured on the backup storage itself)
prune-backups: keep-daily=3,keep-weekly=2,keep-monthly=1
```
Run regular backup verification to ensure recoverability:
```bash
#!/bin/bash
# verify-backups.sh — run weekly
# Note: proxmox-backup-client has no "verify" subcommand; chunk-level
# verification runs on the PBS server (proxmox-backup-manager verify, or
# a scheduled verify job). This check confirms each VM still has a
# recent snapshot in the encrypted store.
REPOSITORY="backup-server:backup-store"

for vmid in $(qm list | awk 'NR>1 {print $1}'); do
    echo -n "Latest backup for VM ${vmid}: "
    proxmox-backup-client snapshot list "vm/${vmid}" \
        --repository "${REPOSITORY}" 2>&1 | tail -1
done
```
## LXC vs KVM: Security Isolation Trade-offs
This is the most consequential decision in multi-tenant Proxmox environments.
KVM (full virtualization) provides hardware-level isolation. Each VM runs its own kernel. A kernel exploit in one VM does not affect the host or other VMs. For mutually untrusted tenants, KVM is the only appropriate choice.
LXC (OS-level containers) shares the host kernel. They are lighter weight and faster, but a kernel vulnerability in any container can potentially compromise the host. Use LXC only for workloads under the same trust domain.
Harden LXC containers when you must use them:
```
# /etc/pve/lxc/300.conf

# Unprivileged container (critical — always use unprivileged)
unprivileged: 1

# Restrict capabilities
lxc.cap.drop: sys_admin sys_rawio net_raw

# Enable AppArmor profile
lxc.apparmor.profile: generated

# Prevent access to host devices
lxc.cgroup2.devices.deny: a

# Resource limits
lxc.cgroup2.memory.max: 2G
lxc.cgroup2.cpu.max: 100000 100000

# Disable nesting (unless specifically required)
features: nesting=0
```
Never run privileged containers in multi-tenant environments. A privileged container with a kernel exploit is equivalent to root on the host. There is no security boundary.
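A sketch of auditing a host for privileged containers (scanning sample configs in a temporary directory here; on a real node the configs live in /etc/pve/lxc):

```shell
# Any container config without "unprivileged: 1" is privileged by
# default and should be flagged.
CONF_DIR="$(mktemp -d)"
printf 'unprivileged: 1\n' > "${CONF_DIR}/300.conf"
printf 'arch: amd64\n'     > "${CONF_DIR}/301.conf"   # missing the flag

FLAGGED=""
for conf in "${CONF_DIR}"/*.conf; do
    grep -q '^unprivileged: 1' "${conf}" || FLAGGED="${FLAGGED} ${conf}"
done
echo "privileged containers:${FLAGGED}"
```

Run from cron, a non-empty result becomes an alert rather than a surprise during incident response.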
## Monitoring Hypervisor Integrity
Detecting compromise of the hypervisor itself requires monitoring that runs independently of the monitored system:
### File Integrity Monitoring
```bash
# Install AIDE
apt install aide
```

Configure the monitored paths in `/etc/aide/aide.conf`:

```
/etc/pve p+i+n+u+g+s+b+m+c+md5+sha256
/etc/network p+i+n+u+g+s+b+m+c+md5+sha256
/usr/bin p+i+n+u+g+s+b+m+c+md5+sha256
/usr/sbin p+i+n+u+g+s+b+m+c+md5+sha256
/boot p+i+n+u+g+s+b+m+c+md5+sha256
```

```bash
# Initialize the database
aide --init
mv /var/lib/aide/aide.db.new /var/lib/aide/aide.db
```

Run a daily check via cron (note the five time fields, plus the user field required in `/etc/cron.d` entries):

```
# /etc/cron.d/aide-check
0 3 * * * root /usr/bin/aide --check | mail -s "AIDE Report $(hostname)" [email protected]
```
### Authentication Monitoring
Monitor Proxmox authentication logs for suspicious activity:
```
# Route pvedaemon/pveproxy messages to a dedicated log
# /etc/rsyslog.d/proxmox-auth.conf
:programname, isequal, "pvedaemon" /var/log/proxmox-auth.log
:programname, isequal, "pveproxy" /var/log/proxmox-auth.log
```

A simple monitoring script to alert on brute-force attempts:

```bash
#!/bin/bash
# Assumes ISO-style timestamps in the log; adjust the date format to
# match your rsyslog template
THRESHOLD=10
FAILED=$(grep "authentication failure" /var/log/proxmox-auth.log \
    | grep -c "$(date +%Y-%m-%dT%H)")
if [ "$FAILED" -gt "$THRESHOLD" ]; then
    echo "WARNING: ${FAILED} failed Proxmox auth attempts in the last hour" |
        mail -s "Proxmox Auth Alert" [email protected]
fi
```
### Cluster Integrity Verification
In multi-node Proxmox clusters, verify that cluster configuration has not been tampered with:
```bash
# Compare cluster config checksums across nodes
for node in node1 node2 node3; do
    echo -n "${node}: "
    ssh ${node} "sha256sum /etc/pve/corosync.conf"
done

# Verify expected node count
pvecm status | grep "^Node"
```
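A checksum loop like that still requires eyeballing the output. A sketch of flagging drift automatically (simulated here with local files standing in for the per-node ssh output):

```shell
# All nodes must report the same checksum, so the number of distinct
# sums should be exactly 1.
TMP="$(mktemp -d)"
echo "abc123  /etc/pve/corosync.conf" > "${TMP}/node1"
echo "abc123  /etc/pve/corosync.conf" > "${TMP}/node2"
echo "def456  /etc/pve/corosync.conf" > "${TMP}/node3"   # simulated tampering

DISTINCT="$(cat "${TMP}"/node* | awk '{print $1}' | sort -u | wc -l)"
if [ "${DISTINCT}" -ne 1 ]; then
    echo "ALERT: corosync.conf differs across nodes"
fi
```

In production the three `echo` lines would be the `ssh node "sha256sum ..."` calls from the loop above, redirected into per-node files.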
## Applying Updates Without Downtime
Security updates for the hypervisor kernel and Proxmox packages must be applied promptly, but rebooting takes all hosted VMs offline.
```bash
# Use live migration to evacuate VMs before patching.
# On the node to be patched:

# Step 1: Migrate all VMs to other nodes
for vmid in $(qm list | awk 'NR>1 {print $1}'); do
    qm migrate ${vmid} target-node --online
done

# Step 2: Apply updates
apt update && apt full-upgrade -y

# Step 3: Reboot if the kernel was updated (Debian has no
# needs-restarting; compare the running kernel to the newest installed)
if [ "$(uname -r)" != "$(ls -1 /lib/modules | sort -V | tail -n1)" ]; then
    reboot
fi

# Step 4: Migrate VMs back (or let HA handle rebalancing)
```
For environments with HA enabled, Proxmox can automate this process — fencing the node, migrating workloads, and restarting after updates.
## Summary Checklist
Hardening a Proxmox environment for multi-tenant use requires attention at every layer:
- Storage: ZFS encryption per tenant, separate keys, key management separate from data
- Network: VLAN segmentation, per-VM firewall rules, management network isolation
- API: Scoped tokens with least privilege, network-restricted access, strong TLS
- Backups: Client-side encryption, key stored separately, regular restore testing
- Isolation: KVM for untrusted tenants, unprivileged LXC only within same trust domain
- Monitoring: File integrity checking, auth log analysis, cluster config verification
- Patching: Live migration for zero-downtime updates, prompt kernel patching
The hypervisor is your security floor. Everything built on top of it — firewalls, EDR, application security — assumes the hypervisor is trustworthy. Invest the effort to make that assumption true.
