Proxmox VE 8 to 9 Upgrade: The Complete Guide for Enterprises

Editorial note: The information in this article was compiled to the best of our knowledge at the time of publication. Technical details, prices, versions, licensing terms, and external content may change. Please verify the information provided independently, particularly before making business-critical or security-related decisions. This article does not replace individual professional, legal, or tax advice.

Proxmox Management & Maintenance by WZ-IT? We handle the upgrade to PVE 9 for you — including Ceph, PBS and cluster migration. Schedule a free consultation
Proxmox VE 8 reaches end of life in August 2026. After that: no security updates, no bugfixes, no support. If you're running production workloads on PVE 8, you should plan the upgrade now — not in July 2026.
This guide walks you through the upgrade to Proxmox VE 9 (Debian 13 "Trixie") step by step — with special focus on enterprise setups with Ceph clusters, Proxmox Backup Server and High Availability.
Table of Contents
- What's new in PVE 9?
- Prerequisites
- Step 1: Ensure PVE 8.4.1
- Step 2: Create backups
- Step 3: Run pve8to9 checklist
- Step 4: Upgrade Ceph to Squid
- Step 5: Upgrade PBS to version 4
- Step 6: Update APT repositories
- Step 7: Run the upgrade
- Step 8: Post-upgrade checks
- Cluster upgrade strategy
- Common mistakes and how to avoid them
- How WZ-IT handles upgrades
What's new in PVE 9?
Proxmox VE 9.0 was released in August 2025; the current version is PVE 9.1 (November 2025). Key improvements:
Infrastructure:
- Debian 13 "Trixie" as base
- Linux Kernel 6.17 (PVE 9.1)
- QEMU 10.1.2, LXC 6.0.5, ZFS 2.3.4
Enterprise features:
- OCI containers in LXC: Use Docker images directly as LXC templates — no Docker or Kubernetes in a VM needed
- TPM state in qcow2: Snapshots of TPM-enabled VMs now possible on NFS/CIFS/directory storage
- HA Rules replace HA Groups: More flexible resource-to-node and resource-to-resource affinity
- SDN Fabric Support: Complex, scalable network architectures directly in the PVE interface
- Thick-LVM Snapshots: Snapshots on thick-provisioned LVM (FC/iSCSI SAN) — vendor-agnostic
- Confidential Computing: Early Intel TDX support for encrypted VMs
Prerequisites
Before starting, verify this checklist:
| Requirement | Details |
|---|---|
| PVE Version | At least 8.4.1 on all nodes |
| Free disk space | Min. 5 GB on root partition (10 GB+ recommended) |
| Ceph | Must be upgraded to 19.2 Squid first |
| PBS | Must be upgraded to version 4 first |
| Container OS | CentOS 7, Ubuntu 16.04 (systemd ≤230) are no longer supported |
| Backups | Complete backup of all VMs/CTs before upgrading |
| Network | Stable connection, don't upgrade over unstable WiFi/VPN |
Step 1: Ensure PVE 8.4.1
Update all nodes to the latest version first:
apt update && apt dist-upgrade -y
pveversion
Output must show pve-manager/8.4.1 or higher.
Step 2: Create backups
Non-negotiable. Before every major upgrade:
# Back up all VMs and containers (PBS or local storage)
vzdump --all --mode snapshot --storage your-backup-storage
# Back up Proxmox configuration
tar czf /root/pve-config-backup-$(date +%Y%m%d).tar.gz /etc/pve/
# If using Ceph: document Ceph status
ceph status > /root/ceph-status-pre-upgrade.txt
ceph osd tree > /root/ceph-osd-tree-pre-upgrade.txt
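The configuration archive from above is only useful if it is actually readable. A minimal sketch that verifies the archive right after creating it — the source path defaults to /etc/pve (the PVE cluster filesystem) and the guard makes it a no-op on non-PVE hosts:

```shell
# Archive the PVE configuration and verify the archive before relying on it.
# PVE_CONFIG can be overridden; on a real node it is /etc/pve.
SRC="${PVE_CONFIG:-/etc/pve}"
BACKUP="/root/pve-config-backup-$(date +%Y%m%d).tar.gz"
if [ -d "$SRC" ]; then
  tar czf "$BACKUP" "$SRC" \
    && tar tzf "$BACKUP" > /dev/null \
    && echo "archive verified: $BACKUP"
fi
```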
Step 3: Run pve8to9 checklist
Proxmox ships a diagnostic tool that detects potential issues before the upgrade:
pve8to9 --full
The tool checks repository configuration, storage settings, deprecated options, package versions and container compatibility.
Red entries (FAIL) must be resolved before upgrading. Yellow warnings (WARN) should be reviewed and understood; many are informational, but check each one before dismissing it.
For clusters: run pve8to9 --full on every node before upgrading the first one.
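For larger clusters the per-node runs can be scripted. A sketch assuming root SSH key access and hypothetical node names pve1–pve3 (adjust to your cluster):

```shell
# Hypothetical node names -- replace with your cluster's hostnames.
NODES="pve1 pve2 pve3"
for node in $NODES; do
  log="/tmp/pve8to9-$node.log"
  # Keep a per-node log; '|| true' so one unreachable node doesn't abort the loop.
  ssh -o ConnectTimeout=5 "root@$node" pve8to9 --full > "$log" 2>&1 || true
done
# Every node must report zero FAIL entries before you proceed:
grep -c '^FAIL' /tmp/pve8to9-*.log || true
```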
Step 4: Upgrade Ceph to Squid
If you're running Ceph, this step is mandatory before the PVE upgrade:
# Check Ceph status (must be HEALTH_OK)
ceph status
# Set Ceph flags
ceph osd set noout
ceph osd set nobackfill
ceph osd set norecover
# Upgrade Ceph Reef → Squid: switch the Ceph APT repository from reef to squid first, then on each node:
apt update
apt dist-upgrade -y
# After all nodes are upgraded: remove flags
ceph osd unset noout
ceph osd unset nobackfill
ceph osd unset norecover
# Verify status
ceph status
Official guide: Ceph Reef to Squid
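Note that the dist-upgrade only swaps the packages; the Ceph daemons still have to be restarted in the documented order. A guarded sketch of that order (per node — consult the official Reef-to-Squid guide before running, and wait for HEALTH_OK between nodes):

```shell
# Only meaningful on a Ceph node; the guard makes this a no-op elsewhere.
if command -v ceph >/dev/null 2>&1; then
  systemctl restart ceph-mon.target   # monitors first, one node at a time
  systemctl restart ceph-mgr.target   # then managers
  systemctl restart ceph-osd.target   # then OSDs, checking HEALTH_OK between nodes
  ceph osd require-osd-release squid  # pin once every daemon runs Squid
fi
```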
Step 5: Upgrade PBS to version 4
If Proxmox Backup Server is in use, it must be upgraded before the PVE upgrade. PBS 4 is also based on Debian 13.
# On the PBS host
apt update && apt dist-upgrade -y
proxmox-backup-manager version
Step 6: Update APT repositories
Repositories must be switched from Bookworm to Trixie:
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list.d/*.list
apt update
Important: Remove all references to enterprise.proxmox.com if you don't have an active subscription key, and don't mix enterprise and no-subscription repos.
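Before running apt update it's worth confirming that nothing still points at Bookworm. A sketch that scans the classic .list files and any deb822 .sources files alike:

```shell
# List any leftover bookworm references in the APT sources. Expect no output
# from grep -- if lines appear, fix them before 'apt update'.
grep -rn 'bookworm' /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null \
  && echo 'WARNING: fix the lines above before running apt update' \
  || echo 'OK: no bookworm references left'
```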
Step 7: Run the upgrade
apt dist-upgrade -y
Duration depends heavily on hardware:
- SSD + modern server: 5–10 minutes
- HDD + older hardware: 30–60+ minutes
After completion:
reboot
# After reboot: verify version
pveversion
# Expected: pve-manager/9.x.x
Step 8: Post-upgrade checks
After the reboot, verify these items:
# Kernel version
uname -r
# Expected: 6.14.x or 6.17.x
# Any failed services?
systemctl --failed
# Web interface accessible?
curl -k https://localhost:8006
# Ceph status (if applicable)
ceph status
# VM/CT status
qm list
pct list
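These checks can be bundled into a single script per node. A minimal sketch with guards so it degrades gracefully where a tool is missing; port 8006 is the PVE web UI default:

```shell
# Minimal post-upgrade sanity script; each check is guarded.
ok=1
if command -v pveversion >/dev/null 2>&1; then
  pveversion | grep -q 'pve-manager/9' || { echo 'not on PVE 9'; ok=0; }
fi
if command -v systemctl >/dev/null 2>&1; then
  [ -z "$(systemctl --failed --no-legend 2>/dev/null)" ] || { echo 'failed services'; ok=0; }
fi
if command -v curl >/dev/null 2>&1; then
  curl -ksf https://localhost:8006 >/dev/null || { echo 'web UI not reachable'; ok=0; }
fi
[ "$ok" -eq 1 ] && echo 'post-upgrade checks passed' || echo 'review the findings above'
```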
Cluster upgrade strategy
For production clusters with HA, we recommend this sequence:
1. Preparation (all nodes)
- Run pve8to9 --full on every node
- Upgrade Ceph to Squid (if applicable)
- Upgrade PBS to version 4 (if applicable)
2. Rolling upgrade (node by node)
Node 1 → Maintenance Mode → Upgrade → Reboot → Verify → Exit Maintenance
Node 2 → Maintenance Mode → Upgrade → Reboot → Verify → Exit Maintenance
Node 3 → ...
3. Per node:
- Set node to Maintenance Mode (HA services will be migrated)
- Live-migrate VMs/CTs to other nodes
- Run the upgrade
- Reboot
- Post-upgrade checks
- Exit Maintenance Mode
4. After the last node:
- HA Groups are automatically migrated to HA Rules
- Verify cluster status: pvecm status
- Verify Ceph status: ceph status
Important: VMs can always be migrated from an older to a newer PVE node. The reverse (PVE 9 → PVE 8) is not officially supported.
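The maintenance-mode step can also be driven from the CLI — ha-manager exposes it as a CRM command. A guarded sketch; the node name pve2 is an example:

```shell
# Put a node into maintenance before its upgrade, and bring it back afterwards.
# 'pve2' is an example node name; the guard makes this a no-op off-cluster.
if command -v ha-manager >/dev/null 2>&1; then
  ha-manager crm-command node-maintenance enable pve2   # HA services migrate away
  # ... run the upgrade, reboot, post-upgrade checks ...
  ha-manager crm-command node-maintenance disable pve2  # node rejoins HA scheduling
fi
```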
Common mistakes and how to avoid them
1. Ceph not upgraded before PVE
The upgrade fails or Ceph becomes unreachable after reboot. Fix: Always follow the order: Ceph Squid → PBS 4 → PVE 9.
2. Enterprise repository without subscription
apt dist-upgrade fails with 401 errors. Fix: Remove enterprise repo, use no-subscription repo.
3. Meta-package proxmox-ve marked for removal
APT wants to uninstall proxmox-ve — this indicates incorrect repository configuration. Fix: Verify that all repos point to trixie and that there are no mixed Bookworm/Trixie sources.
4. Containers with old systemd (≤230) won't start
CentOS 7, Ubuntu 16.04 and similar containers with systemd ≤230 no longer work under PVE 9. Fix: Migrate to newer OS versions before upgrading. PVE 8 support runs until August 2026 — use that time.
5. systemd-boot package
The pve8to9 check flags the systemd-boot package. For PVE 8.1–8.4 installations it was automatically included. Fix: Safe to remove unless manually configured.
6. VM migration fails in mixed cluster
During the rolling upgrade, VMs may sometimes not start on the target node. Fix: Start VM manually, check CPU type (must be supported on both nodes).
How WZ-IT handles upgrades
We upgrade Proxmox clusters for enterprises using a standardized process:
- Analysis — Inventory: cluster topology, Ceph layout, PBS version, container OS versions
- Planning — Define maintenance windows, establish sequence, document rollback strategy
- Backup — Complete backup of all VMs/CTs, configurations and Ceph status
- Upgrade — Rolling upgrade node by node with monitoring
- Verification — Post-upgrade checks, performance baseline, HA test
- Documentation — Updated infrastructure documentation
The upgrade is performed by an experienced engineer — not by a script. If issues arise, we're directly on the system.
Rather hand off the Proxmox upgrade? We handle the complete upgrade to PVE 9 — from analysis to post-upgrade monitoring. Schedule a consultation | Proxmox Managed Service
Related Guides
- Proxmox Backup Server 4.2: S3 Storage Exits Tech Preview — What's new in PBS 4.2
- Encrypted Proxmox Backups with Hetzner Storage Box — Set up offsite backups
- Install Proxmox on Hetzner: Auto-Install Script — Fresh PVE 9 installation
- VMware to Proxmox Migration — Switch from VMware ESXi to PVE
- Private Cloud for SMEs: HA Proxmox Cluster — Cluster architecture for enterprises
Frequently Asked Questions
How long is Proxmox VE 8 supported?
Proxmox VE 8 is supported until August 2026. After that, there will be no more security updates. Enterprises should upgrade to PVE 9 by July 2026 at the latest.
Is an in-place upgrade from PVE 8 to PVE 9 possible?
Yes, Proxmox provides an official in-place upgrade path. The prerequisite is PVE 8.4.1 or newer. A complete reinstallation is not necessary.
What happens to VMs and containers during the upgrade?
VMs and containers are preserved. In clusters, nodes are upgraded one at a time — VMs can be live-migrated to already upgraded nodes.
Does Ceph need to be upgraded before PVE?
Yes. Ceph must first be upgraded to version 19.2 (Squid) before starting the PVE 8→9 upgrade. This order is mandatory.
What happens to HA Groups in PVE 9?
HA Groups are deprecated in PVE 9 and are automatically migrated to HA Rules. The migration happens once all cluster nodes are running PVE 9.
How long does the upgrade take per node?
Between 5 minutes (SSD, fast server) and 60+ minutes (HDD, older hardware). Plan a maintenance window per node for production systems.

Written by
Timo Wevelsiep
Co-Founder & CEO
Co-Founder of WZ-IT. Specialized in cloud infrastructure, open-source platforms and managed services for SMEs and enterprise clients worldwide.