
Proxmox Cluster Networking on OVH: Using vRack Properly

Timo Wevelsiep
#Proxmox #OVH #OVHcloud #vRack #Cluster #Corosync #Networking #DevOps

Running Proxmox on OVH professionally? WZ-IT handles setup, network design, operations and support for your Proxmox infrastructure. View Service Page →

When Proxmox clusters on bare metal become unstable, the cause is often trivial: cluster communication (Corosync), storage traffic and "normal" VM traffic share a single link – or even run over public IPs. This is exactly where OVHcloud vRack comes in as a powerful building block: vRack connects Bare Metal, Public Cloud and Hosted Private Cloud in a private Layer-2 network, as if everything were in the same rack/datacenter (OVHcloud Docs).

Which Networks Does a Proxmox Setup Really Need?

Proxmox will run with "everything in one network". But it only becomes solid (and HA-capable) once you properly separate the traffic types.

Management Network (GUI/SSH/API)

Admin access, automation, monitoring. Don't leave it publicly exposed unless you have to: better to use a VPN/Bastion host and a restrictive firewall.

Cluster/Corosync Network (separate!)

Proxmox explicitly recommends a separate network for Corosync because other traffic can interfere with it (Proxmox Wiki).

Remember: Corosync is the "cluster heartbeat". If the heartbeat stutters, the cluster loses quorum – and then things get uncomfortable.

Storage Network (optional, but practically mandatory with Ceph)

The most important sentence from the Proxmox documentation:

"Storage communication should never be on the same network as corosync!" (Proxmox Wiki)

This is exactly where vRack helps enormously: keeping storage/backend traffic private and separate.


What is vRack in Plain Terms?

vRack is OVHcloud's solution for an isolated private network that connects Public Cloud, Hosted Private Cloud and Bare Metal – as a secure Layer-2 network (OVHcloud Docs).

For Proxmox this means specifically:

  • Corosync/cluster communication doesn't have to run over public IP
  • Storage/backend can stay private
  • Hybrid setups (Public Cloud Bastion/Monitoring + Dedicated Proxmox Nodes) can be built cleanly

vRack Use Cases for Proxmox

Use Case A: Proxmox Cluster on OVH Bare Metal

  • Proxmox nodes are Dedicated Servers
  • vRack provides private L2 connectivity → good for Corosync/Storage separation

Use Case B: Hybrid – Public Cloud + Dedicated (private)

OVH documents how to configure private networking between Public Cloud Instance and Dedicated Server via vRack (OVHcloud Support).

This is perfect for:

  • Bastion/Jump Host (Public Cloud) → access to private Proxmox IPs
  • Monitoring/Automation in the Public Cloud
  • "Public only where necessary"

For implementation details: Proxmox on OVH Service Page.

Use Case C: Single Node on Bare Metal

Single node doesn't "require" vRack – but if you want to keep backups/storage/monitoring private (or scale later), vRack is still useful.


Reference Architectures

Three patterns that work in projects – depending on budget and availability goals. We regularly implement all three for clients (→ Service Page).

Pattern A: Single Node (Dedicated)

For whom: Small to medium setups, cost focus, "just get started".

Networks:

  • Management (secured)
  • Optional: Backend (backups/monitoring private)

Why this is okay: Proxmox runs perfectly fine as a single node. Stability then comes from operations: backups, restore tests, monitoring, a solid update process.

Important: Single node ≠ HA. You need a plan for how to recover in case of hardware failure (RTO). We help with planning and implementation.


Pattern B: 3-Node Cluster (the "standard" for serious HA)

For whom: Real failover, maintenance without downtime, resilience.

Networks:

  • Management network (secure)
  • Corosync/Cluster network (separate!) (Proxmox Wiki)
  • Storage network (if Ceph / storage-intensive) – never share with Corosync!

Optional: Redundancy for the cluster network (a second Corosync link/ring) – recommended over two physically separate networks; see the sketch below.
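
A minimal sketch of what this looks like in /etc/pve/corosync.conf once a second link is configured (node names and addresses are placeholders; on Proxmox VE 6+ Corosync uses kronosnet links, which replace the older RRP rings):

    nodelist {
      node {
        # ring0_addr: dedicated Corosync network, ring1_addr: second, physically separate network
        name: pve1
        nodeid: 1
        quorum_votes: 1
        ring0_addr: 10.10.10.11
        ring1_addr: 10.10.20.11
      }
      # ... one node block per cluster member
    }

If you add the second link to an existing cluster, follow the documented procedure for editing corosync.conf and remember to increase config_version.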


Pattern C: Hybrid – Cloud Bastion/Tools + Dedicated Proxmox Cluster via vRack

For whom: Teams that want cloud comfort (Bastion, Monitoring, CI, VPN) but run workloads on bare metal.

Networks:

  • Public Cloud: Bastion/Monitoring/Automation
  • Dedicated: Proxmox Nodes/Workloads
  • Connection private via vRack (OVHcloud Support)

Advantage: Proxmox GUI/SSH reachable entirely via private IPs (through the Bastion/VPN) instead of being publicly exposed.
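
A minimal sketch of how that can look on an admin workstation, using OpenSSH's ProxyJump (hostnames and IPs are placeholders):

    # ~/.ssh/config on the admin workstation
    Host bastion
        HostName bastion.example.com
        User admin

    # Proxmox node only reachable on its private vRack IP, tunneled through the bastion
    Host pve1
        HostName 10.10.0.11
        User root
        ProxyJump bastion

With this in place, ssh pve1 (and local port forwarding to the GUI on 8006) works without the node ever needing a public management IP.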


Step-by-Step Approach

Step 1: IP Plan + Naming Scheme (beforehand!)

  • Subnets: Mgmt / Corosync / Storage
  • IP ranges: don't plan too small
  • Decide: Which ports/services are public at all?
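
An example plan (subnets are placeholders – size them generously and document them) might look like this:

    Management:   10.10.0.0/24    GUI/SSH/API, reachable only via Bastion/VPN
    Corosync:     10.10.10.0/24   cluster heartbeat only, nothing else
    Corosync #2:  10.10.20.0/24   optional second link, physically separate
    Storage:      10.10.30.0/24   Ceph/NFS/backups
    Migration:    10.10.40.0/24   optional, live-migration traffic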

Step 2: Activate vRack & Add Dedicated Servers

OVH provides its own support guides for this: "Configuring the vRack on your Dedicated Servers" (OVHcloud Support).

vRack is described as a "virtual rack" where servers are grouped and connected to a virtual switch in the private network.
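
On OVH dedicated servers, typically the second physical NIC (or a dedicated VLAN, depending on the server range) is attached to the vRack once the server has been added. A minimal sketch of /etc/network/interfaces on a Proxmox node (interface names and addresses are assumptions – check yours with ip link):

    # Physical NIC attached to the vRack (name varies per server)
    auto eno2
    iface eno2 inet manual

    # Private bridge for cluster/storage/backend traffic inside the vRack
    auto vmbr1
    iface vmbr1 inet static
        address 10.10.10.11/24
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0

Apply with ifreload -a (ifupdown2) and verify with ip addr before relying on it for cluster traffic.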

Step 3: Optional: Add Public Cloud Instance to vRack

If you want hybrid: OVH explains this in guides like "Configuring vRack for Public Cloud" (OVHcloud Support).

Step 4: Proxmox: Explicitly Bind Corosync to the Cluster Network

During cluster creation (or afterward) ensure that Corosync does not run over the "fat" VM network. Proxmox explicitly recommends this (Proxmox Wiki).
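
A minimal sketch with the pvecm CLI, assuming the dedicated Corosync subnet from the example plan above (addresses are placeholders):

    # On the first node: create the cluster and bind Corosync to the private link
    pvecm create prod-cluster --link0 10.10.10.11

    # On each additional node: join via an existing member and bind the local link0 address
    pvecm add 10.10.10.11 --link0 10.10.10.12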

Pro tip: Also plan where migration traffic should run – by default Proxmox often uses the cluster network, which is not ideal (Proxmox Forum).
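
A minimal sketch of the corresponding setting in /etc/pve/datacenter.cfg (the subnet is a placeholder for a dedicated migration network):

    # /etc/pve/datacenter.cfg
    # send live-migration traffic over a dedicated private network instead of the Corosync link
    migration: secure,network=10.10.40.0/24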

Unsure about the assignment? We can advise you.

Step 5: Keep Storage Traffic Separate (mandatory with Ceph)

Storage never in the Corosync network (Proxmox Wiki). During Ceph recovery/backfill, storage traffic can otherwise "drown out" Corosync.
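
For Ceph, the separation is expressed in the Ceph config. A minimal sketch of the relevant part of /etc/pve/ceph.conf (subnets are placeholders and must not overlap with the Corosync network):

    [global]
        # front-side traffic: clients, monitors, VM disk I/O
        public_network = 10.10.30.0/24
        # back-side traffic: OSD replication, recovery, backfill
        cluster_network = 10.10.35.0/24

Both networks can also be passed to pveceph init via --network and --cluster-network when Ceph is first set up.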


Validation

After setup (before production):

  • Corosync remains stable under load (no packet loss/spikes)?
  • Storage recovery/backfill doesn't noticeably affect Corosync?
  • Migration runs over a network that can handle bandwidth (not over Corosync)?
  • Admin access runs via Bastion/VPN, not open to the internet?
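
A few commands that help with these checks on the nodes (the target IP for the MTU test is a placeholder):

    # quorum and cluster membership
    pvecm status

    # Corosync link status per node (all configured links should be up)
    corosync-cfgtool -s

    # if you use jumbo frames on the storage network: verify the MTU end-to-end
    # (8972 bytes payload + 28 bytes ICMP/IP header = 9000)
    ping -M do -s 8972 10.10.30.12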

Common Mistakes and How to Avoid Them

Mistake 1: Corosync + Storage in the Same Network

Proxmox says: don't do it (Proxmox Wiki).

Fix: Separate storage network. Corosync gets a "quiet" line.

Mistake 2: Cluster Running Over Public IP

Works – until it doesn't. vRack is exactly there to keep traffic private.

Fix: Use vRack for private cluster communication.

Mistake 3: Hybrid Without vRack

Then "private" ends up as "WireGuard plus NAT plus headaches".

Fix: Use vRack for clean Cloud ↔ Dedicated connection (OVHcloud Support).

Mistake 4: Migration Traffic Kills Corosync

Proxmox often uses the cluster network for migration by default – that can be problematic (Proxmox Forum).

Fix: Put migration on a separate network with sufficient bandwidth.

Mistake 5: Management UI Publicly Accessible

Unnecessary risk.

Fix: Bastion/VPN + restrictive firewall, management only from private networks.
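
A minimal sketch of datacenter-wide rules in /etc/pve/firewall/cluster.fw, assuming the management subnet from the example plan (note: enabling the datacenter firewall switches the input policy to DROP, so add your access rules before enabling it):

    [OPTIONS]
    enable: 1

    [RULES]
    # Proxmox GUI (8006) and SSH only from the private management network
    IN ACCEPT -source 10.10.0.0/24 -p tcp -dport 8006
    IN ACCEPT -source 10.10.0.0/24 -p tcp -dport 22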


Our Approach at WZ-IT

For Proxmox on OVH, we proceed as follows:

  1. Requirements Analysis: Single node vs. cluster, HA goals, storage requirements
  2. Network Design: IP plan, vRack topology, VLAN structure
  3. Setup: vRack, Dedicated/Cloud connection, Proxmox installation with clean interface assignment
  4. Hardening: Secure management, firewall, Bastion/VPN
  5. Validation: Corosync stability, failover tests, monitoring
  6. Documentation: Network plan, runbooks, escalation paths

We handle the complete setup or audit existing infrastructures.

View Service Page: Proxmox on OVH →


Frequently Asked Questions

Answers to important questions about this topic

What is vRack?
A private Layer-2 network that connects OVH products (Public Cloud, Private Cloud, Bare Metal) – as if everything were in the same rack.

Do I need vRack for a single Proxmox node?
Not necessarily. But it is useful if you want to keep backups/monitoring/backend private or plan to scale to a cluster later.

Can I connect Public Cloud instances and Dedicated Servers via vRack?
Yes, OVH has specific guides for this (Public Cloud ↔ Dedicated via vRack).

Why does Corosync need its own network?
Because other traffic can interfere with Corosync – Proxmox recommends a separate Corosync network.

Can storage and Corosync share a network?
No. Proxmox explicitly states: storage should never be on the same network as Corosync.

How many nodes does a serious HA cluster need?
For robust quorum/HA, 3+ is the standard; 2 nodes are possible but operationally trickier.

Does vRack work across locations?
OVH states that vRack can group servers regardless of number or physical location.

How is vRack set up?
High-level via the OVH Control Panel/Guides (Dedicated in vRack, optionally Public Cloud in vRack).

What are the most common mistakes?
An improvised IP/VLAN plan, inconsistent MTU/jumbo-frame settings, and storage/Corosync mixed on one network.

How do I verify that traffic really stays private?
Check routes/interfaces (private IPs) and firewall logs, and ensure cluster/storage traffic doesn't go over public interfaces.

Should I configure a dedicated migration network?
Yes, you should. Proxmox can otherwise default to using the cluster network for migration; that's not optimal.

Should the Corosync network be redundant?
Yes, Proxmox recommends a redundant cluster network over two separate networks.

Which storage fits which setup?
Single node: usually ZFS. HA cluster: Ceph only if network/disks/sizing really fit.

Should the Proxmox GUI be publicly accessible?
No – better to use Bastion/VPN + restrictive rules.

How does vRack help with all of this?
It provides the foundation for private networks (Corosync/Storage), without public IP traffic.

Let's Talk About Your Idea

Whether it's a specific IT challenge or just an idea – we look forward to hearing from you. In a brief conversation, we'll evaluate together whether and how your project fits with WZ-IT.
