Proxmox Backup Server Offsite: Pull Architecture with Asymmetric Retention for Ransomware-Safe Backups

Editorial note: The information in this article was compiled to the best of our knowledge at the time of publication. Technical details, prices, versions, licensing terms, and external content may change. Please verify the information provided independently, particularly before making business-critical or security-related decisions. This article does not replace individual professional, legal, or tax advice.

Proxmox Backup Server as a Managed Service — WZ-IT designs, builds, and operates your PBS offsite architecture. NIS2-compliant, with defined SLAs and documented restore tests. Book a meeting · Learn more about Proxmox Backup Server
Three core questions decide whether your backups survive a ransomware wave: who has access to what, who initiates the data movement, and how long does data live where? With Proxmox Backup Server (PBS), all three can be answered architecturally: through a pull architecture and asymmetric retention between the primary and the offsite location.
This post explains why the sync direction determines the safety of your backups, how to split retention cleanly across two locations, what to consider in the NIS2 context, and which hosting options actually hold up for the external PBS.
Table of contents
- Pull or push: why the sync direction matters
- How the pull architecture works
- What offsite really means
- Deduplication: why offsite stays affordable
- Asymmetric retention: short locally, long offsite
- Setting sync job flags correctly
- Hosting options for the external PBS
- 3-2-1-1-0 and the NIS2 context
- Common pitfalls
- Our approach at WZ-IT
- Further reading
Pull or push: why the sync direction matters
Proxmox Backup Server supports two modes for replication between two PBS instances: pull (the target server actively fetches data from the source) and push (the source server sends data to the target). Technically both reach the same result. Security-wise they are worlds apart.
In a push setup, the production PBS must be able to reach the external server, must have write permissions, and must know the credentials of the remote. If the production infrastructure is compromised — through ransomware, a compromised admin account, or an insider — the attacker has all the tools to also encrypt or delete the offsite backups.
In a pull setup, the production PBS does not even know the external server exists. The sync is initiated by the remote: it pulls the data and writes it to its own datastore. On the source, it needs nothing more than read permissions. Even if the entire production environment is hijacked, the offsite storage remains intact — it is neither reachable nor known from the source.
The Proxmox team puts it explicitly: pull is the recommended option wherever possible. Push was added later so that sources behind firewalls without port forwarding could participate, but security-wise it is the weaker default. Another pull advantage: snapshots that fail verification on the pulling side can be fetched again from the source, which push cannot do.
Practically this means the production firewall does not need to be opened broadly: a single inbound rule that allows the external server's address on TCP/8007 is enough. The entire communication is TLS-encrypted; with pinned fingerprints, even self-signed certificates are fine.
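As a minimal sketch with ufw on the production PBS (the address 203.0.113.50 stands in for the offsite server; adapt to nftables or your perimeter firewall if you use those instead):

```
# Default-deny inbound, then whitelist only the offsite PBS on the API/sync port
ufw default deny incoming
ufw allow from 203.0.113.50 to any port 8007 proto tcp
ufw enable
```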
How the pull architecture works
A production-grade PBS pull architecture has three components.
The local PBS sits in the production network, ideally integrated into an HA Proxmox cluster. PVE hosts send VM and container backups there. Thanks to incremental transfer with deduplication, only deltas go over the wire after the first full backup.
The offsite PBS sits in a geographically and administratively separated data center. It pulls new snapshots, deduplicates them again into its own datastore, and applies its own retention policy. The only outbound connection it needs is to the source — no inbound connections from the internet.
The sync user is a dedicated API user on the local PBS with only the DatastoreReader role on a specific datastore. Instead of a password we recommend an API token that can be rotated without user-level intervention. This user has no write rights anywhere — even if the token were compromised, the attacker could at most read, not destroy.
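On the CLI this looks roughly as follows; the datastore name prod-ds and the token name offsite are placeholders:

```
# On the production (source) PBS: dedicated sync user without write rights
proxmox-backup-manager user create sync@pbs

# API token for the pull job; the secret is displayed only once
proxmox-backup-manager user generate-token sync@pbs offsite

# Read-only role on exactly one datastore, for both the user and the token
proxmox-backup-manager acl update /datastore/prod-ds DatastoreReader --auth-id sync@pbs
proxmox-backup-manager acl update /datastore/prod-ds DatastoreReader --auth-id 'sync@pbs!offsite'
```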
During the pull the remote first downloads the chunk list for each snapshot to be synced. If a chunk already exists locally (because a previous sync brought it in or another VM has the same content), the hash is enough. Only genuinely new chunks are transferred in full. Deduplication therefore works over the wire too.
Configuration-wise sync jobs land in /etc/proxmox-backup/sync.cfg, remote definitions in /etc/proxmox-backup/remote.cfg. Both files are versionable and belong in your provisioning pipeline.
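The remote entry is created on the offsite PBS, for example like this (host, token secret, and fingerprint are placeholders; the pinned fingerprint is what makes a self-signed certificate on the source trustworthy):

```
# On the offsite PBS: register the production PBS as a remote
proxmox-backup-manager remote create prod-pbs \
  --host prod-pbs.example.com \
  --auth-id 'sync@pbs!offsite' \
  --password 'TOKEN_SECRET_SHOWN_AT_CREATION' \
  --fingerprint '64:d3:ff:...:2c'
```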
What offsite really means
"Offsite" is often confused in practice with "external" or "on another server". That is dangerous. A PBS in the same rack as the PVE is off-host, a PBS at the same site but outside the PVE cluster is off-cluster — both help against single-server hardware failures, but not against fire, flood, data center power outages, theft, or a hoster suspending the account.
True offsite requires three separations.
Geographic. Rule of thumb: at least 50-100 km of distance, ideally a different power-grid region. If you sit in North Rhine-Westphalia, the offsite PBS should not be in North Rhine-Westphalia. Large-area outages and flood-risk maps are arguments for real geographic separation.
Administrative. Ideally not at the same hoster as the primary system. If your production runs at Hetzner, the offsite location should be elsewhere — not because Hetzner sites are unsafe, but because an account suspension (late payment, compromised hoster credentials) could otherwise take down both sites at once.
Legal. In case of insolvency or a dispute with the primary provider, the offsite data must remain accessible. Provider diversity is also a legal argument.
Practical proposal for a German mid-market constellation: primary at Hetzner Falkenstein, offsite at netcup Nuremberg or Vienna. Different power-grid region, different hoster, different administrative domain, both GDPR-compliant and EU-resident.
Deduplication: why offsite stays affordable
Offsite storage seems expensive at first — until you understand how aggressively PBS deduplicates. The mechanics:
VM backups are split into fixed-size chunks of 4 MiB. Each chunk gets a SHA-256 hash. Identical chunks (e.g. across multiple snapshots of the same VM, or across multiple VMs with the same base image) end up only once physically in the datastore — subsequent snapshots reference the same file.
Container and host backups are split into dynamic chunks whose boundaries are determined by a Buzhash rolling hash over pxar archives. The advantage: a small change in the middle of a large file does not shift the entire chunk layout — only the affected chunk is regenerated.
Incremental with bitmap. For running VMs, QEMU tracks changed blocks via dirty bitmap. PBS reads only those blocks and transfers only the unknown chunks. Important to know: the bitmap lives in RAM. After VM stop, restart, or migration, it is gone, and the next backup must read the entire disk again — but still transfers only unknown chunks. Longer runtime, no extra storage consumption.
Cross-VM dedup. Five VMs with a Debian 12 base share the system files. Three weeks of daily snapshots of the same VM share everything that has not changed. PBS handles this automatically.
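You can get a rough feel for this dedup potential on any raw disk image with standard tools. This only illustrates the principle (fixed 4 MiB chunks, one SHA-256 per chunk), not PBS's actual on-disk format:

```
# Split an image into 4 MiB chunks and count how many are identical
mkdir -p /tmp/chunks
split -b 4M -a 6 disk.raw /tmp/chunks/c_
sha256sum /tmp/chunks/c_* | awk '{print $1}' | sort | uniq -c | sort -rn | head
# Every count greater than 1 is a chunk PBS would store only once
```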
In practice: HorizonIQ documented a reduction from 7 TB of uncompressed NFS backups to just over 1 TB after switching to PBS, with faster restores at the same time. The concrete factor per datastore is visible in the WebUI, but it only updates after a garbage-collection run.
A few honest caveats. With heavily fragmented databases, a single-byte update can still rewrite an entire 4 MiB chunk. If you need finer-grained, block-level deduplication, you need additional dedup at the storage level (ZFS dedup or similar). PBS is designed as a backup layer, not as a storage engine.
Asymmetric retention: short locally, long offsite
A PBS quirk that many overlook on first setup: sync jobs are independent of prune settings. A pull only fetches snapshots newer than the latest one already present on the target; a snapshot that was pruned on the source before it was ever synced never reaches the offsite side.
The rule: retention must be planned in one direction, locally shorter than offsite, never the other way around. If the local PBS prunes a snapshot after 14 days and the next sync has not picked it up yet, that snapshot is lost for the offsite side.
A sensible split in practice:
Local PBS (short, fast, operational): keep-last 3 plus keep-daily 7. Fast restores for the common cases (accidentally deleted VM, broken update).
Offsite PBS (long, compliance): The official PBS docs show a 10-year strategy: keep-last 3, keep-daily 13, keep-weekly 8, keep-monthly 11, keep-yearly 9. You get day-precise backups in the recent past, weekly retrospectively, then monthly, then yearly — at minimal storage cost thanks to dedup.
Important: the keep-* options are processed in order (last → hourly → daily → weekly → monthly → yearly). Each option only marks snapshots in its own time window that have not already been marked by an earlier option. To understand the interplay, use the official prune simulator, where you can test policies against example data.
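On recent PBS releases, a retention policy like the 10-year example above is configured as a prune job per datastore or namespace. A sketch with placeholder names (verify the exact options with proxmox-backup-manager prune-job create --help on your version):

```
proxmox-backup-manager prune-job create longterm \
  --store offsite-ds \
  --schedule 'sat 18:15' \
  --keep-last 3 --keep-daily 13 --keep-weekly 8 \
  --keep-monthly 11 --keep-yearly 9
```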
Three places where retention can be configured — and this is the most common mistake:
- PBS datastore (Prune & Garbage Collection tab)
- PVE storage configuration (PBS as a storage in PVE)
- PVE backup job
If retention is set in multiple places, all of them apply in parallel — and the most aggressive one wins. The clean solution: set retention only at the PBS datastore (or namespace) and enable "Keep all backups" in the PVE storage configuration. In addition, the PBS user with which PVE writes backups should have no prune or destroy rights — otherwise a compromised PVE can delete its own backups.
By the way, pruning only removes the snapshot reference, not the underlying chunks. Only garbage collection (typically a weekend job) deletes unreferenced chunks after a grace period of 24h plus 5 minutes. Storage is therefore only freed after the next GC run.
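Sketched concretely, assuming a PVE storage named pbs-local and a datastore named prod-ds:

```
# PVE side: PBS owns retention, the backup job keeps everything
pvesm set pbs-local --prune-backups keep-all=1

# PBS side: garbage collection actually frees the space that pruning released
proxmox-backup-manager garbage-collection start prod-ds
proxmox-backup-manager garbage-collection status prod-ds
```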
Setting sync job flags correctly
A few flags decide between "runs cleanly" and "produces data loss":
--remove-vanished false (recommended for asymmetric retention). When a snapshot is pruned on the source, it should not vanish on the offsite side. Set to true, the pull job turns the offsite PBS into a pure 1:1 mirror — destroying the entire retention asymmetry.
--verified-only true syncs only snapshots that have already passed a verify job on the source. Recommended when the remote runs in a less trusted environment.
--encrypted-only true pulls only client-side encrypted snapshots offsite. If you use client-side encryption with AES-256-GCM — which should be mandatory for offsite storage — this flag prevents unencrypted backups from accidentally leaving the building.
--rate-in <bandwidth> caps the sync speed. On a tight uplink (a 50 Mbps small-business line is only about 6 MiB/s) it makes sense to cap the sync well below the line rate so that daily business is not throttled. Tip: schedule sync jobs for the night hours.
--ns and --remote-ns map namespaces. One namespace per tenant in the same datastore: separate ACLs, separate prune policies, but shared deduplication. For multi-tenant setups or hosting scenarios this is often the right answer.
--schedule accepts systemd's calendar event syntax: daily 03:30, *-*-* 02:00, Wed 02:30. Sync jobs should run after backup jobs but before prune jobs.
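Put together, a pull sync job on the offsite PBS could look like this. All names are placeholders, and --verified-only and --encrypted-only require a sufficiently recent PBS release:

```
proxmox-backup-manager sync-job create pull-prod \
  --store offsite-ds \
  --remote prod-pbs \
  --remote-store prod-ds \
  --schedule '*-*-* 02:00' \
  --remove-vanished false \
  --rate-in 5MiB \
  --verified-only true
```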
Hosting options for the external PBS
Four sensible architecture variants, depending on volume, compliance, and budget.
netcup root server with local block storage. Current G12 generation with AMD EPYC, guaranteed cores, and DDR5 RAM. Local block storage up to 8 TB per server, expandable without minimum term. Locations Nuremberg or Vienna. PBS benefits noticeably from local block storage instead of network filesystems — garbage collection and verify become significantly faster. Attractive pricing, EU-sovereign.
Hetzner Storage Box as CIFS mount. Works technically, is tempting price-wise, but performs poorly with PBS. Garbage collection on network filesystems with high latency takes hours. If you still want to go this route, limit it to restore-only or use it as an additional cold layer alongside a performant primary offsite.
Managed PBS providers. Tuxis, Cloud-PBS, Inett, and others offer managed PBS instances starting at around €12-20/TB/month. Advantage: you do not operate PBS yourself; the provider handles updates, hardware faults, and scaling. Disadvantage: less control, vendor lock-in, and depending on the setup no cryptographic sovereignty of your own.
Own hardware in a second data center. Best performance and full control, economically sensible only at larger volumes (typically from 20 TB) or when a second site with connectivity already exists. Often overkill for smaller setups.
Bonus since PBS 4.0: S3 backend with Object Lock. Since mid-2025 PBS natively supports S3-compatible object storage as a datastore backend. With Object Lock (available at AWS S3, Backblaze B2, Wasabi, Cloudflare R2, MinIO self-hosted) the backup becomes a logically immutable copy — even the root account cannot delete locked objects before the retention period expires. A hybrid architecture we often recommend: PVE → local PBS (hot, fast), local PBS → external PBS at netcup (warm, pull), external PBS → S3 backend with Object Lock (cold, immutable). That covers the NIS2 "1 immutable" requirement without resorting to tape.
3-2-1-1-0 and the NIS2 context
The classical "3-2-1" backup rule (three copies, two media, one offsite) is extended in the ransomware era by two more digits: 3-2-1-1-0. One additional copy must be immutable or offline (against ransomware), and verify jobs must report 0 errors.
A pull PBS architecture covers 3-2-1 elegantly:
- 3 copies: original on PVE storage, backup on local PBS, sync on external PBS
- 2 media: PVE storage and PBS datastore (or two PBS datastores on different storage)
- 1 offsite: the external PBS
The additional immutable copy demands more — either tape backup (PBS supports LTO natively), removable datastores (rotating external disks attached only for sync, since PBS 3.4), or the S3 object lock backend described above.
Verify jobs reporting 0 errors are not a nice-to-have. A backup that has never been restored or verified is hope, not a backup. Quarterly restore tests of a test VM from the offsite datastore belong on the operations calendar.
In the NIS2 context this architecture is not just good practice but de facto a duty. The NIS2 transposition act, in force in German law since early 2026, requires documented risk management from essential and important entities — explicitly including the backup and recovery strategy. In case of a breach, management is personally liable, with fines of up to 10 million euros or two percent of worldwide group turnover.
Auditors want evidence: documented patch times, documented backup strategy with RTO/RPO, documented verify results, documented data residency. A PBS pull architecture in EU data centers with provable asymmetric retention delivers exactly this evidence — and additionally protects against the ransomware scenario that has caught countless mid-market companies cold in recent years.
For deeper context: our post "Linux Kernel Vulnerabilities 2026: Why Patch Management Is Now a Board-Level Issue" covers the NIS2 requirements in detail.
Common pitfalls
A few lessons from production setups:
Latency eats bandwidth. Sync speed depends not only on uplink but also on round-trip time between source and destination. With high latency or packet loss, TCP congestion control throttles aggressively. Workaround: WireGuard tunnel with BBR congestion control. The bug tracker entry 4182 documents the issue in detail.
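Enabling BBR on both tunnel endpoints is a small sysctl change; fq as the queueing discipline is the usual pairing:

```
# Persist BBR + fq on both endpoints of the WireGuard tunnel
cat <<'EOF' > /etc/sysctl.d/99-bbr.conf
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
EOF
sysctl --system

# Confirm the active congestion-control algorithm
sysctl net.ipv4.tcp_congestion_control
```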
Initial sync locally. The first sync can take days for several terabytes over a 50 Mbps uplink. In practice it has worked well to set up the offsite PBS locally, run the first sync on the local network, and then ship the machine to the remote data center. The configs are then adjusted to the new IP; the datastore is already populated. Alternatively: removable datastores for the initial sync via external disk.
Set up verify jobs — on both sides. On the source to catch bit rot early. On the offsite to know that the sync has not silently transferred broken chunks. Verify is IO- and CPU-intensive; the schedule should sit between GC and sync.
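A one-off verification can be started from the CLI; recurring verify jobs are then scheduled per datastore (WebUI or /etc/proxmox-backup/verification.cfg):

```
# Verify all snapshots in a datastore once (IO- and CPU-heavy; run it off-peak)
proxmox-backup-manager verify offsite-ds
```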
Store encryption keys offsite. If backups are client-side encrypted and the key only lives on the destroyed PVE, the backups are useless. Keys belong in a cloud password manager (Bitwarden, 1Password) plus an offline copy at a second location — in an emergency a printed QR code in a safe deposit box. Yes, really.
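For the printed copy, proxmox-backup-client brings its own subcommand. A sketch; by default the key lives under ~/.config/proxmox-backup/encryption-key.json:

```
# Create a client-side encryption key (protect it with a passphrase)
proxmox-backup-client key create

# Printable backup of the key for offline storage (html output is also available)
proxmox-backup-client key paperkey --output-format text > pbs-paperkey.txt
```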
Prepare DNS failover for restore. In a disaster recovery case the local PBS is often gone too. Restore must work directly from the external PBS to a freshly installed PVE. That means: external PBS must be reachable, credentials must be available (password manager with offline backup), encryption keys must be safely stored, restore walkthrough must be documented. Restore testing is a duty, not a courtesy.
Use namespaces for multi-tenancy. One namespace per customer or workload class in the same datastore: separate ACLs, separate prune policies, separate sync filters. But beware: if the same VM is backed up to two namespaces, the dirty bitmap is invalidated and the entire disk must be re-read and re-hashed on every backup. In practice: a VM lives in exactly one namespace.
Three retention places — one source of truth. We mentioned it above, but it is important enough to repeat: retention only at the PBS datastore, PVE job set to "Keep all backups". Otherwise you prune away snapshots you hoped to have offsite.
Mind bandwidth asymmetry. Consumer connections often have 1 Gbps down, 50 Mbps up. Initial sync from the offsite back (in a DR case) is fast — sync to offsite is slow. With business connections usually symmetric, but worth checking.
Our approach at WZ-IT
At WZ-IT we build and operate PBS pull architectures as an integral part of our Proxmox & Private Cloud services. Three stations:
Architecture workshop (Proxmox consulting). Together we clarify data volume, RTO/RPO requirements, compliance obligations (NIS2, ISO 27001, BSI C5), and bandwidth realities. Output: an architecture document with hosting recommendation, retention policy, encryption strategy, and restore runbook.
Building the pull architecture (Proxmox setup). Local PBS highly available in the cluster, offsite PBS at netcup or another EU provider of your choice, dedicated sync user with DatastoreReader role, encryption keys safely stored, verify and restore tests documented. Including provisioning as code so the configuration stays reproducible.
Operations with defined SLAs (Maintenance & SLA). 24/7 monitoring of sync jobs, garbage collection, and verify results. Quarterly restore tests, monthly compliance report to management. For critical CVEs at kernel or PBS level, we respond within the agreed window through our Managed CVE monitoring — live patching for the PBS kernel included.
For compliance-relevant setups, we document the architecture as an annex to your NIS2 risk-management documentation. Auditors get the full evidence package (data residency, retention, encryption, restore tests, verify results), exportable on demand.
Further reading
- Proxmox Backup Server (hub) — overview of PBS, S3 backend, and managed-service options
- Proxmox Disaster Recovery — RTO/RPO, failover runbooks, and DR drills for the entire cluster
- Proxmox Security Hardening — cluster hardening per BSI guidance and CIS benchmark
- Linux Kernel Vulnerabilities 2026: Patch Management Board-Level — the NIS2 context in detail
- CVE Monitoring for Self-Hosted Software — how to respond structurally to CVE disclosures
- PBS 4.2 S3 Storage Stable — the S3 backend for object-lock immutability
Planning a PBS offsite setup or doubting the safety of your current backup architecture? We review your configuration free of charge and deliver a concrete recommendation — pull or push, local or managed, S3 or tape. With documented retention policy, restore runbook, and compliance assessment.
Book a free backup architecture review · Learn about PBS · Proxmox hub
Frequently Asked Questions
Answers to important questions about this topic
Pull or push: which sync direction is safer?
Pull. In pull mode the external PBS only needs read access to the source, and the source PBS does not even know the remote exists. Even if the production infrastructure is compromised, the attacker cannot delete backups on the external PBS — and the offsite server stays invisible to the source. Proxmox explicitly recommends pull wherever possible.

How should retention be split between local and offsite?
Locally, backups are kept short (e.g. 7-14 days for fast operational restores), offsite long (months to years for compliance and worst-case scenarios). Important: PBS sync does not pull back snapshots that were already pruned locally — retention must be planned in one direction. Locally shorter than offsite, never the other way around.

What does offsite really mean?
Offsite is a physically and administratively separated location — different power supply, different network, ideally a different natural-hazard zone. A PBS in the same rack is not offsite. Rule of thumb: at least 50-100 km of distance, a different hoster, EU region for GDPR compliance. With Hetzner Falkenstein as primary, netcup Nuremberg is a good offsite location.

How does PBS deduplication work?
VMs are split into 4 MiB fixed-size chunks, containers and hosts into dynamic chunks via a Buzhash rolling hash. Each chunk is identified by its SHA-256 hash — identical content lands only once in the datastore. Deduplication works automatically across multiple VMs and snapshots. In practice, reductions from 7 TB to 1 TB of storage are not unusual.

What is the 3-2-1-1-0 rule?
An extension of the classical 3-2-1 rule: 3 data copies, 2 different media, 1 offsite, 1 immutable or offline (against ransomware), 0 errors on verify. Pure PBS pull does not yet cover 'immutable' — that requires Object Lock on S3, tape backup, or removable datastores.

Which hosting options exist for the external PBS?
Your own root server at netcup with local block storage (good performance, EU, no vendor lock-in), managed PBS providers like Tuxis, Cloud-PBS, or Inett, own hardware in a second data center, or — since PBS 4.0 — an S3-compatible backend with Object Lock for true immutability. A Hetzner Storage Box as a CIFS mount works technically but performs poorly for garbage collection and verify.

What does NIS2 require of the backup strategy?
NIS2 requires documented risk management including a backup and recovery strategy. The managing director is personally liable. A pull-based offsite architecture with documented retention, verify jobs, and restore tests delivers exactly the evidence an auditor wants to see: provable EU data residency, protection against ransomware lateral movement, and defined recovery time and recovery point objectives.

Written by
Timo Wevelsiep
Co-Founder & CEO
Co-Founder of WZ-IT. Specialized in cloud infrastructure, open-source platforms and managed services for SMEs and enterprise clients worldwide.
LinkedIn

Let's Talk About Your Idea
Whether a specific IT challenge or just an idea – we look forward to the exchange. In a brief conversation, we'll evaluate together if and how your project fits with WZ-IT.


Timo Wevelsiep & Robin Zins
Managing Directors of WZ-IT




