Proxmox Backup Server 4.2: S3 Storage Exits Tech Preview — What You Need to Know

Editorial note: The information in this article was compiled to the best of our knowledge at the time of publication. Technical details, prices, versions, licensing terms, and external content may change. Please verify the information provided independently, particularly before making business-critical or security-related decisions. This article does not replace individual professional, legal, or tax advice.

Need help setting up and operating Proxmox Backup Server? WZ-IT handles planning, installation and ongoing operations of your backup infrastructure. Schedule a free consultation
With Proxmox Backup Server 4.2 (released late April 2026), S3-compatible object storage is officially supported as a backup backend. No more tech preview label, no experimental warnings. But there are limitations you need to know — and one of them can break your datastore structure.
Table of Contents
- What's new in PBS 4.2
- S3 as backup backend: what's now stable
- The local cache: how the architecture works
- Object Lock: why it doesn't work and what happens when you enable it
- Other limitations you need to know
- Which S3 providers work
- Our recommendation
What's new in PBS 4.2
PBS 4.2 is based on Debian 13.4 ("Trixie") with Linux Kernel 7.0 and ZFS 2.4. Key highlights:
- S3 storage officially stable — no longer tech preview
- Request and traffic statistics for S3 datastores directly in the web GUI
- Parallel sync jobs — the new worker-threads parameter allows multiple backup groups to be synced simultaneously
- Backup groups and namespaces can be moved within a datastore
- Encryption for push/pull sync — snapshots can be encrypted and decrypted on the fly
- Centralized key management — tape and sync encryption keys in a single panel
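The parallel sync feature can be pictured as a bounded worker pool. The sketch below is not PBS code — `sync_group` is a hypothetical stand-in for syncing one backup group — it only illustrates the idea behind the worker-threads parameter: up to N groups are processed at once instead of strictly one after another.

```python
from concurrent.futures import ThreadPoolExecutor

def sync_group(group: str) -> str:
    # Hypothetical stand-in for syncing one backup group; in PBS the
    # sync job itself does this work, not user code.
    return f"synced {group}"

def run_sync(groups: list[str], worker_threads: int = 4) -> list[str]:
    """Sync backup groups concurrently: at most worker_threads groups
    are in flight at the same time, mirroring the PBS parameter."""
    with ThreadPoolExecutor(max_workers=worker_threads) as pool:
        return list(pool.map(sync_group, groups))

print(run_sync(["vm/100", "vm/101", "ct/200"], worker_threads=2))
```

With many small backup groups, raising the worker count mainly helps hide per-group latency (relevant for S3 targets); it does not speed up a single large group.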
S3 as backup backend: what's now stable
S3-compatible object stores can be used as a datastore backend in PBS. This means chunks and metadata are stored in the S3 bucket instead of a local filesystem.
What works:
- Backup and restore of VMs and containers
- Deduplication (chunks are deduplicated just like with local datastores)
- Client-side encryption (AES-256-GCM)
- Garbage collection
- Sync jobs (push and pull)
- Traffic and request monitoring in the GUI
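Deduplication on S3 works the same way as locally because chunks are content-addressed: the chunk's identity is a hash of its content, so an identical chunk is uploaded and stored only once. A minimal sketch of that idea (not the actual PBS chunk format):

```python
import hashlib

class ChunkStore:
    """Toy content-addressed chunk store: a chunk's key is the SHA-256
    of its content, so identical chunks are stored exactly once."""

    def __init__(self) -> None:
        self.chunks: dict[str, bytes] = {}

    def insert(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        # setdefault is a no-op if the chunk already exists -> dedup
        self.chunks.setdefault(digest, data)
        return digest

store = ChunkStore()
a = store.insert(b"payload")
b = store.insert(b"payload")   # duplicate: same digest, no new object
assert a == b and len(store.chunks) == 1
```

This is also why the Object Lock limitation discussed below exists: whether a chunk may be deleted depends on reference counting across all backup indexes, not on per-object retention dates.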
The local cache: how the architecture works
PBS doesn't store everything directly in the bucket when using S3 datastores. There's a local cache that's essential for operation:
- Metadata is stored locally and persistently — PBS needs it for datastore operations
- Chunks are cached using LRU — frequently used chunks stay local, rarely used ones are only kept in S3
- The cache is "greedy" — it uses as much space as is available
Recommendation: 64-128 GB of dedicated storage for the cache. Ideally a separate filesystem or partition so the cache doesn't crowd out other services. To limit the cache, use filesystem quotas or a dedicated partition.
Important: In PBS 4.1, an S3 refresh (syncing the local cache with the bucket) had a bug that caused a panic. This is fixed in 4.2. However, an S3 refresh still briefly blocks the datastore — plan for this when running multiple datastores.
Object Lock: why it doesn't work and what happens when you enable it
This is the most critical point. PBS does not support S3 Object Lock, S3 Versioning, or Immutable Objects.
Why not:
- PBS manages data integrity through its own garbage collection
- Chunks are deleted when no index references them anymore
- Object Lock prevents chunk deletion → garbage collection can't clean up → datastore structure becomes inconsistent
What happens when Object Lock is active on the bucket:
- Garbage collection fails or produces errors
- Chunks that are no longer referenced can't be deleted
- Storage consumption grows uncontrollably
- In the worst case, the datastore becomes unusable
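The failure mode can be pictured as a mark-and-sweep pass where the sweep is blocked. The sketch below is a simplified model, not PBS code: chunks no longer referenced by any index should be deleted, but Object Lock refuses the deletions, so unreferenced chunks accumulate as garbage that can never be freed.

```python
def garbage_collect(chunks: set[str], referenced: set[str],
                    locked: set[str]) -> tuple[set[str], set[str]]:
    """Mark-and-sweep sketch: return (chunks remaining in the bucket,
    chunks that should be deleted but are blocked by Object Lock)."""
    to_delete = chunks - referenced       # sweep candidates
    blocked = to_delete & locked          # lock prevents these deletions
    remaining = (chunks - to_delete) | blocked
    return remaining, blocked

remaining, blocked = garbage_collect(
    chunks={"c1", "c2", "c3"},
    referenced={"c1"},            # only c1 is still referenced
    locked={"c2", "c3"},          # Object Lock on the whole bucket
)
assert blocked == {"c2", "c3"}    # garbage that can never be freed
```

Run after run, `blocked` only grows, which is exactly the uncontrolled storage growth described above.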
Our clear recommendation: keep Object Lock disabled on the S3 bucket. If you need immutable backups, you need a solution outside of PBS (e.g., a separate S3 bucket with Object Lock for manual snapshot copies).
Other limitations you need to know
- No native immutability — PBS doesn't have its own immutable backup feature. This is tracked as a feature request (issue 6780) but isn't trivial to implement due to the deduplication architecture.
- Latency — S3 is slower than local storage. Restore times are higher, especially with many small chunks. For critical RTOs (Recovery Time Objectives), we recommend a local PBS as the primary datastore and S3 as an offsite copy.
- S3 API costs — Every backup operation generates S3 requests (PUT, GET, DELETE, LIST). With providers that charge per request (AWS S3, partially Wasabi), costs can become significant with many small backups. The new traffic statistics in PBS 4.2 help with cost control.
- No multi-datastore S3 refresh — When an S3 refresh ran on one datastore in PBS 4.1, all datastores were briefly unavailable. This is improved in 4.2, but it's still not a zero-impact operation.
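To get a feel for the request-cost point, a back-of-the-envelope estimate helps. All figures in the example below are made-up placeholders, not real provider prices or PBS chunk counts:

```python
def monthly_request_cost(backups_per_day: int, chunks_per_backup: int,
                         put_price_per_1000: float) -> float:
    """Rough monthly PUT-request cost over a 30-day month.
    Placeholder numbers only; check your provider's actual pricing."""
    puts = backups_per_day * chunks_per_backup * 30
    return puts / 1000 * put_price_per_1000

# Example: 10 backups/day, 2,000 new chunks each, $0.005 per 1,000 PUTs
cost = monthly_request_cost(10, 2000, 0.005)
print(f"~${cost:.2f}/month in PUT requests")
```

Deduplication works in your favor here: only chunks that are new to the datastore are uploaded, so incremental backups of mostly unchanged data generate far fewer PUTs than this naive upper bound.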
Which S3 providers work
PBS works with all S3-compatible providers:
| Provider | Endpoint | Notes |
|---|---|---|
| Hetzner Object Storage | s3.{region}.hetzner.com | GDPR-compliant, German data centers |
| MinIO (Self-Hosted) | Custom endpoint | Full control, on-premise possible |
| AWS S3 | s3.{region}.amazonaws.com | Most comprehensive feature set, US CLOUD Act applies |
| Wasabi | s3.{region}.wasabisys.com | Affordable, but 90-day minimum retention |
| Backblaze B2 | s3.{region}.backblazeb2.com | Affordable, good S3 compatibility |
| STACKIT Object Storage | Custom | BSI C5, sovereign, GDPR-compliant |
For GDPR-compliant setups, we recommend Hetzner Object Storage or STACKIT as S3 backends.
Our recommendation
The most sensible architecture is a hybrid setup:
- Primary PBS with local storage (ZFS/LVM) for fast backups and fast restores
- Secondary PBS or sync job with S3 backend for offsite copies
- No Object Lock on the S3 bucket
- Monitor S3 request costs via the new PBS 4.2 statistics
- Sufficient local cache (64-128 GB) for the S3 datastore
S3 as the sole backend for production backups is possible, but restore performance and the lack of immutability make it more of a complement than a replacement for a local PBS.
We operate Proxmox Backup Server for you — planning, installation, monitoring and ongoing maintenance. Whether local PBS, S3 offsite or hybrid setup: we're happy to advise you.
Further Guides
- Proxmox Backup Server Expertise — Everything about PBS at WZ-IT
- Encrypted Backups with Hetzner Storage Box — Classic PBS backup via NFS
- Install Proxmox on Hetzner — Auto-install script for Hetzner Dedicated
- Managed Operations — Professional IT operations with SLA
Frequently Asked Questions

Is S3 storage officially stable in PBS 4.2?
Yes. S3-compatible object stores are officially supported as a backup backend — no longer tech preview. However, there are limitations you need to know.

Does PBS support S3 Object Lock?
No. PBS does not support S3 Object Lock or versioning. Enabling Object Lock on the bucket can corrupt the datastore structure.

Which S3 providers work with PBS?
All S3-compatible providers: Hetzner Object Storage, MinIO, AWS S3, Wasabi, Backblaze B2 and more.

How much local cache does an S3 datastore need?
Proxmox recommends 64-128 GB for the local cache. The cache stores metadata (persistent) and chunks (LRU-based).

Written by
Timo Wevelsiep
Co-Founder & CEO
Co-Founder of WZ-IT. Specialized in cloud infrastructure, open-source platforms and managed services for SMEs and enterprise clients worldwide.
LinkedIn

Let's Talk About Your Idea
Whether a specific IT challenge or just an idea – we look forward to the exchange. In a brief conversation, we'll evaluate together if and how your project fits with WZ-IT.


Timo Wevelsiep & Robin Zins
Managing Directors of WZ-IT




