Migrate AWS S3 to Hetzner Object Storage – rclone & MinIO Client Guide

Editorial note: The information in this article was compiled to the best of our knowledge at the time of publication. Technical details, prices, versions, licensing terms, and external content may change. Please verify the information provided independently, particularly before making business-critical or security-related decisions. This article does not replace individual professional, legal, or tax advice.

AWS S3 too expensive? Egress costs exploding? We support you with the complete migration to Hetzner Object Storage – from analysis to cutover. 👉 AWS Migration Overview | Request Free Analysis
When migrating from AWS, S3 is often the first big challenge: lots of data, lots of requests – and everything needs to be correct in the end. Hetzner Object Storage is S3-compatible (Docs), so existing S3 tools continue to work – just with Hetzner endpoints.
This guide shows a proven workflow: Initial Copy → Delta Sync → Cutover, including tool comparison (rclone vs. MinIO Client vs. AWS CLI) and verification checks.
Table of Contents
- Prerequisites and Planning
- Tool Selection: rclone vs. MinIO Client vs. AWS CLI
- Step-by-Step: Initial Sync → Delta Sync → Cutover
- Verify Data Integrity
- Common Pitfalls
- Our Approach at WZ-IT
- Related Guides
Prerequisites and Planning
Pre-Migration Checklist
Before running any commands, clarify these points – they determine tool choice and cutover strategy:
| Question | Why Relevant |
|---|---|
| Bucket size & object count | TB? Millions of objects? Determines parallelism and duration |
| Change rate | New/updated objects per hour/day? Number of delta syncs |
| Versioning / Object Lock | Compliance features may need to be replicated |
| Custom Metadata | x-amz-meta-* headers? → Tool choice critical! |
⚠️ Important: rclone, s3cmd, and `aws s3` do not copy custom metadata. If you use custom metadata headers, use MinIO Client (mc) (Hetzner Docs).
Choose Target Region/Endpoint
Hetzner Object Storage offers regional endpoints in the EU:
- `fsn1.your-objectstorage.com` (Falkenstein)
- `nbg1.your-objectstorage.com` (Nuremberg)
- `hel1.your-objectstorage.com` (Helsinki)
Choose based on:
- Proximity to compute workloads (minimize latency)
- Proximity to users (for public assets)
- Compliance requirements (data location)
Create S3 Credentials
At Hetzner, you create S3 credentials in the Console. The secret key is only visible once – store it directly in your secret manager (1Password, Vault, KMS), not in plaintext repos (Hetzner Docs).
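Because the secret key is only shown once and should never land in a repo, it helps to make your migration scripts read credentials strictly from the environment. A minimal sketch (the variable names `HETZNER_S3_ACCESS_KEY` / `HETZNER_S3_SECRET_KEY` are an assumption – adapt them to whatever your secret manager injects):

```python
import os

def load_s3_credentials(prefix: str = "HETZNER_S3") -> dict:
    """Read S3 credentials from environment variables instead of
    hardcoding them in scripts or config files checked into a repo.

    The variable naming scheme is an assumption for this sketch --
    match it to what your secret manager (1Password, Vault, ...) injects.
    """
    access_key = os.environ.get(f"{prefix}_ACCESS_KEY")
    secret_key = os.environ.get(f"{prefix}_SECRET_KEY")
    if not access_key or not secret_key:
        raise RuntimeError(
            f"Missing {prefix}_ACCESS_KEY / {prefix}_SECRET_KEY -- "
            "inject them from your secret manager, never commit them."
        )
    return {"access_key_id": access_key, "secret_access_key": secret_key}
```

Failing loudly on missing variables catches misconfigured CI runners before a sync starts against the wrong account.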
Tool Selection: rclone vs. MinIO Client vs. AWS CLI
rclone (recommended without custom metadata)
Strengths:
- Excellent for sync, delta sync, listings, and checks
- Can compare source/destination without modifying data
- Ideal for "Initial Copy → Delta → Cutover"
Weakness:
- Does not copy custom metadata
MinIO Client mc (recommended with custom metadata)
Strengths:
- Preserves custom metadata during migration
- `mc mirror` is very migration-friendly (can be used incrementally)
Weaknesses:
- For very large data volumes, fine-tuning may be needed (retries, parallelism)
AWS CLI (aws s3 sync)
Limitations:
- Cannot use two profiles simultaneously – requires local staging (download from source, upload to target)
- Does not copy custom metadata
Rule of thumb:
- Metadata doesn't matter? → rclone
- Metadata important? → MinIO Client (mc)
- Quick & simple, no metadata? → AWS CLI (but often cumbersome)
Step-by-Step: Initial Sync → Delta Sync → Cutover
Step 1: Create Target Bucket in Hetzner
Create a bucket in the Hetzner Console (choose name and location).
Public URL format at Hetzner:
https://<bucket-name>.<location>.your-objectstorage.com/<file-name>
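This URL pattern can be built programmatically; a small sketch that also percent-encodes the object key, since incorrectly encoded keys (spaces, UTF-8) are a common pitfall with public buckets:

```python
from urllib.parse import quote

def public_object_url(bucket: str, location: str, key: str) -> str:
    """Build the public Hetzner Object Storage URL for an object,
    following the documented pattern:
    https://<bucket-name>.<location>.your-objectstorage.com/<file-name>

    The key is percent-encoded so spaces and non-ASCII characters
    produce a valid URL; slashes in the key are preserved as path
    separators.
    """
    return f"https://{bucket}.{location}.your-objectstorage.com/{quote(key)}"
```

For example, `public_object_url("my-bucket", "fsn1", "img/logo 2.png")` encodes the space in the key while keeping the prefix path intact.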
Step 2: Dry Run with Subset
Before migrating 5–50 TB: Test with a prefix like test/ or assets/2025-….
Option A: rclone
# Configure remotes first (aws + hetzner)
rclone sync aws:my-bucket/assets/ hetzner:my-bucket/assets/ \
--progress --transfers 32 --checkers 32
Option B: MinIO Client
mc mirror --overwrite --remove aws/my-bucket/assets/ hetzner/my-bucket/assets/
Step 3: Initial Copy (Bulk Sync)
Now migrate everything (or the large prefixes first).
rclone Bulk Copy:
rclone sync aws:my-bucket hetzner:my-bucket \
--progress --transfers 64 --checkers 64 \
--fast-list --retries 10 --low-level-retries 20
MinIO Client Bulk Copy:
mc mirror --overwrite aws/my-bucket hetzner/my-bucket
Tip: For very many objects, `--fast-list` (rclone) helps reduce API calls – but test beforehand (memory/listing behavior).
Step 4: Delta Sync (Before Cutover)
If your S3 bucket continues to change, run multiple delta syncs before cutover:
- T-24h: Delta Sync
- T-1h: Delta Sync
- T-5min: Final Delta Sync after "freeze" (if possible)
rclone sync aws:my-bucket hetzner:my-bucket \
--progress --transfers 64 --checkers 64
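The repeated delta runs above are a natural candidate for scripting. A sketch that assembles the rclone invocation as an argument list (safe to pass to `subprocess.run`, no shell-quoting bugs); the remote names `aws:`/`hetzner:` are taken from the examples above and assume you configured them that way:

```python
def rclone_sync_cmd(src: str, dst: str, transfers: int = 64,
                    checkers: int = 64, extra=None) -> list:
    """Assemble the rclone sync command used for initial and delta runs.

    Returning an argv list (for subprocess.run) instead of a shell
    string avoids quoting bugs when scripting repeated delta syncs.
    The remote names ('aws:', 'hetzner:') are assumptions from the
    rclone config used throughout this guide.
    """
    cmd = ["rclone", "sync", src, dst,
           "--progress",
           "--transfers", str(transfers),
           "--checkers", str(checkers)]
    return cmd + (extra or [])
```

Usage: `subprocess.run(rclone_sync_cmd("aws:my-bucket", "hetzner:my-bucket"), check=True)` – run it at T-24h, T-1h, and after the freeze.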
Step 5: Cutover (Switch Application)
What "cutover" means depends on the use case:
| Use Case | Action |
|---|---|
| Backups/Archive | Producer writes to Hetzner from now on |
| Static Assets/CDN | Change origin URL in CDN |
| App Uploads | Change S3 endpoint + credentials in app |
Note: The public URL contains the location in the hostname (.<location>.).
Verify Data Integrity
1. Compare Listings
Compare per prefix: object count, total size.
rclone size aws:my-bucket
rclone size hetzner:my-bucket
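Beyond aggregate counts, it is worth diffing the two listings key by key. A sketch assuming you have exported both listings (e.g. via `rclone lsjson`) into `{object_key: size_in_bytes}` dicts:

```python
def diff_listings(src: dict, dst: dict) -> dict:
    """Compare two bucket listings given as {object_key: size_in_bytes}.

    Complements 'rclone size' by pinpointing individual keys:
    missing on the destination, unexpected extras, and size mismatches.
    How the dicts are produced (e.g. parsed 'rclone lsjson' output)
    is left to the caller.
    """
    missing = sorted(set(src) - set(dst))
    extra = sorted(set(dst) - set(src))
    size_mismatch = sorted(k for k in set(src) & set(dst) if src[k] != dst[k])
    return {"missing": missing, "extra": extra, "size_mismatch": size_mismatch}
```

An empty result in all three lists is your precondition for moving on to hash checks.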
2. rclone check: Compare Sizes + Hashes
rclone check compares size and hashes (e.g., MD5/SHA1) and writes reports without modifying data (rclone Docs).
rclone check aws:my-bucket hetzner:my-bucket --one-way --progress
⚠️ ETag ≠ MD5 for Multipart Uploads: For objects uploaded via multipart upload, the ETag is not the MD5 digest (AWS Docs). Don't rely blindly on ETag-MD5 comparisons.
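The multipart ETag scheme can be reproduced locally, which makes the mismatch concrete: S3 hashes each part with MD5, then takes the MD5 of the concatenated part digests and appends `-<part count>`. A sketch (note: the real ETag depends on the part size the uploading client actually used, which you generally don't know after the fact):

```python
import hashlib

def s3_etag(data: bytes, part_size: int = 8 * 1024 * 1024) -> str:
    """Reproduce the S3 ETag scheme: plain MD5 for a single-part
    upload, but MD5-of-part-MD5s plus '-<part count>' for multipart.
    This is why ETag != MD5 for multipart-uploaded objects.

    The part size must match what the uploading client used; 8 MiB is
    just a common default, not a guarantee.
    """
    if len(data) <= part_size:
        return hashlib.md5(data).hexdigest()
    part_md5s = [hashlib.md5(data[i:i + part_size]).digest()
                 for i in range(0, len(data), part_size)]
    combined = hashlib.md5(b"".join(part_md5s)).hexdigest()
    return f"{combined}-{len(part_md5s)}"
```

A trailing `-3` in an ETag therefore tells you the object was uploaded in 3 parts – and that comparing it against a plain MD5 is meaningless.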
3. Spot Checks for Critical Data
For "must be 100% correct" data (DB dumps, releases, invoices):
- Randomly select 100–1,000 objects (various size classes)
- Download from both buckets and hash locally (SHA-256), compare
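The spot check can be sketched as follows, assuming you have already downloaded the sampled objects from both buckets into two local directories with identical relative paths (e.g. via `rclone copy`):

```python
import hashlib
import random
from pathlib import Path

def spot_check(src_dir: str, dst_dir: str, sample_size: int = 100,
               seed: int = 42) -> list:
    """SHA-256 spot check over two local directories holding objects
    downloaded from the source and destination buckets.

    Assumes the same relative paths exist on both sides. Returns the
    relative paths whose hashes differ or that are missing on the
    destination. The fixed seed makes the sample reproducible.
    """
    src_files = sorted(p.relative_to(src_dir)
                       for p in Path(src_dir).rglob("*") if p.is_file())
    random.seed(seed)
    sample = random.sample(src_files, min(sample_size, len(src_files)))
    bad = []
    for rel in sample:
        dst_path = Path(dst_dir) / rel
        if not dst_path.is_file():
            bad.append(str(rel))
            continue
        h_src = hashlib.sha256((Path(src_dir) / rel).read_bytes()).hexdigest()
        if h_src != hashlib.sha256(dst_path.read_bytes()).hexdigest():
            bad.append(str(rel))
    return sorted(bad)
```

An empty return list for a sample spanning several size classes is strong (though not exhaustive) evidence of a clean copy.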
Common Pitfalls
Pitfall 1: Custom Metadata Lost
rclone, aws s3, and s3cmd do not copy custom metadata. If you use any → use MinIO Client.
Pitfall 2: Wrong Endpoint
Hetzner endpoints are location-specific (fsn1/nbg1/hel1). If you use fsn1 but the bucket is in nbg1, you'll see strange errors or empty listings.
Pitfall 3: Public URL Doesn't Work
Common errors with public buckets:
- Location missing in the hostname (`.<location>.`)
- Object key incorrectly encoded (spaces/UTF-8)
Pitfall 4: Costs/Requests Miscalculated
At Hetzner Object Storage:
- Ingress is free
- S3 operations (PUT/GET/DELETE) are free
- Billing via base price + included quota + pay-as-you-go
With extremely many objects, migration speed may be limited by API/parallelism, not costs.
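For a first duration estimate, simple arithmetic is enough; the sustained throughput is an assumption you should replace with what your dry run (Step 2) actually measured – with many small objects, per-request overhead dominates and the effective rate drops well below raw bandwidth:

```python
def estimate_hours(total_bytes: float, throughput_mb_s: float = 100.0) -> float:
    """Rough migration-time estimate: volume divided by sustained
    throughput (MiB/s). The default rate is an assumption -- measure
    the real rate in a dry run; small objects push it far lower.
    """
    return total_bytes / (throughput_mb_s * 1024 * 1024) / 3600

# Example: 10 TiB at a sustained 100 MiB/s -> roughly 29 hours
hours = estimate_hours(10 * 1024**4)
```

This is why the delta-sync schedule matters: the initial copy of a multi-TB bucket routinely spans a day or more, while the deltas stay short.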
Our Approach at WZ-IT
Our standard flow for AWS S3 → Hetzner Object Storage:
- Assessment: Bucket features (versioning/object lock/metadata), sizes, change rate
- Pilot: Migrate test prefix + validation checks
- Bulk + Delta: Initial sync + multiple delta runs
- Cutover Plan: Freeze window, endpoint switch, rollback plan
- Monitoring: Error rates, 404s, latencies, app logs
👉 We create a concrete cutover runbook for you (including commands, parallelism tuning, verification plan).
Related Guides
You might also be interested in these guides:
- Migrate AWS RDS/Aurora PostgreSQL to Hetzner – pg_dump/restore Guide
- AWS to Hetzner Migration – Complete Overview – All AWS services at a glance
- Cloud Migration Hub – All provider migrations at a glance
- Save Cloud Costs: AWS EC2 & RDS – Up to 90% savings
- Set Up Hetzner Cloud Volumes – Block storage for your workloads
- Hetzner Expertise – Our Hetzner services overview
Ready for migration? At WZ-IT, we handle analysis, migration, and operation of your infrastructure – from AWS to cost-effective European alternatives.
Frequently Asked Questions
Answers to important questions about this topic

How long does the migration take?
Duration depends on data volume, object count, parallelism, and network bandwidth. Roughly: the initial copy takes hours to days, delta syncs take minutes to hours.

What happens to custom metadata?
Custom metadata (x-amz-meta-*) is lost when using rclone, aws s3, and s3cmd. If custom metadata is important, use MinIO Client (mc) instead.

Which regions does Hetzner Object Storage offer?
Hetzner offers regional endpoints: fsn1.your-objectstorage.com (Falkenstein), nbg1.your-objectstorage.com (Nuremberg), and hel1.your-objectstorage.com (Helsinki).

Why doesn't the ETag match the MD5 hash?
For multipart uploads, the ETag is not the MD5 digest of the entire file. Don't rely blindly on ETag-MD5 comparisons – use rclone check or sample downloads instead.

What do traffic and requests cost?
Ingress (upload) is free, and S3 operations (PUT/GET/DELETE) are also free. Billing is based on a base price plus included quota, with pay-as-you-go for overages.

Written by
Timo Wevelsiep
Co-Founder & CEO
Co-Founder of WZ-IT. Specialized in cloud infrastructure, open-source platforms and managed services for SMEs and enterprise clients worldwide.
Let's Talk About Your Idea
Whether a specific IT challenge or just an idea – we look forward to the exchange. In a brief conversation, we'll evaluate together if and how your project fits with WZ-IT.


Timo Wevelsiep & Robin Zins
CEOs of WZ-IT




