
Migrate AWS S3 to Hetzner Object Storage – rclone & MinIO Client Guide

Timo Wevelsiep
#AWS #S3 #Hetzner #ObjectStorage #Migration #rclone #MinIO #CloudMigration

Editorial note: The information in this article was compiled to the best of our knowledge at the time of publication. Technical details, prices, versions, licensing terms, and external content may change. Please verify the information provided independently, particularly before making business-critical or security-related decisions. This article does not replace individual professional, legal, or tax advice.


AWS S3 too expensive? Egress costs exploding? We support you with the complete migration to Hetzner Object Storage – from analysis to cutover. 👉 AWS Migration Overview | Request Free Analysis

When migrating from AWS, S3 is often the first big challenge: lots of data, lots of requests – and everything needs to be correct in the end. Hetzner Object Storage is S3-compatible (Docs), so existing S3 tools continue to work – just with Hetzner endpoints.

This guide shows a proven workflow: Initial Copy → Delta Sync → Cutover, including tool comparison (rclone vs. MinIO Client vs. AWS CLI) and verification checks.

Prerequisites and Planning

Pre-Migration Checklist

Before running any commands, clarify these points – they determine tool choice and cutover strategy:

  • Bucket size & object count – TB? Millions of objects? Determines parallelism and duration
  • Change rate – new/updated objects per hour/day? Determines the number of delta syncs
  • Versioning / Object Lock – compliance features may need to be replicated
  • Custom metadata – x-amz-meta-* headers? Tool choice is critical!

⚠️ Important: rclone, s3cmd, and aws s3 do not copy custom metadata. If you use custom metadata headers, use MinIO Client (mc) (Hetzner Docs).
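To find out whether a bucket actually uses custom metadata before choosing a tool, you can inspect a few objects with the AWS CLI – a quick sketch (bucket and key names are hypothetical examples):

```shell
# Show one object's headers; custom metadata appears under "Metadata"
# in the JSON output (bucket/key are placeholders)
aws s3api head-object --bucket my-bucket --key invoices/2024/invoice-001.pdf
```

If "Metadata" is consistently empty across your spot checks, rclone is safe; otherwise plan the migration with mc.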

Choose Target Region/Endpoint

Hetzner Object Storage offers regional endpoints in the EU:

  • fsn1.your-objectstorage.com (Falkenstein)
  • nbg1.your-objectstorage.com (Nuremberg)
  • hel1.your-objectstorage.com (Helsinki)

Choose based on:

  • Proximity to compute workloads (minimize latency)
  • Proximity to users (for public assets)
  • Compliance requirements (data location)

Create S3 Credentials

At Hetzner, you create S3 credentials in the Console. The secret key is only visible once – store it directly in your secret manager (1Password, Vault, KMS), not in plaintext repos (Hetzner Docs).
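Once both sets of credentials exist, you can register both endpoints with the MinIO Client – a sketch assuming the AWS bucket lives in eu-central-1 and the Hetzner bucket in fsn1 (alias names are arbitrary; keys come from environment variables, not plaintext files):

```shell
# Register source and target endpoints as mc aliases
mc alias set aws https://s3.eu-central-1.amazonaws.com "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY"
mc alias set hetzner https://fsn1.your-objectstorage.com "$HETZNER_ACCESS_KEY" "$HETZNER_SECRET_KEY"
```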


Tool Selection: rclone vs. MinIO Client vs. AWS CLI

rclone

Strengths:

  • Excellent for sync, delta sync, listings, and checks
  • Can compare source/destination without modifying data
  • Ideal for "Initial Copy → Delta → Cutover"

Weakness:

  • Does not copy custom metadata

MinIO Client (mc)

Strengths:

  • Preserves custom metadata during migration
  • mc mirror is very migration-friendly (can be used incrementally)

Weakness:

  • For very large data volumes, fine-tuning may be needed (retries, parallelism)

AWS CLI (aws s3 sync)

Limitations:

  • Cannot address two different endpoints/credential sets in a single command – requires local staging (download from source, upload to target)
  • Does not copy custom metadata

Rule of thumb:

  • Metadata doesn't matter? → rclone
  • Metadata important? → MinIO mc
  • Quick & simple, no metadata? → AWS CLI (but often cumbersome)

Step-by-Step: Initial Sync → Delta Sync → Cutover

Step 1: Create Target Bucket in Hetzner

Create a bucket in the Hetzner Console (choose name and location).

Public URL format at Hetzner:

https://<bucket-name>.<location>.your-objectstorage.com/<file-name>

Step 2: Dry Run with Subset

Before migrating 5–50 TB, test with a small prefix such as test/ or assets/2025-….

Option A: rclone

# Configure remotes first (aws + hetzner)
rclone sync aws:my-bucket/assets/ hetzner:my-bucket/assets/ \
  --progress --transfers 32 --checkers 32
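The remote configuration referenced in the comment above could look like this – a minimal sketch of ~/.config/rclone/rclone.conf; region, endpoint, and the credential placeholders must match your own setup:

```ini
# ~/.config/rclone/rclone.conf – minimal sketch, values are examples
[aws]
type = s3
provider = AWS
env_auth = true
region = eu-central-1

[hetzner]
type = s3
provider = Other
access_key_id = YOUR_HETZNER_KEY
secret_access_key = YOUR_HETZNER_SECRET
endpoint = https://fsn1.your-objectstorage.com
```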

Option B: MinIO Client

mc mirror --overwrite --remove aws/my-bucket/assets/ hetzner/my-bucket/assets/
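Whichever option you choose, it is worth previewing the transfer first. rclone's --dry-run flag lists what would be copied or deleted without touching any data:

```shell
# Preview only – nothing is transferred or deleted
rclone sync aws:my-bucket/assets/ hetzner:my-bucket/assets/ --dry-run
```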

Step 3: Initial Copy (Bulk Sync)

Now migrate everything (or the large prefixes first).

rclone Bulk Copy:

rclone sync aws:my-bucket hetzner:my-bucket \
  --progress --transfers 64 --checkers 64 \
  --fast-list --retries 10 --low-level-retries 20

MinIO Client Bulk Copy:

mc mirror --overwrite aws/my-bucket hetzner/my-bucket

Tip: For very many objects, --fast-list (rclone) helps reduce API calls – but test beforehand (memory/listing behavior).

Step 4: Delta Sync (Before Cutover)

If your S3 bucket continues to change, run multiple delta syncs before cutover:

  • T-24h: Delta Sync
  • T-1h: Delta Sync
  • T-5min: Final Delta Sync after "freeze" (if possible)

rclone sync aws:my-bucket hetzner:my-bucket \
  --progress --transfers 64 --checkers 64
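For the repeated delta runs, keeping a log per run makes later verification easier – a sketch using rclone's standard logging flags (the log file name is an example):

```shell
# Timestamped log per delta run, e.g. delta-20250101T0930.log
rclone sync aws:my-bucket hetzner:my-bucket \
  --transfers 64 --checkers 64 \
  --log-file "delta-$(date +%Y%m%dT%H%M).log" --log-level INFO
```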

Step 5: Cutover (Switch Application)

What "cutover" means depends on the use case:

  • Backups/Archive – producer writes to Hetzner from now on
  • Static Assets/CDN – change the origin URL in the CDN
  • App Uploads – change the S3 endpoint + credentials in the app

Note: The public URL contains the location in the hostname (.<location>.).
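After switching, a quick smoke test against the Hetzner endpoint confirms the new configuration works – a sketch with the AWS CLI (bucket name and location are examples):

```shell
# List the bucket via the Hetzner endpoint – should succeed with the new credentials
aws s3 ls s3://my-bucket --endpoint-url https://fsn1.your-objectstorage.com
```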


Verify Data Integrity

1. Compare Listings

Compare per prefix: object count, total size.

rclone size aws:my-bucket
rclone size hetzner:my-bucket
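For large buckets, comparing per prefix narrows down discrepancies faster than one bucket-wide total – a sketch with hypothetical prefix names:

```shell
# Compare object count and total size per prefix (prefix names are examples)
for p in assets backups uploads; do
  echo "== $p =="
  rclone size "aws:my-bucket/$p"
  rclone size "hetzner:my-bucket/$p"
done
```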

2. rclone check: Compare Sizes + Hashes

rclone check compares size and hashes (e.g., MD5/SHA1) and writes reports without modifying data (rclone Docs).

rclone check aws:my-bucket hetzner:my-bucket --one-way --progress

⚠️ ETag ≠ MD5 for Multipart Uploads: For objects uploaded via multipart upload, the ETag is not the MD5 digest (AWS Docs). Don't rely blindly on ETag-MD5 comparisons.
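Where ETags are unreliable, rclone can compare actual file contents instead of hashes via the --download flag – slower, but independent of ETag semantics, so best reserved for critical prefixes (the prefix here is an example):

```shell
# Downloads both copies and compares the bytes – use for critical prefixes only
rclone check aws:my-bucket/releases/ hetzner:my-bucket/releases/ \
  --one-way --download
```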

3. Spot Checks for Critical Data

For "must be 100% correct" data (DB dumps, releases, invoices):

  • Randomly select 100–1,000 objects (various size classes)
  • Download from both buckets and hash locally (SHA-256), compare
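The spot check can be scripted without temporary files by streaming each object through a hash – a sketch for a single object (the key is a hypothetical example):

```shell
# Stream the same object from both buckets and compare SHA-256 digests
src=$(rclone cat aws:my-bucket/releases/app.tar.gz | sha256sum | cut -d' ' -f1)
dst=$(rclone cat hetzner:my-bucket/releases/app.tar.gz | sha256sum | cut -d' ' -f1)
[ "$src" = "$dst" ] && echo "OK" || echo "MISMATCH"
```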

Common Pitfalls

Pitfall 1: Custom Metadata Lost

rclone, aws s3, and s3cmd do not copy custom metadata. If you use any → use MinIO Client.

Pitfall 2: Wrong Endpoint

Hetzner endpoints are location-specific (fsn1/nbg1/hel1). If you use fsn1 but the bucket is in nbg1, you'll see strange errors or empty listings.

Pitfall 3: Public URL Doesn't Work

Common errors with public buckets:

  • Location missing (.<location>.)
  • Object key incorrectly encoded (spaces/UTF-8)

Pitfall 4: Costs/Requests Miscalculated

At Hetzner Object Storage:

  • Ingress is free
  • S3 operations (PUT/GET/DELETE) are free
  • Billing is based on a base price plus included quota, with pay-as-you-go for overages

With extremely many objects, migration speed may be limited by API/parallelism, not costs.


Our Approach at WZ-IT

Our standard flow for AWS S3 → Hetzner Object Storage:

  1. Assessment: Bucket features (versioning/object lock/metadata), sizes, change rate
  2. Pilot: Migrate test prefix + validation checks
  3. Bulk + Delta: Initial sync + multiple delta runs
  4. Cutover Plan: Freeze window, endpoint switch, rollback plan
  5. Monitoring: Error rates, 404s, latencies, app logs

👉 We create a concrete cutover runbook for you (including commands, parallelism tuning, verification plan).


Ready for migration? At WZ-IT, we handle analysis, migration, and operation of your infrastructure – from AWS to cost-effective European alternatives.

👉 Request Free Cost Analysis

Frequently Asked Questions

Answers to important questions about this topic

How long does the migration take?

Duration depends on data volume, object count, parallelism, and network bandwidth. Roughly: the initial copy takes hours to days, delta syncs take minutes to hours.

What happens to custom metadata?

Custom metadata (x-amz-meta-*) is lost when using rclone, aws s3, and s3cmd. If custom metadata is important, use MinIO Client (mc) instead.

Which endpoints does Hetzner Object Storage offer?

Hetzner offers regional endpoints: fsn1.your-objectstorage.com (Falkenstein), nbg1.your-objectstorage.com (Nuremberg), and hel1.your-objectstorage.com (Helsinki).

Why doesn't the ETag match the MD5 hash?

For multipart uploads, the ETag is not the MD5 digest of the entire file. Don't rely blindly on ETag-MD5 comparisons – use rclone check or sample downloads instead.

What does Hetzner Object Storage cost?

Ingress (upload) is free, and S3 operations (PUT/GET/DELETE) are also free. Billing is based on a base price plus included quota, with pay-as-you-go for overages.


Written by

Timo Wevelsiep

Co-Founder & CEO

Co-Founder of WZ-IT. Specialized in cloud infrastructure, open-source platforms and managed services for SMEs and enterprise clients worldwide.
