Migrate Azure Blob Storage to Hetzner Object Storage – AzCopy & rclone Guide

Timo Wevelsiep
#Azure #BlobStorage #Hetzner #ObjectStorage #Migration #AzCopy #rclone #CloudMigration

Azure costs exploding? Blob Storage too expensive? We migrate your Azure Blob containers to Hetzner Object Storage – with minimal risk and a clear cutover plan. 👉 Azure Migration Overview | Request Free Analysis

Azure Blob Storage is often the "collection point" for assets, backups, logs, and media – and thus a typical cost driver (storage + requests + egress) and a lock-in component. If you want to consolidate your storage layer, Hetzner Object Storage is attractive: it is S3-compatible, so on the target side you work with buckets and standard S3 tooling instead of Azure-specific APIs. Objects are immutable (WORM: write once, read many), and public buckets follow a clear URL format (Hetzner Docs).

This guide shows two proven migration paths:

  • Variant A (safe & simple): AzCopy → local (staging) → rclone → Hetzner
  • Variant B (direct): rclone azureblob → rclone s3 (Hetzner)

Plus: Initial Copy → Delta Sync → Cutover, integrity checks, and an important gotcha about Custom Metadata.


Basics: Azure Blob vs. Hetzner Object Storage

Azure Blob – the Hierarchy

  • Storage Account → Container → Blobs
  • "Folders" in Azure Blob are usually virtual (prefix in the name), not actual directories

Hetzner Object Storage – the Target

  • Bucket → Objects (immutable)
  • Public URL format (when bucket is set to public): https://<bucket-name>.<location>.your-objectstorage.com/<file-name> (Hetzner Docs)

The location is a region (e.g., fsn1, nbg1, hel1). Even if a tool lets you set the region separately, the endpoint hostname must contain the location.
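For example, an object logo.png in a hypothetical public bucket my-assets in fsn1 would be served at:

https://my-assets.fsn1.your-objectstorage.com/logo.png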


Prerequisites

On Azure Side

  • Access to Storage Account / Container
  • Auth strategy for AzCopy: User Identity / Managed Identity / Service Principal / SAS Token (Microsoft Learn)

On Hetzner Side

  • Bucket created in Hetzner Object Storage
  • S3 Credentials (Access Key/Secret) ready (store secret securely)
  • Endpoint/Location determined

Migration Host / Network

  • A machine/VM that can reach both Azure Blob and Hetzner Object Storage simultaneously (tools run "on your side", not server-to-server)

Tool Selection: AzCopy vs. rclone

AzCopy (Microsoft Standard for Azure Storage)

AzCopy is Microsoft's CLI to copy data in/out of Azure Storage. It supports SAS tokens in URLs and includes many copy/sync examples (Microsoft Learn).

Important: AzCopy cannot write to S3-compatible storage – it supports S3 only as a copy source. Therefore, AzCopy is useful here mainly for Blob ↔ local or Blob ↔ Blob.

rclone (the bridge: azureblob ↔ s3)

rclone supports Azure Blob as a backend and can also sync to S3-compatible targets (rclone Docs). AWS Prescriptive Guidance even has a pattern "Azure Blob → S3 with rclone" – the principle is identical, only your S3 target here is Hetzner (AWS Docs).

Metadata Gotcha

rclone (in its default configuration), s3cmd, and aws s3 do not copy custom metadata. If your objects carry custom metadata and you need to preserve it, consider MinIO Client (Hetzner Docs).

Practically this means:

  • Do you use custom metadata in Azure (e.g., for app logic, cache keys, processing flags)?
  • → Plan an explicit metadata spot check and decide if "preserve custom metadata" is a must

Variant A: AzCopy → local → rclone → Hetzner

This variant is ideal if you:

  • want a clear staging step
  • want to cleanly pull Blob data from Azure
  • want to make the upload to Hetzner separate and reproducible

Step A1: Install AzCopy & Authenticate

AzCopy v10 is the current version, with auth via Identity or SAS Token (Microsoft Learn).

Typical auth options:

  • azcopy login (User/Entra ID)
  • SAS Token appended to source URL
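A minimal sketch of both options (tenant ID and SAS token are placeholders):

# Option 1: interactive login with a Microsoft Entra ID identity
azcopy login --tenant-id "<tenant-id>"

# Option 2: no login – append a SAS token directly to the source URL
SRC="https://<account>.blob.core.windows.net/<container>?<SAS_TOKEN>"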

Step A2: Download Container Recursively to Local

# Source (Azure Blob Container)
SRC="https://<account>.blob.core.windows.net/<container>?<SAS_TOKEN>"

# Destination (local path)
DST="/data/azure-export/<container>/"

azcopy copy "$SRC" "$DST" --recursive

Optional: Sync Instead of Copy

AzCopy Sync is recursive by default and can use hash comparison (GitHub):

azcopy sync "$SRC" "$DST" --compare-hash

Step A3: Configure rclone to Hetzner (S3)

rclone config

  • Configure a remote named hetzner as an S3-compatible target
  • Endpoint: https://<location>.your-objectstorage.com
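The resulting entry in the config file (typically ~/.config/rclone/rclone.conf) looks roughly like this – keys and location are placeholders; provider = Other is a common choice for S3-compatible vendors:

[hetzner]
type = s3
provider = Other
access_key_id = <ACCESS_KEY>
secret_access_key = <SECRET_KEY>
endpoint = https://fsn1.your-objectstorage.com
acl = private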

Step A4: Upload/Sync from Local to Hetzner

rclone sync /data/azure-export/<container>/ hetzner:<bucket>/<prefix>/ \
  --progress --transfers 32 --checkers 32

Step A5: Delta Sync Before Cutover

# T-24h
rclone sync /data/azure-export/<container>/ hetzner:<bucket>/<prefix>/ --progress

# T-1h (after another azcopy sync)
rclone sync /data/azure-export/<container>/ hetzner:<bucket>/<prefix>/ --progress
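A sketch of one complete delta pass, combining both tools (variables as defined in Step A2):

# 1) Refresh the local staging copy from Azure (hash-based comparison)
azcopy sync "$SRC" "$DST" --compare-hash

# 2) Push the delta from staging to Hetzner
rclone sync "$DST" hetzner:<bucket>/<prefix>/ --progress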

Variant B: rclone direct (azureblob → s3/Hetzner)

This variant is faster to set up if you:

  • don't need to keep a local staging copy
  • already use rclone
  • want to run initial+delta directly

Step B1: Configure rclone Remote azureblob

rclone documents the configuration of azureblob (Account Name/Key) (rclone Docs).
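A minimal config sketch (account name and key are placeholders; the backend also supports SAS URLs and service principals):

[azureblob]
type = azureblob
account = <storage-account-name>
key = <storage-account-key>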

Step B2: Configure rclone Remote hetzner (S3)

  • S3 provider: "Other" / S3-compatible
  • Endpoint: https://<location>.your-objectstorage.com

Step B3: Initial Copy

rclone sync azureblob:<container>/<prefix>/ hetzner:<bucket>/<prefix>/ \
  --progress --transfers 32 --checkers 32

Step B4: Delta Sync & Cutover

As above: 1–3 delta runs, then switch the app.

Note: even "direct" means that rclone runs on your host and streams between the two APIs – this is not a server-to-server copy.


Verify Data Integrity

1. Compare Object Count & Total Size

rclone size hetzner:<bucket>/<prefix>/
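Run the same command against the source and compare – object count and total size should match (Variant B remote assumed):

rclone size azureblob:<container>/<prefix>/
rclone size hetzner:<bucket>/<prefix>/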

2. Sample Downloads + Hash

  • 100 random objects of various size classes
  • Hash locally and compare (SHA256)
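Instead of hashing samples by hand, rclone check can compare source and target for you; --one-way only reports objects that are missing or differ on the target (a sketch, assuming the Variant B remotes):

rclone check azureblob:<container>/<prefix>/ hetzner:<bucket>/<prefix>/ --one-way --log-file check.log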

3. Metadata Spot Check

For frontend assets especially, Content-Type/Caching often determines "works / doesn't work".

Custom Metadata is the big trap: rclone/s3cmd/aws s3 don't copy it (Hetzner Docs).
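To inspect what actually arrived on the target, the MinIO Client can display an object's Content-Type and user metadata (alias name and placeholders are hypothetical):

# One-time: register the Hetzner endpoint as an mc alias
mc alias set hetzner https://<location>.your-objectstorage.com "$ACCESS_KEY" "$SECRET_KEY"

# Show size, Content-Type, and metadata of a single object
mc stat hetzner/<bucket>/<object>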

4. Test Public URLs (if relevant)

If you use public buckets:

  • Fetch 10 random assets directly via browser/curl
  • Check status codes, MIME, cache headers
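A quick sketch with curl (bucket, location, and file names are placeholders) – check the status code, Content-Type, and cache headers:

curl -I "https://<bucket>.<location>.your-objectstorage.com/<file>"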

Performance Tips

AzCopy

  • Use sync for repeated runs with hash comparison (--compare-hash)
  • Shard large containers into prefixes and run in parallel

rclone

  • Increase --transfers/--checkers, but watch for throttling
  • Shard large prefixes when you have many small files (see the sketch after this list)
  • Write logs/reports cleanly (--log-file)
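A sketch of prefix sharding with rclone (Variant B remotes assumed; the prefixes are hypothetical and must match your naming scheme):

# One sync per prefix, run in parallel, each with its own log file
for p in images videos docs backups; do
  rclone sync azureblob:<container>/$p/ hetzner:<bucket>/$p/ \
    --transfers 16 --checkers 16 --log-file "sync-$p.log" &
done
wait    # wait for all background syncs to finish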

Common Pitfalls

Problem → Solution:

  • Custom metadata lost → Evaluate MinIO Client if custom metadata is critical
  • Public assets broken after cutover (MIME/Cache) → Metadata spot check + CDN/frontend smoke tests before switching
  • Wrong endpoint (location missing) → Endpoint must contain the location, e.g., fsn1.your-objectstorage.com
  • Underestimated change rate → Plan multiple delta syncs + a freeze window (if possible)

Cutover Checklist

T-7 to T-2 Days

  • Hetzner bucket + location determined
  • S3 credentials created & stored securely
  • Migration host has access to Azure + Hetzner
  • Metadata requirements clarified (custom metadata yes/no)

T-24h

  • Test migration + validation completed
  • Plan for Initial + Delta Sync ready
  • Cutover steps & rollback documented

Cutover

  • Final Delta Sync complete
  • Smoke tests (Public URLs / App Reads/Writes) passed
  • App/Config/Origin switched
  • Monitoring active for 24h

Our Approach at WZ-IT

Our standard runbook for Azure Blob → Hetzner Object Storage:

  1. Assessment: Size, object count, change rate, metadata requirements
  2. Pilot: Test prefix + validation plan
  3. Bulk: Initial Copy (sharded, parallel)
  4. Delta: 1–3 sync runs
  5. Cutover: Switch app config/URLs
  6. Monitoring: 404s, MIME/Cache, access latency, error rates

👉 We create a concrete cutover runbook for you (including commands, parallelism tuning, verification plan).




Ready for migration? At WZ-IT, we handle analysis, migration, and operation of your storage infrastructure – from Azure to cost-effective European alternatives.

👉 Request Free Cost Analysis

Frequently Asked Questions

Answers to important questions about this topic

How long does a migration like this take?

Duration depends on data volume, object count, parallelism, and network bandwidth. Roughly: Initial Copy takes hours to days, Delta Syncs take minutes to hours.

Is a migration without downtime possible?

Yes, usually with Delta Syncs and a short freeze/read-only window. For pure read workloads, it's often possible without any downtime.

Should I use AzCopy or rclone?

AzCopy is Azure-native and excellent for Blob ↔ local / Blob ↔ Blob. rclone is the better bridge between Azure Blob and S3-compatible targets like Hetzner.

Is custom metadata migrated automatically?

rclone, s3cmd, and aws s3 do not copy custom metadata. If custom metadata is critical, MinIO Client should be evaluated.

What does the public URL format of Hetzner Object Storage look like?

The format is: https://<bucket>.<location>.your-objectstorage.com/<file> – the location (fsn1/nbg1/hel1) must be included in the hostname.
