Migrate Azure Database for PostgreSQL to Hetzner Cloud – pg_dump/restore Guide

Azure costs exploding? Flexible Server too expensive? We migrate your PostgreSQL databases from Azure to Hetzner Cloud – with minimal risk and a clear cutover plan. 👉 Azure Migration Overview | Request Free Analysis
When teams migrate out of Azure, the database is often the most critical component: costs and lock-in play a role, but in day-to-day operations what matters most is predictability, data integrity, and a clean cutover. For Azure Database for PostgreSQL (Flexible Server), Microsoft describes Dump & Restore as the standard approach and recommends Directory format + parallelization for large databases (Microsoft Learn).
For the target: In Hetzner Cloud there's no "Azure-like Managed PostgreSQL" – you build the target as a VM + self-installed PostgreSQL (and handle backups/monitoring yourself). For VM backups, you can enable Daily Backups at Hetzner (7 slots, rotation) (Hetzner Docs).
Table of Contents
- Recommendation and Target Architecture
- Pre-Migration Checklist
- Prepare Target System: Hetzner VM and PostgreSQL
- Path 1: Dump and Restore (Offline)
- Validation After Migration
- Path 2: Logical Replication (Low Downtime)
- Common Pitfalls
- Cutover Checklist
- Our Approach at WZ-IT
- Related Guides
Recommendation and Target Architecture
Default Recommendation: pg_dump/pg_restore
If you're using Azure and a maintenance window (downtime) is acceptable:
- The approach is proven, stable, and transparent – Microsoft provides very specific best practices (Microsoft Learn)
- Complex replication or DMS setups are eliminated
- For databases up to ~100-500 GB, the required maintenance window can realistically be planned
Low-Downtime (Path 2) is worth it when:
- your maintenance window is too small
- the DB is so large that restore takes too long
- or you can't cleanly "freeze" writes
Pre-Migration Checklist
Before you dump anything:
- Check Postgres Major Version (the target should match Azure – same major version, or newer)
- Inventory Extensions:

```sql
SELECT extname, extversion FROM pg_extension ORDER BY 1;
```

- Roles/Privileges: Who needs what rights after the move?
- Size + Downtime Tolerance: determines dump format, parallelism, cutover window
- Network Access: Target VM must be reachable from Azure
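The version and extension checks above can be scripted. A minimal sketch, assuming `psql` access to both sides; the `AZURE_HOST`/`TARGET_HOST` variables and user names are placeholders you'd set for your environment:

```shell
# Extract the major version from a string like "16.4" -> "16"
pg_major() {
  echo "$1" | cut -d. -f1
}

# Only runs when both hosts are configured (placeholders, set them yourself)
if [ -n "${AZURE_HOST:-}" ] && [ -n "${TARGET_HOST:-}" ]; then
  src=$(psql -h "$AZURE_HOST" -U "$AZURE_ADMIN_USER" -d postgres -At -c "SHOW server_version;")
  dst=$(psql -h "$TARGET_HOST" -U postgres -d postgres -At -c "SHOW server_version;")
  [ "$(pg_major "$src")" = "$(pg_major "$dst")" ] \
    || echo "WARNING: major version mismatch: $src vs $dst"
  # Extensions present on the source must be installed on the target before restore
  psql -h "$AZURE_HOST" -U "$AZURE_ADMIN_USER" -d "$DBNAME" -At \
    -c "SELECT extname FROM pg_extension ORDER BY 1;"
fi
```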
Prepare Target System: Hetzner VM and PostgreSQL
Create VM and Storage
- Create VM in Hetzner Cloud
- Ubuntu LTS (e.g., 22.04) or Debian
- Sufficient RAM/CPU for your DB load
- Separate Volume for /var/lib/postgresql (growth/backups)
- Private Network / Firewall: Port 5432 only from admin/migration IPs
Install PostgreSQL
```shell
sudo apt update
sudo apt install -y postgresql postgresql-contrib
sudo systemctl enable --now postgresql
```
Create database and user:
```shell
sudo -u postgres createuser --pwprompt appuser
sudo -u postgres createdb -O appuser appdb
```
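Since the restore later connects over the network (`-h <HETZNER_VM_IP>`), the server must accept remote connections. A minimal sketch, assuming PostgreSQL 14 on Ubuntu 22.04 (adjust the version in the paths); the IP is an example – only allow your admin/migration hosts:

```shell
# Config paths for PostgreSQL 14 on Ubuntu/Debian -- adjust the version if needed
PG_CONF="${PG_CONF:-/etc/postgresql/14/main/postgresql.conf}"
PG_HBA="${PG_HBA:-/etc/postgresql/14/main/pg_hba.conf}"

configure_remote_access() {
  local migration_ip="$1"   # the only host allowed to connect remotely
  # Listen on all interfaces; the cloud firewall still restricts port 5432
  echo "listen_addresses = '*'" >> "$PG_CONF"
  # Password auth for the migration host only
  echo "host all all ${migration_ip}/32 scram-sha-256" >> "$PG_HBA"
}

# usage (as root), then restart: sudo systemctl restart postgresql
# configure_remote_access 203.0.113.10
```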
Enable Backups
At Hetzner you can enable Daily Backups for Cloud servers (7 slots, oldest gets overwritten) (Hetzner Docs). This doesn't replace DB logical backups, but it's a good safety net.
Path 1: Dump and Restore (Offline)
Microsoft describes the procedure for Azure Database for PostgreSQL Flexible Server explicitly including role export, dump options, and restore; the Directory format is recommended because it enables parallelization (Microsoft Learn).
Workflow (High-Level)
- Start maintenance window – App in Maintenance / Read-Only
- Export roles (separately with pg_dumpall)
- Create dump (Directory format, parallel)
- Transfer dump to Hetzner VM
- Restore on target PostgreSQL (parallel)
- Fixups – Ownership, Grants, Sequences, Stats
- Validation – Counts, spot checks, smoke tests
- Cutover – Switch app connection string
- Monitoring – Keep Azure DB initially (rollback window)
Step 1: Export Roles/Users
Microsoft explicitly lists the pg_dumpall -r pattern as a step for roles/users (Microsoft Learn).
```shell
export PGPASSWORD="<AZURE_PASSWORD>"
pg_dumpall -h <AZURE_HOST> -p 5432 -U <AZURE_ADMIN_USER> -r > roles.sql
```
Tip: Review the roles file before importing it (don't execute it blindly) to avoid unwanted owners/privileges.
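One way to review: list the roles contained in the dump and strip Azure-internal ones before importing. A sketch; `azure_pg_admin` is a known Azure-managed role, the second name is an example – check what your dump actually contains:

```shell
# List the roles contained in a roles dump
list_roles() {
  grep -E '^CREATE ROLE' "$1"
}

# Remove every line mentioning Azure-internal roles (CREATE, ALTER, GRANT)
# before importing; role names are examples, extend the pattern as needed
clean_roles() {
  sed -E '/azure_pg_admin|azure_superuser/d' "$1" > "$2"
}

# usage:
# list_roles roles.sql
# clean_roles roles.sql roles.clean.sql
```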
Step 2: Dump Database (parallel)
Directory Format (recommended for large DBs):
```shell
export PGPASSWORD="<AZURE_PASSWORD>"
pg_dump \
  -h <AZURE_HOST> -p 5432 -U <AZURE_ADMIN_USER> -d <DBNAME> \
  -Fd -j 8 \
  -f dumpdir/
```
Custom Format (for small/medium DBs):
```shell
pg_dump \
  -h <AZURE_HOST> -p 5432 -U <AZURE_ADMIN_USER> -d <DBNAME> \
  -Fc -Z 6 \
  -f db.dump
```
Microsoft has a dedicated article "Best practices for pg_dump and pg_restore" with parallelism and VM considerations (Microsoft Learn).
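The `-j 8` above is a starting point, not a fixed rule. A sketch for deriving parallelism from the local core count while capping it, so the Azure side isn't saturated (the cap of 8 is an example value):

```shell
# Derive pg_dump/pg_restore parallelism from the core count, capped at 8
cap_jobs() {
  local n="$1"
  if [ "$n" -gt 8 ]; then
    n=8
  fi
  echo "$n"
}

JOBS=$(cap_jobs "$(nproc)")
# then: pg_dump ... -Fd -j "$JOBS" -f dumpdir/
```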
Step 3: Transfer to Hetzner VM
```shell
rsync -avP dumpdir/ root@<HETZNER_VM_IP>:/root/dumpdir/
rsync -avP roles.sql root@<HETZNER_VM_IP>:/root/
```
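rsync already checksums during transfer, but an explicit manifest makes the verification repeatable and auditable. A sketch (paths are examples):

```shell
# Build a sha256 manifest for every file in the dump directory
checksum_dir() {
  (cd "$1" && find . -type f -exec sha256sum {} + | sort -k2)
}

# usage:
# checksum_dir dumpdir/ > dump.sha256
# then on the Hetzner VM, after copying dump.sha256 over:
# cd /root/dumpdir && sha256sum -c /root/dump.sha256
```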
Step 4: Restore on Hetzner VM
Create database:
```shell
export PGPASSWORD="<HETZNER_PASSWORD>"
createdb -h <HETZNER_VM_IP> -p 5432 -U <HETZNER_USER> <DBNAME>
```
Import roles (with care):
```shell
psql -h <HETZNER_VM_IP> -p 5432 -U <HETZNER_USER> -f /root/roles.sql
```
Restore (Directory, parallel):
```shell
pg_restore \
  -h <HETZNER_VM_IP> -p 5432 -U <HETZNER_USER> -d <DBNAME> \
  --no-owner --no-privileges \
  -j 8 \
  /root/dumpdir/
```
Why `--no-owner --no-privileges`? When switching providers, you want to apply roles/grants deliberately according to your target model, rather than importing Azure defaults.
Update statistics:

```sql
ANALYZE;
```
Validation After Migration
Hard Checks (Data)
- Rowcounts for critical tables
- Sum checks (e.g., invoice amounts)
- Spot checks (50-100 IDs) across tables
```sql
SELECT 'users' AS t, count(*) FROM public.users
UNION ALL
SELECT 'orders' AS t, count(*) FROM public.orders;
```
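The counts on both sides can be compared in one loop instead of by hand. A sketch assuming `psql` access to both hosts; all host, user, and table names are placeholders:

```shell
# Compare rowcounts between source and target for a list of tables
compare_counts() {
  local src_host="$1" src_user="$2" dst_host="$3" dst_user="$4" db="$5"
  shift 5
  local t src dst
  for t in "$@"; do
    src=$(psql -h "$src_host" -U "$src_user" -d "$db" -At -c "SELECT count(*) FROM $t")
    dst=$(psql -h "$dst_host" -U "$dst_user" -d "$db" -At -c "SELECT count(*) FROM $t")
    if [ "$src" = "$dst" ]; then
      echo "$t OK ($src rows)"
    else
      echo "$t MISMATCH: $src vs $dst"
    fi
  done
}

# usage:
# compare_counts <AZURE_HOST> <AZURE_ADMIN_USER> <HETZNER_VM_IP> <HETZNER_USER> <DBNAME> \
#   public.users public.orders
```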
Smoke Tests (Application)
- Login works
- 2-3 most important business processes (Create/Update/Payment)
- Background Jobs / Cron
- Test write paths (Insert/Update/Delete)
Path 2: Logical Replication (Low Downtime)
Azure Flexible Server supports native logical replication and logical decoding; Microsoft explicitly describes Publisher/Subscriber models (Microsoft Learn). PostgreSQL explains: Each subscription uses a replication slot (PostgreSQL Docs).
High-Level Runbook:
- Target VM ready, schema prepared
- On Azure: Create publication
- On Hetzner: Subscription (Initial Sync + Streaming)
- Monitor lag
- Cutover: short write-freeze → Catch-up → Switch app → Clean up subscription/slots
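The core statements behind the publication/subscription steps look like this. A sketch with example names (`pub_migration`, `sub_migration`), run via `psql` on the respective side; the connection parameters are placeholders:

```shell
# On Azure (publisher), run via psql:
#   CREATE PUBLICATION pub_migration FOR ALL TABLES;

# On the Hetzner VM (subscriber), the statement can be generated like this:
make_subscription_sql() {
  local host="$1" db="$2" user="$3"
  echo "CREATE SUBSCRIPTION sub_migration CONNECTION 'host=${host} dbname=${db} user=${user}' PUBLICATION pub_migration;"
}

# usage:
# psql -d <DBNAME> -c "$(make_subscription_sql <AZURE_HOST> <DBNAME> <REPL_USER>)"
# monitor lag on the publisher:
# psql -h <AZURE_HOST> -c "SELECT application_name, replay_lsn FROM pg_stat_replication;"
```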
Logical Replication is powerful – but has more moving parts (network, slots, lag, maintenance). Therefore, with acceptable downtime: Dump/Restore remains the default.
Common Pitfalls
| Problem | Solution |
|---|---|
| Major version mismatch | Install a matching (or newer) major version on the target before migrating |
| Extensions missing | Inventory them and install on the target before the restore runs |
| Sequences/identity drift | Check after restore (the classic "insert fails" error) |
| Roles/grants missing | Apply them deliberately, since `--no-owner`/`--no-privileges` is used |
| Backup gap after cutover | Activate a DB backup plan immediately after cutover |
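On the sequence/identity drift row: pg_dump normally restores sequence values itself, so this is a check/fix for cases where inserts still fail after cutover. A sketch that generates the usual `setval` statement; the table, column, and sequence names are examples:

```shell
# Generate a setval statement that aligns a sequence with max(column)
fix_sequence_sql() {
  local tbl="$1" col="$2" seq="$3"
  echo "SELECT setval('${seq}', COALESCE((SELECT max(${col}) FROM ${tbl}), 1));"
}

# usage against the Hetzner DB:
# psql -h <HETZNER_VM_IP> -U <HETZNER_USER> -d <DBNAME> \
#   -c "$(fix_sequence_sql public.orders id public.orders_id_seq)"
```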
Cutover Checklist
T-7 to T-2 Days
- Hetzner VM + Volume + Firewall ready
- PostgreSQL installed, Monitoring/Logs ok
- Hetzner Server Backups enabled (7 slots)
- Extensions & versions checked
- Test run (small DB or snapshot) successful
T-24h
- Maintenance window communicated
- Rollback plan documented (Connection string back)
- Smoke test scenarios defined
Cutover
- App Read-Only / Maintenance
- `pg_dumpall -r` + `pg_dump` + transfer + `pg_restore`
- Validation: Counts + spot checks + smoke tests
- App switched to Hetzner DB
- Monitoring (Errors, DB connections, latency)
T+24h
- Keep Azure DB as fallback
- Monitor performance & errors
- Decommission Azure DB when stable
Our Approach at WZ-IT
Our standard flow for Azure PostgreSQL → Hetzner:
- Assessment: DB size, extensions, change rate, downtime budget
- Target System: Provision Hetzner VM, install PostgreSQL, firewall/backup
- Test Migration: Test dump on staging DB, validation, performance check
- Cutover Planning: Maintenance window, rollback plan, communication
- Migration: Dump → Transfer → Restore → Validation → Cutover
- Monitoring: Error rates, latencies, app logs, backup verification
👉 We create a concrete cutover runbook with all commands and checklists for you.
Related Guides
You might also be interested in these guides:
- Migrate Azure Blob Storage to Hetzner Object Storage – AzCopy & rclone Guide
- Azure to Hetzner Migration – Complete Overview – All Azure services at a glance
- Cloud Migration Hub – All provider migrations at a glance
- Migrate AWS RDS/Aurora PostgreSQL to Hetzner – pg_dump/restore Guide
- Hetzner Expertise – Our Hetzner services overview
Ready for migration? At WZ-IT, we handle analysis, migration, and operation of your database infrastructure – from Azure to cost-effective European alternatives.
Frequently Asked Questions
Answers to important questions about this topic
How long does the migration take?
Duration depends on database size. With pg_dump/restore and parallel jobs (-j 8), 50-100 GB can be migrated in a few hours. For larger DBs or minimal downtime: consider Logical Replication.
Is data lost during the migration?
No, provided you implement a write-freeze during the dump (App Read-Only). pg_dump creates a consistent snapshot. Validation via rowcounts and spot checks is recommended.
Does Hetzner offer a managed PostgreSQL service?
No, Hetzner doesn't offer Managed PostgreSQL like Azure Database for PostgreSQL. You operate PostgreSQL yourself on a Cloud VM – with full control over configuration, backups, and updates.
What does the target setup cost compared to Azure?
A Hetzner Cloud VM (e.g., CPX31 with 8 vCPU, 16 GB RAM) costs about €15/month. Comparable Azure Flexible Server instances run €100-200/month – excluding storage and backups.
Can I migrate with minimal downtime?
Yes, with Logical Replication (Publication/Subscription). Azure Flexible Server supports native logical replication. This requires more setup but enables a cutover with only a short write-freeze.
Let's Talk About Your Idea
Whether a specific IT challenge or just an idea – we look forward to the exchange. In a brief conversation, we'll evaluate together if and how your project fits with WZ-IT.

Timo Wevelsiep & Robin Zins
CEOs of WZ-IT



