Migrate AWS RDS/Aurora PostgreSQL to Hetzner Cloud – pg_dump/restore Guide

AWS costs exploding? RDS too expensive? We migrate your PostgreSQL databases from AWS RDS/Aurora to Hetzner Cloud – with minimal risk and a clear cutover plan. 👉 AWS Migration Overview | Request Free Analysis
AWS is the starting point for many teams – and later the cost driver. Once an application runs stably, the "soft" line items become visible: ongoing RDS instance costs, storage/IO, backups – and especially data transfer/egress. AWS itself points out that data transfer costs are often overlooked in architecture design (AWS Blog).
For EU companies, there's increasingly a second motive: digital sovereignty and data control. Since Schrems II, third-country transfers and EU-compliant cloud architecture are more on the radar (EuroCloud). Hetzner operates its own data centers in Nuremberg, Falkenstein (DE) and Helsinki (FI) (Hetzner Docs).
This post covers the most common exit step after storage: AWS RDS PostgreSQL → PostgreSQL on a Hetzner Cloud VM. And since downtime is acceptable here, the default recommendation is clear: pg_dump/pg_restore – the most pragmatic and most controllable approach.
Table of Contents
- Recommendation and Target Architecture
- Prepare Target System: Hetzner VM and PostgreSQL
- Migration with pg_dump/pg_restore
- Command Examples
- Validation After Migration
- Common Pitfalls
- Cutover Checklist
- Alternatives for Minimal Downtime
- Our Approach at WZ-IT
- Related Guides
Recommendation and Target Architecture
Default Recommendation: pg_dump/pg_restore
If you're using RDS and a maintenance window (downtime) is acceptable:
- The approach is proven, stable, and transparent – native PostgreSQL tools are sufficient (AWS Docs)
- No complex replication, logical replication, or DMS setup is required
- For databases up to ~100-500 GB, a maintenance window can realistically be planned
Target at Hetzner: There's no "Managed PostgreSQL" at Hetzner Cloud. The goal is: create a VM and install/operate PostgreSQL yourself – with full control over configuration, backups, and updates.
Prepare Target System: Hetzner VM and PostgreSQL
Create VM and Storage
- Create VM in Hetzner Cloud
- Ubuntu LTS (e.g., 22.04) or Debian
- Sufficient RAM/CPU for your DB load
- Separate volume for `/var/lib/postgresql` (growth/backups) – see the hcloud sketch below
- Private network / firewall rules for access
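If you work with the hcloud CLI, the first two steps can look like this – a minimal sketch; server name, type, location, and volume size are placeholder assumptions:

```bash
# Assumes the hcloud CLI is installed and a context with an API token exists.
hcloud server create \
  --name pg-target \
  --type cpx31 \
  --image ubuntu-22.04 \
  --location nbg1

# Separate volume for the PostgreSQL data directory
hcloud volume create \
  --name pg-data \
  --size 100 \
  --server pg-target \
  --format ext4
```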
Install PostgreSQL
sudo apt update
sudo apt install -y postgresql postgresql-contrib
sudo systemctl enable --now postgresql
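If you created a separate volume, one way to move the data directory onto it is the following sketch – the device path /dev/sdb is an assumption, check `lsblk` for your volume:

```bash
sudo systemctl stop postgresql

# Copy the freshly initialized cluster onto the volume
sudo mount /dev/sdb /mnt
sudo rsync -a /var/lib/postgresql/ /mnt/
sudo umount /mnt

# Mount the volume at the data path permanently
echo '/dev/sdb /var/lib/postgresql ext4 discard,nofail,defaults 0 0' | sudo tee -a /etc/fstab
sudo mount /var/lib/postgresql

sudo systemctl start postgresql
```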
Create database and user:
sudo -u postgres createuser --pwprompt appuser
sudo -u postgres createdb -O appuser appdb
Configure Remote Access
- `postgresql.conf`: `listen_addresses` only on required IPs
- `pg_hba.conf`: only allow explicit networks/IPs
- Firewall: port 5432 only from the migration host/app network (sketch below)
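A sketch of what this can look like – the paths assume PostgreSQL 14 on Ubuntu 22.04, and the IPs/networks are placeholders:

```bash
# /etc/postgresql/14/main/postgresql.conf – listen only on required interfaces:
#   listen_addresses = 'localhost,<PRIVATE_IP>'

# pg_hba.conf – allow only explicit networks/users
echo 'host appdb appuser 10.0.0.0/24 scram-sha-256' | \
  sudo tee -a /etc/postgresql/14/main/pg_hba.conf

# Firewall: port 5432 only from the migration host / app network
sudo ufw allow from <MIGRATION_HOST_IP> to any port 5432 proto tcp

sudo systemctl restart postgresql
```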
Define Backup Strategy
You're now the DBA. Minimum:
- Nightly `pg_dump` + rotation (see the sketch below)
- VM/volume snapshots (in addition, not on their own)
- Optional: WAL archiving for Point-in-Time Recovery
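A minimal sketch for the nightly dump with rotation – database name and backup path are assumptions:

```bash
#!/usr/bin/env bash
# /usr/local/bin/pg-backup.sh – run nightly from the postgres user's crontab,
# e.g.: 0 2 * * * /usr/local/bin/pg-backup.sh
set -euo pipefail

BACKUP_DIR=/var/backups/pg
mkdir -p "$BACKUP_DIR"

pg_dump -Fc -Z 6 -d appdb -f "$BACKUP_DIR/appdb-$(date +%F).dump"

# Keep the last 14 days
find "$BACKUP_DIR" -name 'appdb-*.dump' -mtime +13 -delete
```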
Migration with pg_dump/pg_restore
Workflow (High-Level)
- Start maintenance window – App in Maintenance / Read-Only
- Create dump (compressed, optionally parallel)
- Transfer dump to Hetzner VM
- Restore on target PostgreSQL
- Fixups – Extensions, Owner, Grants, Sequences
- Validation – Counts, spot checks, smoke tests
- Cutover – Switch app connection string
- Monitoring – Keep old RDS read-only initially (rollback window)
Command Examples
Dump (AWS RDS as Source)
Custom Format (compact, flexible):
export PGPASSWORD="<AWS_PASSWORD>"
pg_dump \
-h <AWS_HOST> -p 5432 -U <AWS_USER> -d <DBNAME> \
-Fc -Z 6 \
-f db.dump
- `-Fc`: custom format (ideal for pg_restore)
- `-Z 6`: compression level 6
Directory Format + parallel (for large DBs):
pg_dump \
-h <AWS_HOST> -p 5432 -U <AWS_USER> -d <DBNAME> \
-Fd -j 8 \
-f dumpdir/
Transfer to Hetzner VM
rsync -avP db.dump root@<HETZNER_VM_IP>:/root/
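Optionally verify the transfer before restoring – a quick checksum comparison:

```bash
sha256sum db.dump                                   # on the source host
ssh root@<HETZNER_VM_IP> sha256sum /root/db.dump    # must print the same hash
```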
Restore on Hetzner VM
Create database:
export PGPASSWORD="<HETZNER_PASSWORD>"
createdb -h <HETZNER_VM_IP> -p 5432 -U <HETZNER_USER> <DBNAME>
Restore:
pg_restore \
-h <HETZNER_VM_IP> -p 5432 -U <HETZNER_USER> -d <DBNAME> \
--no-owner --no-privileges \
-j 8 \
/root/db.dump
Why `--no-owner --no-privileges`? When switching providers, you want to apply roles/grants deliberately to your target model rather than importing AWS defaults.
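If you dumped in directory format (-Fd), the restore looks almost identical – pg_restore detects the format, and -j parallelizes the load:

```bash
pg_restore \
  -h <HETZNER_VM_IP> -p 5432 -U <HETZNER_USER> -d <DBNAME> \
  --no-owner --no-privileges \
  -j 8 \
  dumpdir/
```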
Update statistics (run via psql on the target database):
ANALYZE;
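From the shell, the same can be done with vacuumdb; --analyze-in-stages produces usable statistics quickly after a large restore:

```bash
vacuumdb -h <HETZNER_VM_IP> -U <HETZNER_USER> -d <DBNAME> --analyze-in-stages
```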
Validation After Migration
Hard Checks (Data)
- Rowcounts for critical tables
- Sum checks (e.g., invoice amounts)
- Spot checks (IDs) across tables
SELECT 'users' AS t, count(*) FROM public.users
UNION ALL
SELECT 'orders' AS t, count(*) FROM public.orders;
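A sum check can be run the same way against both sides and diffed – the table and column names here are placeholders:

```bash
psql -h <AWS_HOST> -U <AWS_USER> -d <DBNAME> -Atc \
  "SELECT sum(amount) FROM public.invoices;" > sum_source.txt
psql -h <HETZNER_VM_IP> -U <HETZNER_USER> -d <DBNAME> -Atc \
  "SELECT sum(amount) FROM public.invoices;" > sum_target.txt
diff sum_source.txt sum_target.txt && echo "sums match"
```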
Smoke Tests (Application)
- Login works
- 2-3 most important business processes (Create/Update/Payment)
- Background Jobs / Cron
- Test write paths (Insert/Update/Delete)
Performance Quick Check
- Connection pooling configured?
- Monitor error rate & DB connections after cutover
Common Pitfalls
| Problem | Solution |
|---|---|
| Extensions missing | Inventory beforehand (`SELECT * FROM pg_extension`) and install on the target – see the sketch below |
| Sequences out of sync | `nextval()` returns wrong values → reset via a `setval()` script (see below) |
| Grants/Owner missing | Apply manually since --no-owner/--no-privileges is used |
| Timeouts during dump | Parallelize (-j), adjust maintenance window |
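Two sketches for the first two pitfalls – table and column names are placeholders:

```bash
# Inventory extensions on the source, then install matching ones on the target:
psql -h <AWS_HOST> -U <AWS_USER> -d <DBNAME> \
  -c "SELECT extname, extversion FROM pg_extension;"

# Reset a serial sequence to the current maximum after restore:
psql -h <HETZNER_VM_IP> -U <HETZNER_USER> -d <DBNAME> -c \
  "SELECT setval(pg_get_serial_sequence('public.orders', 'id'), COALESCE(max(id), 1)) FROM public.orders;"
```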
Cutover Checklist
T-7 to T-2 Days
- Hetzner VM + Volume ready (location chosen)
- PostgreSQL installed, firewall/pg_hba configured
- Backup job active on target
- Test dump/restore completed
T-24h
- Maintenance window communicated
- Rollback defined (connection string back to RDS)
- Smoke test scenarios documented
Cutover
- App Read-Only / Maintenance
- Dump created + transferred + restored
- Validation: Counts + spot checks + smoke tests
- App switched to Hetzner DB
- Monitoring (Errors, DB connections, latency)
T+24h
- Keep RDS read-only (safety net)
- Monitor performance & errors
- Decommission RDS when stable
Alternatives for Minimal Downtime
If dump/restore takes too long or downtime is critical:
- Logical Replication (Publication/Subscription) – low downtime, more setup; see the sketch below (AWS Blog)
- AWS DMS (Full Load + CDC) – orchestrated, but setup overhead (AWS Docs)
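For orientation, a minimal sketch of the logical replication route – publication/subscription names are placeholders, and the RDS parameter group must have rds.logical_replication = 1:

```bash
# On the RDS source:
psql -h <AWS_HOST> -U <AWS_USER> -d <DBNAME> \
  -c "CREATE PUBLICATION mig_pub FOR ALL TABLES;"

# On the Hetzner target (schema must exist first, e.g. via pg_dump --schema-only):
psql -h <HETZNER_VM_IP> -U <HETZNER_USER> -d <DBNAME> -c "
  CREATE SUBSCRIPTION mig_sub
    CONNECTION 'host=<AWS_HOST> dbname=<DBNAME> user=<AWS_USER> password=<AWS_PASSWORD>'
    PUBLICATION mig_pub;"
```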
For this article – since downtime is acceptable – pg_dump/restore remains the default recommendation.
Our Approach at WZ-IT
Our standard flow for AWS RDS → Hetzner PostgreSQL:
- Assessment: DB size, extensions, change rate, downtime budget
- Target System: Provision Hetzner VM, install PostgreSQL, firewall/backup
- Test Migration: Test dump on staging DB, validation, performance check
- Cutover Planning: Maintenance window, rollback plan, communication
- Migration: Dump → Transfer → Restore → Validation → Cutover
- Monitoring: Error rates, latencies, app logs, backup verification
👉 We create a concrete cutover runbook with all commands and checklists for you.
Related Guides
You might also be interested in these guides:
- Migrate AWS S3 to Hetzner Object Storage – rclone & MinIO Client Guide
- AWS to Hetzner Migration – Complete Overview – All AWS services at a glance
- Cloud Migration Hub – All provider migrations at a glance
- Save Cloud Costs: AWS EC2 & RDS – Up to 90% savings
- Hetzner Expertise – Our Hetzner services overview
Ready for migration? At WZ-IT, we handle analysis, migration, and operation of your database infrastructure – from AWS to cost-effective European alternatives.
Frequently Asked Questions
Answers to important questions about this topic
How long does the migration take?
Duration depends on database size. With pg_dump/restore and parallel jobs (-j 8), 50-100 GB can be migrated in a few hours. For larger DBs or minimal downtime: consider logical replication.
Can data be lost during the migration?
No, provided you implement a write freeze during the dump (app read-only). pg_dump creates a consistent snapshot. Validation via rowcounts and spot checks is recommended.
Does Hetzner offer managed PostgreSQL?
No, Hetzner doesn't offer managed PostgreSQL like AWS RDS. You operate PostgreSQL yourself on a cloud VM – with full control over configuration, backups, and updates.
What does the target setup cost compared to RDS?
A Hetzner Cloud VM (e.g., CPX31 with 4 vCPU, 8 GB RAM) costs about €15/month. Comparable RDS instances (db.m5.large) run €150-200/month – excluding storage, I/O, and backups.
Which PostgreSQL version should I install?
Install the same major version as on RDS (e.g., PostgreSQL 15). pg_dump/restore works seamlessly across minor version differences.
Let's Talk About Your Idea
Whether a specific IT challenge or just an idea – we look forward to the exchange. In a brief conversation, we'll evaluate together if and how your project fits with WZ-IT.

Timo Wevelsiep & Robin Zins
CEOs of WZ-IT



