
TigerData: what’s the fastest way to migrate from AWS RDS/Aurora PostgreSQL to Tiger Cloud?
If you’re already running PostgreSQL on AWS RDS or Aurora, the fastest way to get onto Tiger Cloud is to treat it like a standard Postgres-to-Postgres migration: use logical replication to stream changes into a Tiger Cloud service, cut over when lag goes to zero, and only then turn on TimescaleDB features like hypertables and compression. You keep SQL, drivers, and tooling the same, and you swap out the fragile performance profile of RDS/Aurora for a Postgres engine tuned for time-series and real‑time analytics.
Quick Answer: The fastest, lowest‑risk path from AWS RDS/Aurora PostgreSQL to Tiger Cloud is: (1) provision a Tiger Cloud service, (2) load an initial snapshot with pg_dump/pg_restore or a physical backup restore, (3) enable logical replication from RDS/Aurora into Tiger Cloud to keep it in sync, then (4) cut application traffic over once replication lag is drained. After cutover, you can convert hot tables into TimescaleDB hypertables to unlock performance and cost gains.
The Quick Overview
- What It Is: A proven, Postgres‑native migration pattern that moves your workloads off AWS RDS/Aurora PostgreSQL and onto Tiger Cloud with minimal downtime using logical replication and familiar tools.
- Who It Is For: Teams running high‑ingest, analytics‑heavy, or telemetry workloads on RDS/Aurora that are hitting performance and cost ceilings but want to stay on “boring, reliable, and endlessly extensible” Postgres.
- Core Problem Solved: RDS/Aurora PostgreSQL becomes slow and expensive at telemetry scale; this migration path gets you onto Tiger Cloud’s optimized Postgres (TimescaleDB, row‑columnar storage, tiered storage) quickly, without rewriting your application or building new plumbing.
How It Works
At a high level, you set up Tiger Cloud as another Postgres node, seed it with a baseline copy of your data, then stream ongoing changes from RDS/Aurora until you’re ready to flip your applications over.
The sequence:
- Prepare & snapshot
- Catch up via logical replication
- Cut over & enable TigerData features
Under the hood it’s just Postgres mechanics—pg_dump, pg_restore, logical replication slots, and standard connection strings—plus Tiger Cloud’s managed operations (HA, backups, PITR) taking over once you’re live.
1. Prepare & baseline: create Tiger Cloud and load data
- Provision a Tiger Cloud service
  - Choose the right plan:
    - Performance for single‑AZ, high‑ingest, sub‑second queries.
    - Scale/higher for larger datasets, multi‑AZ HA, read replicas.
    - Enterprise if you need HIPAA, stricter SLAs, or advanced networking.
  - Configure:
    - VPC peering / private networking to your AWS VPC.
    - IP allow lists for any public connectivity.
    - Appropriate instance size and storage based on your RDS/Aurora metrics.
- Take a consistent snapshot from RDS/Aurora
  For most workloads, use a logical dump:

  ```bash
  PGPASSWORD=... pg_dump \
    --host=<rds-endpoint> \
    --port=5432 \
    --username=<user> \
    --format=custom \
    --file=baseline.dump \
    --dbname=<db_name>
  ```

  Important:
  - Ensure search_path, extensions, and schemas are captured.
  - Exclude ephemeral tables if you don’t need them.
  - For very large databases, you might combine this with parallel restore and/or a more staged approach (schema first, then data).
- Restore into Tiger Cloud

  ```bash
  PGPASSWORD=... pg_restore \
    --host=<tigercloud-endpoint> \
    --port=5432 \
    --username=<user> \
    --dbname=<db_name> \
    --jobs=8 \
    --create \
    baseline.dump
  ```

  Note:
  - Tiger Cloud is standard Postgres under the hood; you restore into it exactly as you would another Postgres instance.
  - Install any required extensions via CREATE EXTENSION (e.g., pgcrypto, uuid-ossp, pg_partman if you had it). TimescaleDB is already available.
At this point, Tiger Cloud contains a point‑in‑time copy of your RDS/Aurora database, but it’s not yet receiving new writes.
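Before switching on replication, it is worth sanity‑checking the restore. Below is a minimal Python sketch of a per‑table row‑count comparison; the table names and counts are hypothetical, and in practice you would collect them from `count(*)` (or, approximately, `pg_stat_user_tables`) on each side:

```python
# Hypothetical post-restore check: compare per-table row counts collected
# from the source (RDS/Aurora) and the target (Tiger Cloud).

def diff_row_counts(source: dict, target: dict) -> dict:
    """Return tables whose counts differ or that exist on only one side."""
    mismatches = {}
    for table in source.keys() | target.keys():
        src, tgt = source.get(table), target.get(table)
        if src != tgt:
            mismatches[table] = {"source": src, "target": tgt}
    return mismatches

# Illustrative counts — in reality, query each database for these.
source_counts = {"metrics": 1_000_000, "events": 52_341, "users": 1_204}
target_counts = {"metrics": 1_000_000, "events": 52_340, "users": 1_204}

print(diff_row_counts(source_counts, target_counts))
# {'events': {'source': 52341, 'target': 52340}}
```

A small discrepancy right after the baseline load is expected if the source kept taking writes; it should disappear once logical replication catches up.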
2. Keep in sync: logical replication from RDS/Aurora to Tiger Cloud
Next, you configure RDS/Aurora as the publisher and Tiger Cloud as the subscriber so changes continue to stream while you test and prepare the cutover.
- Enable logical replication on RDS/Aurora
  - On RDS PostgreSQL:
    - Ensure your parameter group has wal_level = logical, with max_replication_slots and max_wal_senders set high enough.
    - Apply and restart as required.
  - On Aurora PostgreSQL:
    - Use a custom DB cluster parameter group.
    - Set wal_level = logical.
    - Confirm your Aurora engine version supports logical replication.

  Important:
  - Logical replication can increase WAL volume. Monitor IO and storage usage during migration.
- Create a publication on the source
  On your RDS/Aurora instance:

  ```sql
  CREATE PUBLICATION rds_to_tiger_pub FOR ALL TABLES;
  -- or selectively:
  -- CREATE PUBLICATION rds_to_tiger_pub FOR TABLE public.metrics, public.events;
  ```

  Note:
  - If you have tables you don’t want to replicate (ephemeral, logs), define the publication explicitly.
- Create a subscription in Tiger Cloud
  On your Tiger Cloud database:

  ```sql
  CREATE SUBSCRIPTION rds_to_tiger_sub
    CONNECTION 'host=<rds-endpoint> port=5432 dbname=<db_name> user=<repl_user> password=<secret> sslmode=require'
    PUBLICATION rds_to_tiger_pub
    WITH (copy_data = false, create_slot = true);
  ```

  Why copy_data = false? You already loaded the baseline via pg_restore. Logical replication now just streams changes made after the snapshot.
- Monitor replication lag
  On the source (RDS/Aurora):

  ```sql
  SELECT slot_name,
         pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS replication_lag
  FROM pg_replication_slots;
  ```

  On the Tiger Cloud side, you can also watch metrics in the Tiger Console (replication throughput, lag) and logs for any replication conflicts.
Warning:
- Avoid DDL changes (schema changes) during migration, or coordinate them carefully. Logical replication does not automatically propagate DDL; you must apply DDL on Tiger Cloud as well.
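If you prefer to poll lag from a script rather than run the SQL above by hand, the byte arithmetic behind `pg_wal_lsn_diff` is straightforward: a `pg_lsn` value like `16/B374D848` is two hex words. A Python sketch (the LSN values are illustrative):

```python
# Helper mirroring pg_wal_lsn_diff(): compute replication lag in bytes
# from two pg_lsn strings, e.g. pg_current_wal_lsn() vs. a slot's restart_lsn.

def lsn_to_bytes(lsn: str) -> int:
    """A pg_lsn like '16/B374D848' is high_word * 2**32 + low_word."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) + int(lo, 16)

def lag_bytes(current_lsn: str, restart_lsn: str) -> int:
    """Bytes of WAL the subscriber has not yet consumed."""
    return lsn_to_bytes(current_lsn) - lsn_to_bytes(restart_lsn)

print(lag_bytes("16/B374D848", "16/B3000000"))  # 7657544
```

In a cutover script, you would fetch the two LSNs from the source and alert if this number stops shrinking.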
3. Cut over and start using TimescaleDB features
Once replication lag is near zero and you’re confident in Tiger Cloud behavior under test load, you can schedule the final cutover.
- Freeze writes on RDS/Aurora
  Options:
  - Put the application into maintenance mode (recommended).
  - Temporarily revoke write privileges, e.g.:

    ```sql
    REVOKE INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public FROM app_role;
    ```

  - Or route all writes to Tiger Cloud in your app config (but do this only after step 3).
- Wait for replication to catch up
  Watch WAL lag until it’s effectively zero.
- Point applications to Tiger Cloud
  - Update connection strings (same drivers, same SQL).
  - Confirm:
    - Connection pooling is configured (PgBouncer or Tiger Cloud’s connection handling).
    - SSL/TLS settings align (Tiger Cloud uses TLS 1.2+; update trust stores as needed).
- Drop subscription and publication
  After you’re confident all traffic is going to Tiger Cloud:

  ```sql
  -- On Tiger Cloud
  DROP SUBSCRIPTION rds_to_tiger_sub;
  -- On RDS/Aurora
  DROP PUBLICATION rds_to_tiger_pub;
  ```
- Enable TimescaleDB primitives on hot tables
  Now you can start taking advantage of Tiger Cloud’s engine improvements:
  - Convert time‑series tables to hypertables

    ```sql
    SELECT create_hypertable('metrics', by_range('time'));
    -- Or composite:
    -- SELECT create_hypertable('metrics', by_range('time'), by_hash('device_id'));
    ```

  - Turn on compression

    ```sql
    ALTER TABLE metrics SET (
      timescaledb.compress,
      timescaledb.compress_segmentby = 'device_id'
    );
    SELECT add_compression_policy('metrics', INTERVAL '7 days');
    ```

    This lets Tiger Cloud store recent data in row format for fast writes and older data in compressed columnar storage, with up to 98% compression in many telemetry workloads.
  - Create continuous aggregates for rollups

    ```sql
    CREATE MATERIALIZED VIEW metrics_hourly
    WITH (timescaledb.continuous) AS
    SELECT time_bucket('1 hour', time) AS bucket,
           device_id,
           avg(value) AS avg_value
    FROM metrics
    GROUP BY bucket, device_id;

    SELECT add_continuous_aggregate_policy(
      'metrics_hourly',
      start_offset => INTERVAL '7 days',
      end_offset => INTERVAL '1 hour',
      schedule_interval => INTERVAL '5 minutes'
    );
    ```
Note:
- Do this in phases. Start with the largest, most write‑heavy tables that are currently stressing RDS/Aurora.
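For readers unfamiliar with continuous aggregates, here is what the hourly rollup above computes, expressed in plain Python — an illustrative sketch of the semantics, not TimescaleDB code:

```python
# Plain-Python illustration of the metrics_hourly view:
# time_bucket('1 hour', time) floors each timestamp to its hour,
# then the view averages `value` per (bucket, device_id).

from collections import defaultdict
from datetime import datetime

def hourly_rollup(rows):
    """rows: iterable of (timestamp, device_id, value) tuples.
    Returns {(hour_bucket, device_id): avg_value}."""
    sums = defaultdict(lambda: [0.0, 0])  # (bucket, device) -> [total, count]
    for ts, device, value in rows:
        bucket = ts.replace(minute=0, second=0, microsecond=0)  # time_bucket('1 hour')
        acc = sums[(bucket, device)]
        acc[0] += value
        acc[1] += 1
    return {key: total / count for key, (total, count) in sums.items()}

rows = [
    (datetime(2024, 1, 1, 10, 5), "dev1", 10.0),
    (datetime(2024, 1, 1, 10, 55), "dev1", 20.0),
    (datetime(2024, 1, 1, 11, 5), "dev1", 30.0),
]
print(hourly_rollup(rows))
# two buckets: 10:00 averages to 15.0, 11:00 to 30.0
```

The difference in the database is that TimescaleDB materializes this incrementally on the policy schedule, so queries against `metrics_hourly` don’t rescan raw `metrics` rows.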
Features & Benefits Breakdown
| Core Feature | What It Does | Primary Benefit |
|---|---|---|
| Logical replication cutover | Streams changes from RDS/Aurora into Tiger Cloud while you test and validate. | Minimal downtime migration; no big‑bang switch or lengthy maintenance window. |
| Postgres‑native compatibility | Keeps the same SQL, drivers, and schema semantics between RDS/Aurora and Tiger Cloud. | No application rewrite; you’re just changing connection strings and tuning. |
| TimescaleDB hypertables | Adds time‑ and key‑based partitioning, automatic chunking, and optimized indexes. | High‑ingest performance and sub‑second queries on time‑series and event data at real‑life scale. |
| Row‑columnar storage & compression | Keeps hot data in rowstore and compresses colder data into columnar storage. | Up to 98% compression, lower storage costs, and faster analytics scans on historical data. |
| Tiered storage & lakehouse integration | Moves cold data to low‑cost object storage and integrates with S3/Iceberg. | Replace fragile Kafka/Flink/S3 glue with native infrastructure and lakehouse‑ready exports. |
| Tiger Cloud managed ops | Provides HA, automated backups, point‑in‑time recovery, and 24/7 operations support. | Production‑grade reliability without running your own clusters; no hidden per‑query costs. |
Ideal Use Cases
- Best for telemetry and metrics workloads:
  Because you can move from RDS/Aurora’s generalized Postgres to Tiger Cloud’s telemetry‑tuned Postgres, then turn your biggest metric/event tables into hypertables and compressed columnstore with continuous aggregates. You get faster ingestion and analytics without managing a separate time‑series system.
- Best for mixed OLTP + analytics apps hitting limits on RDS/Aurora:
  Because Tiger Cloud can run transactional queries and real‑time analytics side‑by‑side using TimescaleDB’s primitives (hypertables, continuous aggregates) and managed features (multi‑AZ HA, read replicas). The migration process keeps downtime low while you shift read‑heavy/reporting traffic off your constrained RDS/Aurora cluster.
Limitations & Considerations
- Aurora/RDS parameter constraints:
  RDS and Aurora control certain parameters (e.g., wal_level, max_replication_slots) via parameter groups. If you can’t enable logical replication due to policy or engine version, you may have to fall back to a short maintenance‑window dump/restore or an application‑driven dual‑write strategy during migration.
- DDL and long‑running migrations:
  Logical replication doesn’t automatically replicate schema changes. For long migrations, you must:
  - Freeze or tightly control DDL during the migration window, or
  - Implement a process to apply DDL changes on Tiger Cloud in lockstep.

  For very large, evolving schemas, plan a shorter “final sync” period to minimize divergence.
Pricing & Plans
Tiger Cloud’s pricing is designed to be transparent and predictable: you pay for the service size (compute + storage) with no per‑query fees and no extra charges for automated backups or normal ingest/egress within the service. You can see usage and costs directly in Tiger Console, billed monthly in arrears.
For migration from AWS RDS/Aurora PostgreSQL, the typical choices are:
- Performance Plan:
  Best for teams needing a fast, single‑AZ Postgres service with TimescaleDB to replace a single RDS or small Aurora deployment. Ideal when:
  - You’re ingesting time‑series or event data at increasing rates.
  - RDS/Aurora performance tuning is becoming high‑maintenance.
  - You want HA options but are starting with a simple topology.
- Scale / Enterprise Plans:
  Best for teams consolidating multiple RDS/Aurora clusters or running mission‑critical telemetry workloads that require:
  - Multi‑AZ high availability and read replicas.
  - Strong SLAs and 24/7 support with defined severity response.
  - Compliance requirements (SOC 2 report access, GDPR support, HIPAA on Enterprise).
  - Larger storage footprints with tiered storage and more aggressive compression.
For exact pricing numbers and a sizing recommendation based on your current RDS/Aurora usage, contact TigerData directly.
Frequently Asked Questions
How much downtime should I expect when moving from RDS/Aurora PostgreSQL to Tiger Cloud?
Short Answer: With logical replication, downtime is typically limited to a brief cutover window—often a few minutes to freeze writes, let replication catch up, and switch application connections.
Details:
The main downtime occurs during the final synchronization:
1. You take a snapshot (via pg_dump or similar) while your application continues writing to RDS/Aurora.
2. You load that snapshot into Tiger Cloud.
3. Logical replication streams ongoing changes so Tiger Cloud stays close to real time.
4. At cutover, you:
   - Put the app into maintenance mode or revoke write privileges on RDS/Aurora.
   - Wait for replication lag to reach zero.
   - Update application connection strings to point to Tiger Cloud.
The “hard stop” window is essentially the time for step 4 plus any smoke testing you choose to perform. For many teams, this fits into a standard maintenance window with far less disruption than a full downtime dump/restore.
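To size that window for your own system, a back‑of‑envelope calculation is often enough. This sketch assumes you have measured two numbers yourself — outstanding WAL lag in bytes and the subscriber’s observed apply throughput; the figures below are placeholders, not benchmarks:

```python
# Rough estimate of the replication catch-up portion of the cutover window.
# Inputs are measurements from your own migration, not fixed constants.

def catchup_seconds(lag_bytes: int, apply_bytes_per_sec: int) -> float:
    """Seconds for the subscriber to drain the remaining WAL lag."""
    return lag_bytes / apply_bytes_per_sec

# e.g. 512 MiB of outstanding WAL applied at 64 MiB/s:
print(catchup_seconds(512 * 1024**2, 64 * 1024**2))  # 8.0
```

Add your smoke-testing time and connection-string rollout time to this figure to get a realistic maintenance-window estimate.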
Do I need to change my application code to run on Tiger Cloud instead of RDS/Aurora?
Short Answer: No. Tiger Cloud is standard Postgres with TimescaleDB; most applications only need a connection string update and some tuning to leverage new features.
Details:
Tiger Cloud keeps Postgres as the foundation:
- Same SQL dialect, transaction semantics, and JDBC/PG drivers.
- Same schema objects (tables, indexes, views, foreign keys).
- TimescaleDB adds new primitives (hypertables, compression, continuous aggregates) but doesn’t require you to change existing queries.
The minimal changes you’ll likely make:
- Connection details: Update hostname, port, credentials, and SSL parameters to the new Tiger Cloud service.
- Performance tuning: After migration, you may:
- Convert heavy time‑series tables to hypertables.
- Add continuous aggregates for rollups to offload expensive queries.
- Adjust index strategies given improved partitioning and compression.
- Ops configuration: Integrate Tiger Cloud’s backups, PITR, and monitoring into your existing observability and incident response process.
If you were using Aurora‑specific features (e.g., some cluster‑specific diagnostics), you may need to map those to standard Postgres equivalents, but the core application logic typically remains unchanged.
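To make the “only a connection string update” point concrete, here is a small, hypothetical Python helper that retargets a key/value libpq DSN at a new endpoint. The `retarget_dsn` name is ours, and the endpoint placeholders follow this article’s `<...>` convention; only the DSN keys (`host`, `port`, `dbname`, `user`, `sslmode`) are standard libpq parameters:

```python
# Hypothetical sketch: rewrite the host in a key/value libpq DSN and
# force TLS, leaving every other connection parameter untouched.

def retarget_dsn(dsn: str, new_host: str) -> str:
    """Replace the host in a space-separated 'key=value' DSN and require SSL."""
    parts = dict(kv.split("=", 1) for kv in dsn.split())
    parts["host"] = new_host
    parts["sslmode"] = "require"  # Tiger Cloud connections use TLS
    return " ".join(f"{k}={v}" for k, v in parts.items())

old_dsn = "host=<rds-endpoint> port=5432 dbname=<db_name> user=app_user"
print(retarget_dsn(old_dsn, "<tigercloud-endpoint>"))
# host=<tigercloud-endpoint> port=5432 dbname=<db_name> user=app_user sslmode=require
```

In practice this kind of change usually lives in your app’s config or secrets manager rather than code, which is exactly why no application logic needs to change.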
Summary
Migrating from AWS RDS/Aurora PostgreSQL to Tiger Cloud is fastest when you treat it as a Postgres‑to‑Postgres move: seed Tiger Cloud with a snapshot, stream changes with logical replication, and flip traffic once lag drains. Because Tiger Cloud is standard Postgres extended with TimescaleDB, your app’s SQL and drivers continue to work, while you gain telemetry‑grade capabilities—hypertables for automatic partitioning, row‑columnar storage with up to 98% compression, tiered storage, and managed HA, backups, and 24/7 ops support.
This approach avoids the “fragile and high‑maintenance” pattern of stitching together extra systems just to achieve scale. You keep a single, Postgres‑native database, but one that’s explicitly built for live telemetry and real‑time analytics.