TigerData: how do I create a zero-copy fork for debugging an incident or testing a migration?

When you’re debugging a production incident or rehearsing a big migration, you want real data and a realistic environment—but you can’t afford to break prod. TigerData’s zero-copy forks give you an isolated Postgres service based on your production data, without a full data copy, so you can investigate, test, and iterate safely.

Quick Answer: Create a zero-copy fork in Tiger Cloud by forking your existing service at NOW, LAST_SNAPSHOT, or a specific PITR target_time. You get a fully isolated Postgres+TimescaleDB instance that reuses underlying storage, so you can debug incidents or test migrations at production scale with minimal cost and downtime.

The Quick Overview

  • What It Is: A zero-copy fork is a new Tiger Cloud service created from an existing one that reuses underlying storage blocks instead of copying all data. It’s a full Postgres instance with its own compute, connection string, and configuration, but it starts from the data state of the source service.
  • Who It Is For: Postgres and platform engineers, SREs, and developers who need a safe, realistic environment for incident debugging, schema changes, performance tuning, or migration rehearsals—without impacting production.
  • Core Problem Solved: It removes the need for slow, expensive full clones or risky “test in prod” changes by giving you a fast, storage-efficient way to spin up isolated environments with real telemetry data.

How It Works

Zero-copy forks are built on Tiger Cloud’s snapshot and point‑in‑time recovery (PITR) mechanisms. Instead of duplicating all data files, Tiger Cloud creates a new managed Postgres service that initially references the same underlying storage as the source at a specific point in time. From there, the fork diverges: it has its own WAL stream, compute, configuration, and lifecycle.

When you create a fork, you choose a fork strategy:

  • NOW – fork at the current database state.
  • LAST_SNAPSHOT – fork from the last existing snapshot (faster to provision).
  • PITR – fork at an arbitrary point in time; you must also set the target_time parameter.

Within that forked service, everything is Postgres‑native: you connect with a standard connection string, run SQL against hypertables, continuous aggregates, and vector indexes, and exercise your schema or migration scripts exactly as you would in production.

A typical flow:

  1. Choose the source and strategy:
    Select your production Tiger Cloud service and decide whether you want “right now,” the latest snapshot, or an earlier time (PITR) that captures pre-incident state.

  2. Create the forked service:
    Use the Tiger Console, CLI, or API to create a new service with the chosen strategy and a name that clearly marks it as a debug or migration test environment.

  3. Connect, test, and iterate safely:
    Point your migration scripts, experiments, or incident playbook at the fork. Because it’s a separate service, you can run heavy queries, drop and recreate indexes, or even destroy entire schemas without any risk to production.
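The naming step in the flow above is easy to automate. This sketch stamps the fork name so it is unmistakably a disposable debug environment; the request fields are placeholders, not the documented Tiger Cloud API:

```python
from datetime import datetime, timezone

def debug_fork_name(source_name: str) -> str:
    """Name the fork so nobody mistakes it for production."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M")
    return f"{source_name}-debug-{stamp}"

def plan_fork(source_name: str, strategy: str) -> dict:
    # Hypothetical request body; consult the Tiger Cloud API docs for the real one.
    return {
        "name": debug_fork_name(source_name),
        "fork_strategy": strategy,
    }
```

A timestamped `-debug-` suffix also makes stale forks easy to spot and clean up later.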

1. NOW: Fork at the Current State

  • What it does: Takes the current database state (including all committed transactions) and forks a new service from that exact moment.
  • When to use it:
    • Debugging an ongoing incident where the current state is the thing you need to inspect.
    • Testing a migration or release against what’s in prod right now.

Important: If you’re analyzing data corruption or a bad batch write, consider whether you actually want the broken state (NOW) or a pre‑incident state (PITR).

2. LAST_SNAPSHOT: Fork From the Latest Snapshot

  • What it does: Creates the fork based on the most recent snapshot Tiger Cloud has already taken of the source service.
  • Why it’s faster: No need to compute a new snapshot; Tiger Cloud uses existing snapshot metadata, so provisioning time is reduced.
  • When to use it:
    • Non‑urgent debugging where “up to last snapshot” is good enough.
    • Routine staging/test environments that mirror prod but don’t need to be current to the second.

Note: There may be a gap between LAST_SNAPSHOT and the current time, depending on your snapshot schedule. Recent writes after the snapshot won’t appear in the fork.
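One way to sanity-check that gap before committing to LAST_SNAPSHOT — the one-hour staleness tolerance here is an arbitrary example, not a Tiger Cloud default:

```python
from datetime import datetime, timedelta, timezone

def snapshot_is_fresh_enough(last_snapshot_at: datetime,
                             max_staleness: timedelta = timedelta(hours=1)) -> bool:
    """True if the writes missing from the last snapshot fall within tolerance."""
    gap = datetime.now(timezone.utc) - last_snapshot_at
    return gap <= max_staleness
```

If the check fails and you still need recent data, fall back to a NOW fork instead.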

3. PITR: Point‑in‑Time Recovery Fork

  • What it does: Replays WAL (write‑ahead logs) to reconstruct the database state at a specific time, then creates a forked service at that state.
  • Required parameter:
    • target_time – the timestamp you want to recover to (for example, just before a bad deploy).

Example scenario:
You deployed a migration at 12:05:00 UTC, and metrics show problems starting at 12:07:30 UTC. You can fork with:

  • fork_strategy = PITR
  • target_time = '2026-03-01T12:07:00Z'

Now you have a full Tiger Cloud service mirroring the exact state shortly before the incident.

Warning: PITR forks may take longer than NOW or LAST_SNAPSHOT because they require WAL replay up to target_time. For large, high‑ingest telemetry workloads, factor this into your incident runbook.
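Picking a `target_time` from an incident timeline can be scripted. In this sketch the 30-second safety margin is an arbitrary choice; the point is to always land shortly before the first bad metric, in UTC:

```python
from datetime import datetime, timedelta, timezone

def pitr_target_time(first_bad_metric_at: datetime,
                     margin: timedelta = timedelta(seconds=30)) -> str:
    """Return an ISO 8601 UTC timestamp safely before the first sign of trouble."""
    if first_bad_metric_at.tzinfo is None:
        raise ValueError("use timezone-aware timestamps to avoid UTC/local confusion")
    target = (first_bad_metric_at - margin).astimezone(timezone.utc)
    return target.strftime("%Y-%m-%dT%H:%M:%SZ")
```

Applied to the scenario above, problems starting at 12:07:30 UTC yield a target of 12:07:00 UTC.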

Features & Benefits Breakdown

Core Feature | What It Does | Primary Benefit
--- | --- | ---
Zero-copy forking strategies | Lets you fork at NOW, LAST_SNAPSHOT, or a PITR target_time. | Get precisely the data state you need for debugging or tests.
Isolated Postgres services | Creates a separate Tiger Cloud instance with its own compute and config. | Run heavy queries or destructive tests with zero impact on production.
Postgres-native interface | Keeps SQL, extensions, hypertables, and Timescale features intact. | Test real migrations and performance changes exactly as in prod.

Ideal Use Cases

  • Best for debugging a production incident:
    Because it lets you recreate the exact data state (current, last snapshot, or point-in-time) in an isolated environment, so you can run expensive diagnostic queries, adjust indexes, and replay application behavior without risk.

  • Best for testing a schema or data migration:
    Because you can fork your production service and run full migration scripts, backfills, ALTER TABLE operations, and index changes at real telemetry scale—validating duration, lock behavior, and impact before touching prod.

Limitations & Considerations

  • Forks are separate services:
    Each forked service consumes its own compute resources and counts as a Tiger Cloud service for billing. While storage is initially shared at the block level, ongoing writes to the fork will allocate new storage.

  • Time‑based accuracy depends on the strategy:

    • NOW may capture data after an incident has already started.
    • LAST_SNAPSHOT omits writes that occurred after the snapshot.
    • PITR hinges on accurate target_time selection and available WAL history.
      Always confirm your incident timeline before choosing a strategy.

Pricing & Plans

Zero-copy forks are created as standard Tiger Cloud services. You select a plan (typically mirroring or slightly smaller than production, depending on your use case) and size compute and storage accordingly.

  • You are not charged per query, per backup, or per GB of ingest/egress; billing is based on service size and runtime, with automated backups included.
  • Because zero-copy forks reuse underlying storage up front, they are more space-efficient than full clones, especially for large telemetry datasets.

Common patterns:

  • Performance Plan fork: Best for teams needing a realistic replica of production for incident debugging or performance testing, with similar ingest/query characteristics but potentially smaller HA requirements.
  • Scale or Enterprise Plan fork: Best for regulated or mission-critical environments that require the same HA, security posture (for example, HIPAA on Enterprise), and operational guarantees as production during rehearsals and DR testing.

(Exact pricing and plan details may change; check the Tiger Console or sales team for current numbers.)

Frequently Asked Questions

How do I decide between NOW, LAST_SNAPSHOT, and PITR for my fork?

Short Answer: Use NOW for live state debugging, LAST_SNAPSHOT when you want a faster fork and can tolerate missing very recent writes, and PITR when you must reconstruct the database at a precise moment (for example, just before a bad deploy).

Details:

  • Choose NOW if you’re investigating how the system looks right now and you want every committed transaction included.
  • Choose LAST_SNAPSHOT if you’re setting up a general-purpose staging or test environment where “fresh enough” data is fine, and you value quicker provisioning.
  • Choose PITR when your incident analysis depends on exact timing—for example, to compare pre‑ and post‑migration behavior or to isolate when corruption started. Make sure WAL retention covers your desired target_time, and confirm the timestamp in UTC to avoid timezone confusion.
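Before committing to PITR, it is worth checking that WAL retention still covers your target. The three-day retention window below is a stand-in; use the actual retention configured for your service:

```python
from datetime import datetime, timedelta, timezone

def target_within_wal_retention(target_time: datetime,
                                retention: timedelta = timedelta(days=3)) -> bool:
    """True if target_time is recent enough to be reachable by WAL replay."""
    oldest_recoverable = datetime.now(timezone.utc) - retention
    return target_time >= oldest_recoverable
```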

Can I use a zero-copy fork as a long-lived staging environment?

Short Answer: Yes, but treat it as a full Tiger Cloud service with its own lifecycle and costs.

Details:
A zero-copy fork is a normal, managed Postgres+TimescaleDB instance once created. You can:

  • Apply schema changes independently.
  • Run continuous ingest from lower environments or synthetic workloads.
  • Configure retention, compression, and tiered storage policies differently from prod.

However, it will diverge from the source over time. If you need “always fresh” staging that closely tracks production data, you’ll periodically create new forks or design a sync strategy. Remember that each forked service uses compute continuously while it’s running, so clean up forks that you no longer need.

Summary

Zero-copy forks in Tiger Cloud give you an operationally safe way to debug incidents and test migrations with real telemetry data, without touching production. By choosing between NOW, LAST_SNAPSHOT, and PITR (with a specific target_time), you control the exact data state you replicate. Each fork is a full, Postgres-native TigerData service, so your incident response and migration rehearsal workflows can use the same SQL, hypertables, and tooling you rely on in production—just in an isolated, disposable environment.

Next Step

Get Started