
How do I enable TimescaleDB features on TigerData Tiger Cloud and convert an existing table to a hypertable?
Most teams land on Tiger Cloud with a Postgres mental model: spin up a service, connect with psql or your favorite client, and start creating tables. To unlock TimescaleDB’s performance for telemetry workloads—automatic partitioning, hybrid row-columnar storage, and time-series functions—you need two things:
- A Tiger Cloud service with TimescaleDB available (it ships with every Tiger Cloud service).
- Tables converted to hypertables, so TimescaleDB’s primitives can kick in.
This guide walks you step by step through enabling TimescaleDB features on Tiger Cloud and converting an existing table into a hypertable safely.
The Quick Overview
- What It Is: TimescaleDB on Tiger Cloud is a Postgres-native extension that adds hypertables, dynamic partitioning, hybrid row-columnar storage (Hypercore), and time-series functions—without changing your SQL.
- Who It Is For: Teams ingesting time-series, event, or tick data that outgrows plain Postgres tables: IoT, observability, fintech, Web3, and any live telemetry workload.
- Core Problem Solved: Plain Postgres tables become slow and expensive at high ingest + historical query scale. TimescaleDB’s hypertables and storage engine keep writes fast and analytics efficient while staying fully compatible with Postgres.
How It Works
On Tiger Cloud, TimescaleDB is installed and wired in for you. You connect using standard Postgres credentials, then:
- Verify the TimescaleDB extension is available. Confirm `timescaledb` is installed and enabled for your database.
- Prepare your existing table. Identify the time column (and an optional partitioning key) that defines “time-series” for your workload, and make sure the schema and indexes make sense.
- Convert the table into a hypertable. Run `SELECT create_hypertable(...)` on your existing table to turn it into a hypertable. TimescaleDB automatically partitions your data into chunks and unlocks advanced features like compression, columnar storage, and continuous aggregates.
The rest of this guide goes into the specifics, including SQL examples and operational considerations when doing this in production.
Step 1: Confirm TimescaleDB is enabled on Tiger Cloud
You don’t need to install TimescaleDB manually on Tiger Cloud—Tiger Cloud extends Postgres with the TimescaleDB extension out of the box.
1. Connect to your Tiger Cloud service
Use any Postgres client:
psql "postgres://YOUR_USER:YOUR_PASSWORD@YOUR_HOST:5432/YOUR_DB?sslmode=require"
Or via a GUI like pgAdmin, DataGrip, etc., using the connection string from Tiger Console.
2. Check that TimescaleDB is installed
Run:
SELECT *
FROM pg_available_extensions
WHERE name = 'timescaledb';
You should see a row for timescaledb with default_version set; installed_version is NULL until the extension has been created in this database.
3. Enable TimescaleDB in your database (if needed)
If the extension is not already created in your target database, run:
CREATE EXTENSION IF NOT EXISTS timescaledb;
Important:
Run `CREATE EXTENSION` once in each database where you want to use TimescaleDB features. You don’t need to enable it in the `template1` database unless you intentionally want all new databases to have TimescaleDB by default.
To verify:
SELECT extname, extversion
FROM pg_extension
WHERE extname = 'timescaledb';
At this point, TimescaleDB features are available. The next step is to convert your existing table into a hypertable so TimescaleDB can manage it efficiently.
Step 2: Inspect and prepare your existing table
Before you convert to a hypertable, you need to know:
- The time column: a `TIMESTAMP`, `TIMESTAMPTZ`, `DATE`, or integer-based time (e.g., Unix epoch) that defines time for your events.
- Optional partition key: a secondary dimension such as `device_id`, `customer_id`, or `region` that you may want to partition by alongside time for better data distribution.
- Existing indexes and constraints: you want to avoid redundant indexes after conversion. In particular, every unique index on a hypertable (including the primary key) must include the time column, so a `PRIMARY KEY (id)` typically needs to become `PRIMARY KEY (id, ts)` before conversion.
1. Look at the table schema
Example:
\d+ telemetry_events
Sample output (simplified):
Table "public.telemetry_events"
Column | Type | Modifiers
--------------+-----------------------------+------------------------
id | bigserial | not null
device_id | text | not null
ts | timestamptz | not null
metric_name | text | not null
metric_value | double precision | not null
metadata | jsonb |
Indexes:
"telemetry_events_pkey" PRIMARY KEY, btree (id)
"telemetry_events_ts_idx" btree (ts DESC)
"telemetry_events_device_ts_idx" btree (device_id, ts DESC)
In this case:
- Time column: `ts`
- Partition key candidate: `device_id`
Note:
Choose a time column that is present on all rows, is monotonically increasing for most inserts, and aligns with your queries (e.g., `WHERE ts >= NOW() - interval '7 days'`).
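Before converting, it is worth sanity-checking the time column against real data. A quick sketch for the example telemetry_events table:

```sql
-- Rows with a NULL timestamp cannot be assigned to a time chunk.
SELECT count(*) AS null_ts_rows
FROM telemetry_events
WHERE ts IS NULL;

-- The overall time span hints at how many chunks a given
-- chunk interval will produce.
SELECT min(ts) AS oldest, max(ts) AS newest
FROM telemetry_events;
```

If null_ts_rows is non-zero, clean up or backfill those rows before conversion.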
Step 3: Convert an existing table into a hypertable
TimescaleDB’s central primitive is the hypertable. It keeps your table’s logical interface (same name, same columns), but partitions it automatically into time-based chunks and optionally by a secondary key.
Basic conversion: time-only partitioning
If you only want to partition by time:
SELECT create_hypertable(
  'telemetry_events',    -- table name
  'ts',                  -- time column
  migrate_data => true   -- required when the table already contains rows
);
This:
- Keeps the table name `telemetry_events`
- Creates underlying chunk tables automatically
- Starts routing all new inserts through the hypertable logic
Advanced conversion: time + space partitioning
If your workload is multi-tenant or device-heavy, use a space partition:
SELECT create_hypertable(
  'telemetry_events',
  'ts',
  'device_id',           -- space-partitioning key
  number_partitions => 8,
  migrate_data => true
);
Important:
`number_partitions` should be chosen relative to your insert throughput and the cardinality of the key. Too many partitions with little data per partition can fragment data; too few might not distribute load effectively.
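Partition count is only half the story; the time interval per chunk matters too. The default is 7 days, and you can change it with TimescaleDB's set_chunk_time_interval (this affects newly created chunks, not existing ones):

```sql
-- Smaller chunks keep per-chunk indexes small enough to stay
-- in memory under very high ingest rates.
SELECT set_chunk_time_interval('telemetry_events', INTERVAL '1 day');
```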
What happens to existing data?
When you run create_hypertable with `migrate_data => true` on a non-empty table, TimescaleDB:
- Rewrites the table metadata so it becomes a hypertable
- Moves existing rows into chunks based on your time column
- Keeps your existing table name and schema intact
You do not need to rewrite data manually. Without `migrate_data => true`, create_hypertable refuses to convert a table that already contains rows, so there is no risk of a silent partial migration.
Warning:
On very large tables, converting to a hypertable can be resource-intensive. For production systems under heavy load, schedule conversion during a maintenance window and monitor CPU/IO.
Step 4: Review indexes, constraints, and defaults
When you convert, TimescaleDB preserves existing constraints and indexes. However, hypertables and chunks add their own indexing strategies.
1. Default TimescaleDB indexes
By default, create_hypertable creates an index on the time column (and, with space partitioning, a composite index on the partitioning key and time) unless equivalent indexes already exist. If you prefer to define all indexes yourself, disable this at conversion time:
SELECT create_hypertable(
  'telemetry_events',
  'ts',
  create_default_indexes => false,
  migrate_data => true
);
Then create your own indexes explicitly.
2. Keep or drop redundant indexes
For time-series workloads, you usually want indexes like:
-- Time-only
CREATE INDEX ON telemetry_events (ts DESC);
-- Time + dimension
CREATE INDEX ON telemetry_events (device_id, ts DESC);
If you had multiple overlapping indexes pre-conversion, you can drop ones that are no longer useful:
DROP INDEX IF EXISTS telemetry_events_ts_idx;
Note:
Always profile your queries (EXPLAIN ANALYZE) before dropping indexes. Hypertables improve underlying storage behavior, but query plans still rely heavily on appropriate indexes.
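For example, before dropping telemetry_events_ts_idx, confirm that a representative query still gets a sensible plan (chunk exclusion on ts plus an index scan):

```sql
EXPLAIN ANALYZE
SELECT avg(metric_value)
FROM telemetry_events
WHERE device_id = 'device-1'
  AND ts >= now() - interval '7 days';
```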
Step 5: Validate hypertable behavior
After conversion, treat it like a normal table—but verify that TimescaleDB is actually managing it.
1. Confirm it’s a hypertable
SELECT hypertable_schema, hypertable_name, num_chunks
FROM timescaledb_information.hypertables
WHERE hypertable_name = 'telemetry_events';
You should see telemetry_events listed as a hypertable.
2. Check chunk creation
Insert some data and confirm chunks exist:
INSERT INTO telemetry_events (device_id, ts, metric_name, metric_value)
VALUES
  ('device-1', now(), 'temp', 22.3),
  ('device-2', now() - interval '1 day', 'temp', 21.8);
SELECT *
FROM timescaledb_information.chunks
WHERE hypertable_name = 'telemetry_events';
You should see one or more chunk tables with ranges over ts.
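Two more helpers from the TimescaleDB API are handy for a quick look at the physical layout:

```sql
-- List the chunk tables backing the hypertable.
SELECT show_chunks('telemetry_events');

-- Total on-disk size of the hypertable across all chunks.
SELECT pg_size_pretty(hypertable_size('telemetry_events'));
```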
What TimescaleDB Features You Unlock on Tiger Cloud
Once your table is a hypertable on Tiger Cloud, you can start using the full TimescaleDB toolkit.
Automatic partitioning
Hypertables automatically partition data into chunks by time (and optionally space). You don’t manage partitions manually, and queries stay fast as data grows.
Hybrid row-columnar storage (Hypercore) and compression
Convert older chunks to compressed, columnar storage to cut storage costs and speed up analytics scans.
Example:
ALTER TABLE telemetry_events
SET (
timescaledb.compress,
timescaledb.compress_segmentby = 'device_id',
timescaledb.compress_orderby = 'ts DESC'
);
SELECT add_compression_policy(
'telemetry_events',
INTERVAL '7 days' -- compress data older than 7 days
);
This pattern:
- Keeps recent data in row-store for fast inserts
- Converts older data to columnar with up to ~95% compression (and often much better scan performance for aggregates)
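Once the policy has compressed some chunks, you can measure the actual savings with chunk_compression_stats (column names as in TimescaleDB 2.x; they may differ slightly across versions):

```sql
SELECT chunk_name,
       pg_size_pretty(before_compression_total_bytes) AS before,
       pg_size_pretty(after_compression_total_bytes)  AS after
FROM chunk_compression_stats('telemetry_events');
```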
Continuous aggregates
For always-fresh rollups (like per-minute or per-hour metrics), use continuous aggregates:
CREATE MATERIALIZED VIEW telemetry_events_1m
WITH (timescaledb.continuous) AS
SELECT
time_bucket('1 minute', ts) AS bucket,
device_id,
avg(metric_value) AS avg_metric
FROM telemetry_events
GROUP BY bucket, device_id;
SELECT add_continuous_aggregate_policy(
'telemetry_events_1m',
start_offset => INTERVAL '1 day',
end_offset => INTERVAL '1 minute',
schedule_interval => INTERVAL '1 minute'
);
Note:
Continuous aggregates are incrementally refreshed. There’s a refresh schedule and a watermark—queries might see slightly lagged rollups relative to raw data. Use direct queries on the hypertable when you absolutely require the latest unaggregated points.
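You query a continuous aggregate like any other relation, and you can refresh a window on demand when the policy lag matters for a one-off report:

```sql
-- Read the rollup directly.
SELECT bucket, device_id, avg_metric
FROM telemetry_events_1m
WHERE bucket >= now() - interval '1 hour'
ORDER BY bucket DESC;

-- Manually refresh the last two hours.
CALL refresh_continuous_aggregate(
  'telemetry_events_1m',
  now() - INTERVAL '2 hours',
  now()
);
```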
Features & Benefits Breakdown
| Core Feature | What It Does | Primary Benefit |
|---|---|---|
| Hypertables & chunks | Automatically partitions tables by time (and optional key) into chunks | Keeps writes fast and queries responsive as data volume grows |
| Hybrid row-columnar store | Stores recent data in row format; compresses older chunks into columnar form | Up to ~95% compression and faster analytical queries |
| Continuous aggregates | Maintains incrementally updated materialized views over hypertables | Real-time rollups without constant recomputation |
Ideal Use Cases
- Best for live telemetry & metrics: hypertables handle high ingest rates and mixed real-time + historical queries without you managing partitions or sharding manually.
- Best for analytics on large history: compression and columnar scanning make “months/years of data” queries cheap and fast, while Tiger Cloud handles HA, backups, and scaling.
Limitations & Considerations
- Conversion cost on very large tables: converting a multi-terabyte table to a hypertable can be resource-intensive. Plan a maintenance window, throttle ingest, and monitor the service. For extremely large legacy tables, consider a phased migration (new hypertable + backfill) instead of in-place conversion.
- Schema changes and compression: DDL on hypertables (especially with compressed chunks) is more nuanced than on plain tables. Review the TimescaleDB docs before frequent schema changes, and test in a staging Tiger Cloud service.
Pricing & Plans
Tiger Cloud pricing is service-based and transparent:
- You pay for the resources you allocate (compute, storage) and see them itemized in Tiger Console.
- There are no per-query fees, no surprise ingest/egress charges for standard usage, and you don’t pay separately for automated backups.
Exact plan names and limits vary over time, but conceptually:
- Performance (or equivalent entry plan): best for teams starting with dedicated telemetry workloads that need TimescaleDB features, HA, and predictable performance.
- Scale / Enterprise: best for teams with strict SLOs, multi-region/high-availability requirements, compliance needs (SOC 2, GDPR, HIPAA on Enterprise), and 24/7 support with SLAs.
Check the current Tiger Cloud pricing page or contact TigerData for detailed plan specs and region availability.
Frequently Asked Questions
Can I convert a busy production table to a hypertable without downtime?
Short Answer: For small tables, effectively yes; for large non-empty tables, plan for a window in which the table may be locked, and monitor closely.
Details:
create_hypertable holds a lock on the table while it converts it. On an empty or small table this is near-instant, but with migrate_data => true on a large table, existing rows are rewritten into chunks and concurrent reads and writes can be blocked for the duration. In addition:
- The data migration and chunk creation can consume significant CPU and IO.
- Index maintenance and catalog updates might temporarily affect latency.
If you’re converting a heavily used table:
- Schedule during off-peak hours.
- Temporarily slow down bulk ingest if possible.
- Monitor Tiger Cloud metrics (CPU, IO, query latency).
- Have a rollback plan (snapshot/backup) via Tiger Cloud’s automated backups and point-in-time recovery (PITR).
For extreme-scale tables, a dual-write approach (new hypertable + backfill from the old table) can give you more control.
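A minimal sketch of that dual-write approach, assuming an illustrative new table name of telemetry_events_new (not a prescribed convention):

```sql
-- New hypertable with the same columns; the application dual-writes
-- to both tables while the backfill runs.
CREATE TABLE telemetry_events_new (LIKE telemetry_events INCLUDING DEFAULTS);
SELECT create_hypertable('telemetry_events_new', 'ts');

-- Backfill history in bounded batches (repeat with the window
-- shifted back until all data is copied), then swap table names.
INSERT INTO telemetry_events_new
SELECT *
FROM telemetry_events
WHERE ts >= now() - interval '1 day'
  AND ts <  now();
```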
Do I need to change my application code after converting to a hypertable?
Short Answer: Usually no—hypertables behave like normal Postgres tables.
Details:
A hypertable keeps:
- The same table name
- The same schema
- The same SQL interface
Your existing INSERT, SELECT, UPDATE, DELETE, and JOIN queries continue to work. The difference is internal: TimescaleDB manages chunks, compression, and continuous aggregates.
Exceptions to watch for:
- Tools that rely on physical table names might see chunk tables; always target the hypertable name (`telemetry_events`), not the chunk names.
- Very specialized `pg_class`/`pg_inherits` introspection might behave differently; test any custom catalog queries.
For normal app workloads, no code changes are required beyond the initial CREATE EXTENSION and create_hypertable calls.
Summary
On Tiger Cloud, TimescaleDB is already built into your Postgres service. To actually benefit from it, you:
- Enable the `timescaledb` extension in your database.
- Identify your time column (and optional partition key).
- Run `create_hypertable` on your existing table.
- Tune indexes and then layer on compression and continuous aggregates.
You keep the Postgres you already know—same SQL, same drivers—but gain hypertables, hybrid row-columnar storage, and time-series functions that keep live telemetry workloads fast and cost-efficient at scale.