
How do I connect Datadog/Prometheus to TigerData Tiger Cloud for database and query monitoring?
Most teams reach for Datadog or Prometheus once their Tiger Cloud services hit real production traffic—and they’re right to do it. TigerData already exposes rich, Postgres-native metrics inside Tiger Console; connecting those metrics to your observability stack lets you correlate database behavior with application latency, host metrics, and alerting.
Quick Answer: You connect Datadog or Prometheus to Tiger Cloud by exporting Tiger Cloud metrics and Postgres statistics (via the service connection string) into your preferred collector, then wiring dashboards and alerts around TigerData’s built‑in telemetry (query performance, compression, continuous aggregates, and tiered storage).
The Quick Overview
- What It Is: A Tiger Cloud monitoring setup that streams database and query metrics into Datadog or Prometheus using standard Postgres integrations and TigerData’s built-in observability.
- Who It Is For: SREs, DBAs, and platform engineers running production workloads on Tiger Cloud who want centralized visibility and alerting.
- Core Problem Solved: You avoid “flying blind” on database performance and stop guessing about query health, storage behavior, and background jobs—everything lands in the same observability stack as your apps and infrastructure.
How It Works
At a high level, you monitor Tiger Cloud with Datadog or Prometheus in the same way you would monitor Postgres—using standard exporters or agents—while taking advantage of TigerData’s extended metrics for time-series, compression, continuous aggregates, and storage tiering.
The flow looks like this:
- The Tiger Cloud service exposes:
  - A standard Postgres endpoint (for `pg_stat*` views and query metrics).
  - Built-in Tiger Cloud metrics visible in Tiger Console (resource consumption, compression, continuous aggregates, tiering behavior).
- A Datadog Agent or Prometheus exporter connects to the service using a read-only Postgres user, pulling metrics at a fixed interval.
- Your observability platform stores, visualizes, and alerts on those metrics alongside host and application telemetry.
In practice, you’ll follow three phases:
- Prepare Tiger Cloud for monitoring
- Connect Datadog or Prometheus
- Build database and query dashboards/alerts
1. Prepare Tiger Cloud for Monitoring
Before wiring tools together, make sure your Tiger Cloud environment is ready.
1.1. Confirm network access
- Ensure your Datadog Agent or Prometheus exporter can reach the Tiger Cloud service endpoint:
  - If you’re peering VPCs or using a private network path, configure routing and security groups.
  - If you’re using public connectivity, set up IP allow lists so only your monitoring hosts can connect.
Important: Treat the monitoring client like an application: it must be allowed to connect to the Tiger Cloud service on the standard Postgres port, and you should enforce TLS 1.2+ (Tiger Cloud encrypts in transit).
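Before configuring an agent, it can help to verify that your monitoring hosts enforce the same TLS floor. A minimal Python sketch of that policy; pass a context like this to any client library that accepts a custom `SSLContext`:

```python
import ssl

def make_tls_context() -> ssl.SSLContext:
    """Build a TLS context that refuses anything older than TLS 1.2,
    matching Tiger Cloud's encryption-in-transit requirement."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

ctx = make_tls_context()
```

For an end-to-end check, connect from the monitoring host itself with `psql` and `sslmode=require` before deploying the agent.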
1.2. Create a read-only monitoring user
Use a least-privilege user for metrics collection. Connect as an admin to your Tiger Cloud service and run:
```sql
-- Create a dedicated monitoring role
CREATE ROLE monitor WITH LOGIN PASSWORD 'use-a-strong-password';

-- Restrict to read-only access
GRANT CONNECT ON DATABASE mydb TO monitor;
GRANT USAGE ON SCHEMA public TO monitor;

-- Grant read access to statistics and monitoring views
GRANT pg_monitor TO monitor;
```
Note: Many exporters require `pg_monitor` to read `pg_stat_activity`, `pg_stat_database`, and related views.
1.3. Decide which metrics you care about
TigerData adds observability on top of standard Postgres stats:
- Database load: `pg_stat_database`, connection counts, TPS.
- Query performance: slow queries, time spent in execution, lock wait times.
- Time-series primitives:
  - Compression ratios and chunk states.
  - Continuous aggregate refresh behavior.
  - Tiered storage and movement of data to object storage.
You’ll map these to Datadog/Prometheus metrics so you can see:
- Are writes/backfills saturating the service?
- Are continuous aggregates keeping up with real-time data (watermarks, lag)?
- Is compression/tiering actually delivering the storage and cost savings you expect?
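The last question above is simple arithmetic once you have before/after byte counts, and it is worth encoding once so dashboards and ad-hoc checks agree. A small sketch; the byte figures are hypothetical, standing in for whatever your compression stats report:

```python
def compression_ratio(uncompressed_bytes: int, compressed_bytes: int) -> float:
    """Ratio of original size to compressed size (10.0 means 10x)."""
    if compressed_bytes <= 0:
        raise ValueError("compressed_bytes must be positive")
    return uncompressed_bytes / compressed_bytes

def storage_savings_pct(uncompressed_bytes: int, compressed_bytes: int) -> float:
    """Percentage of storage saved by compression."""
    return 100.0 * (1 - compressed_bytes / uncompressed_bytes)

# Hypothetical example: a hypertable shrinks from 50 GiB to 5 GiB
before = 50 * 1024**3
after = 5 * 1024**3
print(f"{compression_ratio(before, after):.1f}x, "
      f"{storage_savings_pct(before, after):.0f}% saved")
# -> 10.0x, 90% saved
```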
2. Connect Datadog to Tiger Cloud
Datadog’s Postgres integration works with Tiger Cloud because Tiger Cloud is Postgres + TimescaleDB under the hood.
2.1. Configure Datadog Agent for Postgres
On the host where the Datadog Agent runs (Kubernetes, VM, or container), enable the Postgres integration.
A minimal postgres.d/conf.yaml might look like:
```yaml
init_config:

instances:
  - host: <your-tiger-cloud-hostname>
    port: 5432
    dbname: mydb
    username: monitor
    password: "<strong-password>"
    ssl: require  # TLS mode; use verify-full if you manage CA certificates
    tags:
      - env:prod
      - service:tiger-cloud-mydb
      - provider:tigerdata
```
Important: Use TLS mode `require` (or `verify-full` if you manage CA certificates) to align with Tiger Cloud’s encryption in transit.
Restart the Datadog Agent after updating the configuration.
2.2. Enable custom query collection (optional but recommended)
To get richer query- and TimescaleDB-specific insights, configure custom queries. For example:
```yaml
custom_queries:
  - metric_prefix: tigerdb.compression
    query: >
      SELECT hypertable_name, compression_status, compressed_segment_bytes
      FROM timescaledb_information.compressed_hypertables;
    columns:
      - name: hypertable_name
        type: tag
      - name: compression_status
        type: tag
      - name: compressed_segment_bytes
        type: gauge
  - metric_prefix: tigerdb.cagg
    query: >
      SELECT view_name, completed_threshold, invalidations_total
      FROM timescaledb_information.continuous_aggregates;
    columns:
      - name: view_name
        type: tag
      - name: completed_threshold
        type: gauge
      - name: invalidations_total
        type: gauge
```

Note: Treat the view and column names above as illustrative; the `timescaledb_information` schema varies across TimescaleDB versions, so verify the exact names against your service before deploying.
These queries:
- Use TimescaleDB introspection views to surface:
  - Which hypertables are compressed and how much space they occupy.
  - How continuous aggregates are progressing toward real-time.
- Produce Datadog metrics you can graph and alert on.
2.3. Validate metrics in Datadog
In Datadog, search for standard Postgres metrics like:
- `postgresql.connections`
- `postgresql.rows_returned`
- `postgresql.rows_fetched`
- `postgresql.deadlocks`
And your custom ones:
- `tigerdb.compression.compressed_segment_bytes`
- `tigerdb.cagg.completed_threshold`
Use these to build dashboards that combine:
- Database health: connection utilization, deadlocks, disk usage, WAL generation.
- Time-series behavior: compression ratio over time, continuous aggregate lag.
- Query performance: slow queries and resource-heavy statements.
3. Connect Prometheus to Tiger Cloud
Prometheus typically monitors Postgres via an exporter (such as postgres_exporter) that connects using standard SQL and surfaces metrics at an HTTP endpoint. The same pattern works with Tiger Cloud.
3.1. Deploy a Postgres exporter
Configure your exporter with the Tiger Cloud service connection, for example via environment variables:
```bash
export DATA_SOURCE_NAME="postgresql://monitor:<strong-password>@<your-tiger-cloud-hostname>:5432/mydb?sslmode=require"
./postgres_exporter
```
Or via config if your exporter uses a YAML/TOML file.
Warning: Don’t embed credentials in images or repos. Use secrets management in your orchestrator (Kubernetes secrets, AWS SSM, etc.).
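If you template the DSN yourself, also remember that passwords containing characters like `@` or `/` must be percent-encoded. A sketch, assuming the secret is injected as an environment variable (`MONITOR_PASSWORD` is a name chosen here for illustration):

```python
import os
from urllib.parse import quote

def build_dsn(host: str, dbname: str, user: str = "monitor") -> str:
    """Build a Postgres DSN, percent-encoding the password so special
    characters don't break URL parsing. The password comes from the
    environment (injected by a secrets manager), never source control."""
    password = os.environ["MONITOR_PASSWORD"]
    return (
        f"postgresql://{user}:{quote(password, safe='')}@{host}:5432/{dbname}"
        "?sslmode=require"
    )

# Example (hypothetical host):
# build_dsn("<your-tiger-cloud-hostname>", "mydb")
```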
3.2. Scrape with Prometheus
Add a scrape job in prometheus.yml:
```yaml
scrape_configs:
  - job_name: 'tiger-cloud-postgres'
    scrape_interval: 15s
    static_configs:
      - targets: ['postgres-exporter:9187']
        labels:
          env: 'prod'
          service: 'tiger-cloud-mydb'
          provider: 'tigerdata'
```
Prometheus will now scrape the exporter and ingest metrics about your Tiger Cloud service.
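Before tightening `scrape_interval`, it is worth estimating how many statements the exporter will run against the service per day. A back-of-the-envelope helper; the queries-per-scrape figure depends on your exporter's defaults plus any custom queries, and 10 is just an assumed value:

```python
def daily_exporter_queries(scrape_interval_s: int, queries_per_scrape: int) -> int:
    """Rough count of SQL statements the exporter issues per day,
    useful for judging whether a scrape interval is too aggressive."""
    scrapes_per_day = 86_400 // scrape_interval_s
    return scrapes_per_day * queries_per_scrape

print(daily_exporter_queries(15, 10))  # -> 57600
```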
3.3. Extend metrics with TimescaleDB introspection
Many Postgres exporters support custom queries via configuration. Use that to export TigerData-specific metrics:
```yaml
queries:
  - name: tigerdb_compression
    help: "TimescaleDB compression metrics for hypertables"
    metrics:
      - gauge:
          name: tigerdb_compressed_segment_bytes
          help: "Compressed bytes per hypertable"
          key: [hypertable_name]
          values: [compressed_segment_bytes]
    query: |
      SELECT hypertable_name, compressed_segment_bytes
      FROM timescaledb_information.compressed_hypertables;
  - name: tigerdb_cagg_progress
    help: "Continuous aggregate completion metrics"
    metrics:
      - gauge:
          name: tigerdb_cagg_completed_threshold
          help: "Completed threshold for continuous aggregates"
          key: [view_name]
          values: [completed_threshold]
    query: |
      SELECT view_name, completed_threshold
      FROM timescaledb_information.continuous_aggregates;
```

Note: Custom-query file formats differ between exporters, and the `timescaledb_information` views change between TimescaleDB versions, so treat the schema and view names above as illustrative and check your exporter’s documentation before deploying.
Now you can graph:
- `tigerdb_compressed_segment_bytes`
- `tigerdb_cagg_completed_threshold`

from your Prometheus UI or Grafana.
Features & Benefits Breakdown
| Core Feature | What It Does | Primary Benefit |
|---|---|---|
| Postgres-compatible monitoring | Uses standard Postgres stats views (pg_stat_database, pg_stat_activity, etc.) via Datadog or Prometheus exporters. | No custom agents required; you reuse existing Postgres monitoring patterns and tooling. |
| TigerData observability metrics | Exposes metrics for compression, continuous aggregates, and tiered storage through Tiger Console and SQL. | Deep visibility into live telemetry workloads, not just generic Postgres health. |
| Secure, production-ready setup | Uses TLS 1.2+, read-only users, and existing network controls (VPC peering, IP allow lists). | Aligns with production security/compliance standards while still giving your SREs full visibility. |
Ideal Use Cases
- Best for production telemetry workloads: Because it lets you monitor high-ingest time-series, events, and analytics queries in real time, including chunk compression and continuous aggregates.
- Best for teams consolidating monitoring: Because it plugs Tiger Cloud into Datadog or Prometheus using the same pipelines and agents you already use for your other Postgres databases and services.
Limitations & Considerations
- Exporter-level overhead: Exporters run queries against statistics and TimescaleDB introspection views. This is normally light, but avoid overly aggressive scrape intervals or heavy custom queries. Start with 15–30s and adjust.
- Eventual-consistency in rollups: Metrics around continuous aggregates and tiered storage reflect eventual behavior (refresh windows, watermarks). Don’t treat them as “per-row real-time”; build alerts with expected lag in mind.
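Building alerts "with expected lag in mind" usually means separating normal refresh lag from a genuine stall. A hedged sketch of that threshold logic; the refresh interval and tolerance factor are illustrative values you would tune per aggregate:

```python
from datetime import datetime, timedelta, timezone

def cagg_is_stalled(
    watermark: datetime,
    now: datetime,
    refresh_interval: timedelta,
    tolerance: float = 2.0,
) -> bool:
    """Return True if a continuous aggregate's watermark is more than
    `tolerance` refresh intervals behind -- i.e. lag beyond the expected
    eventual-consistency window, which is worth alerting on."""
    lag = now - watermark
    return lag > tolerance * refresh_interval

# Hypothetical example: refreshes every 5 minutes, watermark 30 minutes old
now = datetime(2025, 1, 1, 12, 0, tzinfo=timezone.utc)
watermark = now - timedelta(minutes=30)
print(cagg_is_stalled(watermark, now, timedelta(minutes=5)))  # -> True
```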
Pricing & Plans
Tiger Cloud monitoring access is included with your service:
- Built-in observability in Tiger Console (query metrics, compression, continuous aggregates, data tiering) is part of the platform—you don’t pay extra for these metrics or for automated backups, and there are no per-query fees.
- Connecting Datadog/Prometheus uses your existing cloud egress and monitoring billing models.
Typical Tiger Cloud plans:
- Performance: Best for teams needing a managed Postgres + TimescaleDB service with built-in metrics, automated backups, and standard HA for production apps.
- Scale / Enterprise: Best for teams needing multi-AZ high availability, larger ingest volumes, fine-grained access controls, SOC 2 Type II reports, GDPR support, and—on Enterprise—HIPAA support and 24/7 SLA-backed operations.
Note: You pay TigerData for the Tiger Cloud service (compute, storage, replicas) with transparent, itemized billing and no per-query fees; Datadog/Prometheus costs are billed by your observability provider.
Frequently Asked Questions
Do I need a special TigerData agent, or can I use standard Postgres integrations?
Short Answer: You can use standard Postgres integrations—no special TigerData agent is required.
Details: Tiger Cloud is a managed Postgres service with the TimescaleDB extension. That means Datadog’s Postgres integration and Prometheus Postgres exporters work out of the box. You configure them with the Tiger Cloud connection string, a read-only monitoring user, and TLS. To expose TigerData-specific metrics (compression, continuous aggregates, tiering), you add custom SQL queries that read TimescaleDB introspection views; the results surface as standard metrics in your observability stack.
How do I monitor slow queries and real-time performance on Tiger Cloud?
Short Answer: Use your monitoring tool’s Postgres integration to collect query stats, then correlate them with Tiger Cloud’s built-in query and compression metrics.
Details: On the Tiger Cloud side, you get query-level visibility directly in Tiger Console—no extra instrumentation. To centralize this in Datadog or Prometheus:
- Ensure the monitoring user has `pg_monitor` so the exporter can read `pg_stat_statements` (if enabled), `pg_stat_activity`, and related views.
- In Datadog, enable query metrics (`collect_query_metrics`) and build dashboards around top slow queries, total execution time, and rows scanned vs returned.
- In Prometheus/Grafana, use exporter metrics like `pg_stat_activity_max_tx_duration_seconds` and `pg_stat_database_tup_returned`, and augment them with custom TimescaleDB metrics.
- Combine these with TigerData metrics (compression, continuous aggregates) to see when slow queries correlate with uncompressed chunks, backfill workloads, or lagging continuous aggregates.
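The ranking logic behind a "top slow queries" panel is easy to prototype offline before wiring it into a dashboard. A sketch, assuming rows shaped like `pg_stat_statements` output (query text, total execution time in ms, call count); the sample statements are hypothetical:

```python
def top_slow_queries(rows, n=5):
    """Rank query stats by mean execution time, descending.
    Each row is (query_text, total_exec_ms, calls); zero-call rows
    are skipped to avoid division by zero."""
    ranked = sorted(
        ((query, total_ms / calls) for query, total_ms, calls in rows if calls),
        key=lambda item: item[1],
        reverse=True,
    )
    return ranked[:n]

# Hypothetical stats rows
rows = [
    ("SELECT * FROM metrics WHERE ts > $1", 120_000.0, 40),     # 3000 ms mean
    ("INSERT INTO metrics VALUES ($1, $2)", 50_000.0, 10_000),  # 5 ms mean
]
print(top_slow_queries(rows, n=1))
# -> [('SELECT * FROM metrics WHERE ts > $1', 3000.0)]
```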
Summary
Connecting Datadog or Prometheus to Tiger Cloud is straightforward because TigerData stays Postgres-native. You:
- Treat your Tiger Cloud service like any other Postgres database from a monitoring perspective.
- Use a read-only monitoring user over TLS to feed metrics into Datadog or Prometheus.
- Extend your observability with TigerData-specific metrics—compression, continuous aggregates, tiered storage—using simple SQL against TimescaleDB’s introspection views.
The result is a unified monitoring story: Postgres performance, live telemetry workloads, and Tiger Cloud’s background processes all show up in the same dashboards and alerts that your SREs already trust.