
TigerData vs AWS RDS PostgreSQL: what are the real pricing gotchas and where do costs land?
Most teams don’t blow their Postgres budget with a single bad choice—they bleed money slowly through hidden multipliers: storage you can’t compress, replicas you don’t really need, IOPS you never sized correctly, and surprise line items for backups, cross-AZ traffic, and query-heavy analytics workloads. When you compare TigerData and AWS RDS for PostgreSQL, the “gotchas” are almost entirely about how each platform meters your usage—and how they handle high-ingest, time-series workloads differently.
Quick Answer: TigerData prices Postgres for telemetry and analytics in a way that avoids the classic RDS traps: no per-query fees, no extra charges for backups, no hidden networking bill for reading your own data. RDS can be cost-effective for moderate, transactional workloads—but for high-ingest, time-series-style telemetry, its scaling pattern (more vCPUs, more storage, more IOPS, more replicas) tends to drive far higher total cost of ownership.
The Quick Overview
- What It Is: A comparison of TigerData’s PostgreSQL platform (Tiger Cloud + TimescaleDB) versus AWS RDS for PostgreSQL, focused specifically on real-world pricing behavior—not list prices, but how bills grow as your workload scales.
- Who It Is For: Engineering leaders, data platform owners, and SREs running Postgres for telemetry, observability, IoT, event data, or analytics-heavy SaaS workloads who need predictable costs at scale.
- Core Problem Solved: Understanding where each option’s pricing model helps or hurts you, especially for time-series and event workloads that grow fast and hit performance ceilings on vanilla Postgres.
How It Works
At a high level, both TigerData and RDS give you “a Postgres database in the cloud.” The difference is what they optimize for and how they meter your usage.
- AWS RDS PostgreSQL is a general-purpose managed database service. Pricing is primarily based on instance class (vCPUs/RAM), allocated storage, storage type/IOPS, backups, networking, and add-ons like read replicas and Multi-AZ. You pay more as you scale out instance size, storage, IOPS, and replicas—and telemetry workloads tend to push all of those levers at once.
- TigerData (Tiger Cloud + TimescaleDB) is a Postgres platform purpose-built for live telemetry and time-series data. It keeps a vanilla Postgres interface, but extends it with hypertables, row-columnar storage, tiered storage, and aggressive compression. Pricing is disaggregated into compute and storage, with transparent billing and no per-query, ingest, or egress fees. The engine primitives are explicitly designed to reduce the amount of compute, storage, and networking you need as volume grows.
Think of it this way:
- Baseline: On day one, both can look similarly priced: a Postgres instance of a given size.
- Growth: As your ingest rate, history window, and analytics workloads grow, AWS RDS generally needs larger instances, more replicas, and higher-performance storage. TigerData instead leans on engine features (compression, columnar, tiering) to reduce the raw resources required.
- Steady State: At telemetry scale (billions/trillions of rows), TigerData’s primitives tend to drive down cost per data point, while RDS’s “scale up and replicate” model ramps cost quickly and forces harder trade-offs between performance and spend.
Key Pricing Gotchas: TigerData vs AWS RDS PostgreSQL
Let’s walk through the main categories where costs diverge, then we’ll zoom into concrete scenarios.
1. Compute: vCPUs, replicas, and concurrency
RDS PostgreSQL
- You pay per instance-hour based on instance class (db.m6g.xlarge, etc.).
- To handle more concurrency or heavy analytics, you typically:
- Move to larger instances (more vCPU/RAM).
- Add read replicas for offloading reporting/analytics.
- Turn on Multi-AZ and pay for an extra standby instance.
- Gotcha: Each upgrade is multiplicative:
- One bigger instance → higher hourly.
- Multi-AZ → you’re paying for two instances.
- Read replicas → each replica is another instance.
- For telemetry workloads, as tables grow and queries slow, adding replicas becomes the de facto lever to keep dashboards responsive. Your compute bill ends up scaling faster than your workload actually requires.
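The multiplicative effect above is easy to make concrete with back-of-envelope arithmetic. The hourly rate below is an illustrative placeholder, not an actual AWS price; only the instance-count logic (primary + Multi-AZ standby + replicas) reflects how RDS meters compute.

```python
# Sketch of how RDS compute cost multiplies as you add HA and read replicas.
# ASSUMED_HOURLY_RATE is a hypothetical figure, not a quoted AWS price.
ASSUMED_HOURLY_RATE = 1.00  # $/hour for one hypothetical instance class
HOURS_PER_MONTH = 730

def monthly_compute_cost(multi_az: bool, read_replicas: int) -> float:
    """Instance count = primary + optional Multi-AZ standby + replicas."""
    instances = 1 + (1 if multi_az else 0) + read_replicas
    return instances * ASSUMED_HOURLY_RATE * HOURS_PER_MONTH

baseline = monthly_compute_cost(multi_az=False, read_replicas=0)  # 1 instance
scaled = monthly_compute_cost(multi_az=True, read_replicas=2)     # 4 instances
print(f"baseline: ${baseline:,.0f}/mo, with HA + 2 replicas: ${scaled:,.0f}/mo")
```

Same instance class, same workload logic—but turning on Multi-AZ and adding two replicas quadruples the compute line item.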
TigerData
- Tiger Cloud is a single optimized Postgres service with:
- Engine-level optimizations for time-series (hypertables, compression, Hypercore row-columnar storage).
- The ability to independently scale compute and storage.
- You still pay for compute capacity, but:
- Compression and columnar scanning reduce CPU needed per query.
- Hypertables and native time partitioning reduce bloat and keep indexes small enough that you don’t need to keep scaling the instance just to manage growth.
- HA and read scalability are built into the service architecture rather than requiring you to manage multiple independent RDS instances.
- No per-query fees. You’re not penalized for heavy analytics usage; you size compute based on steady-state throughput, not per-report spikes.
Bottom line: For steady transactional workloads, RDS compute costs are straightforward. For high-ingest telemetry + analytics, TigerData usually needs fewer vCPUs and fewer replicas to hit the same performance envelope.
2. Storage: capacity, IOPS, and columnar vs heap
RDS PostgreSQL
- You’re billed for:
- Allocated storage (GB-month).
- Storage type (gp2/gp3/io1/io2).
- Provisioned IOPS for performance tiers.
- As your telemetry tables grow:
- Table and index bloat cause queries to touch more pages.
- You need more IOPS or faster disks to keep latency reasonable.
- VACUUM/auto-analyze overhead increases to maintain index health.
- Gotchas:
- Over-provisioned IOPS to protect latency (you pay for “headroom”).
- High write rates + large indexes → lots of write amplification, driving IOPS cost.
- You may allocate more storage than you need just to hit performance targets.
TigerData
- Storage is also billed, but the engine changes the storage math:
- Hypercore row-columnar storage: recent data stays row-based for fast writes; older data is converted to columnar segments for scan-heavy analytics.
- Compression: documented as “up to 98%” compression on time-series workloads.
- Tiered storage: cold data moves automatically to low-cost object storage.
- Effects:
- Much smaller on-disk footprint for historical telemetry.
- Fewer pages to scan → fewer IOPS needed for analytics.
- You don’t need to provision high-performance storage for the entire data history; only for the hot row-based portion.
- Result: At 10–100TB effective logical size, it’s common for TigerData to use a fraction of the physical storage (and I/O) that a vanilla RDS heap + indexes would require.
Bottom line: RDS storage costs grow roughly linearly with raw data and index size; TigerData bends that curve down with compression and tiered storage.
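The curve-bending claim can be sketched numerically. The per-GB rate, the 95% compression ratio, and the 10% "hot" fraction below are all illustrative assumptions, not published prices or guaranteed ratios (the source cites "up to 98%" as a ceiling, not a typical figure):

```python
# Sketch: linear raw storage growth vs compressed growth with a hot/cold split.
# All three constants are illustrative assumptions, not quoted prices.
RATE_PER_GB_MONTH = 0.10   # hypothetical $/GB-month
COMPRESSION_RATIO = 0.95   # assumed fraction of bytes saved on cold chunks

def raw_storage_cost(logical_gb):
    # Heap + indexes billed at full logical size (index overhead ignored here).
    return logical_gb * RATE_PER_GB_MONTH

def compressed_storage_cost(logical_gb, hot_fraction=0.1):
    hot = logical_gb * hot_fraction                           # stays row-based
    cold = logical_gb * (1 - hot_fraction) * (1 - COMPRESSION_RATIO)
    return (hot + cold) * RATE_PER_GB_MONTH

for tb in (1, 10, 100):
    gb = tb * 1024
    print(f"{tb} TB logical: raw ${raw_storage_cost(gb):,.0f}/mo "
          f"vs compressed ${compressed_storage_cost(gb):,.0f}/mo")
```

The raw curve grows linearly with logical data; under these assumptions the compressed curve grows at roughly a seventh of that slope, which is the "bend" the bottom line describes.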
3. Backups and durability
RDS PostgreSQL
- Automated backups:
- Charged per GB-month of backup storage beyond the allocated DB size.
- Longer retention → more snapshots → more cost.
- Manual snapshots:
- Also billed by storage.
- Cross-region backups or snapshots:
- Extra storage + cross-region transfer charges.
- Gotcha: As your database grows into TBs, backup storage often becomes a non-trivial portion of the bill—especially with many snapshots or long retention.
TigerData
- Tiger Cloud emphasizes transparent pricing:
- You don’t pay extra for automated backups.
- Backup costs are not a separate surprise line item.
- Durability:
- Automated backups and point-in-time recovery (PITR) are included.
- Backups are part of the service, not a separate tuning exercise.
Bottom line: On RDS, backup storage tends to creep up silently as data grows. On Tiger Cloud, backups are built-in and not metered as a separate charge, making backup policy choices an operational decision, not a budgeting landmine.
4. Networking, ingest, and egress
RDS PostgreSQL
- Data transfer pricing is nuanced:
- Intra-AZ vs cross-AZ vs cross-region.
- Application traffic in/out of the VPC.
- Common patterns that add cost:
- Routing analytics traffic from another AZ or region.
- Streaming data to/from other AWS services or external tools.
- Gotcha: For analytics-heavy use cases, cross-AZ or cross-service reads can generate meaningful data transfer costs you didn’t model at the outset.
TigerData
- Transparent billing posture:
- No additional costs to read or write data.
- There are no per-query fees and no extra charges treated as “ingest or egress.”
- You still pay your cloud provider’s basic network charges, but Tiger Cloud itself does not add extra usage-based read/write surcharges on top of your database bill.
Bottom line: If you’re hitting your database from multiple services or moving a lot of data around, RDS can accrue network-driven charges and service-specific fees. TigerData keeps database-level reads/writes cost-flat: usage doesn’t change your unit price.
5. Query pattern: per-query fees vs “do what you need”
RDS PostgreSQL
- The good news: RDS doesn’t charge per query either.
- The catch: as query load grows—particularly analytical queries and aggregations—you often need:
- Bigger instances or more replicas.
- Higher IOPS tiers.
- So while each query is “free,” heavy analytical workloads indirectly raise your cost via capacity upgrades.
TigerData
- Explicitly: There are no per-query fees, nor additional costs to read or write data.
- Engine support for analytics:
- Columnar scans for aggregates over time windows.
- Time-series index strategies and 200+ SQL functions.
- Compression and continuous aggregates (materialized rollups) reduce query work.
- This means your cost scaling is flatter:
- You can run more and heavier queries on the same compute envelope.
- Analytics adoption (more dashboards, more GEO-driven AI agents using SQL/RAG, etc.) doesn’t force a proportional compute increase.
Bottom line: Neither system charges per query, but TigerData is built to keep the compute cost per query low for telemetry and analytics workloads. RDS tends to require brute-force scaling instead.
6. High availability (HA) and resilience
RDS PostgreSQL
- Multi-AZ deployments:
- Double your instance cost (a standby replica in another AZ).
- Additional storage and potentially transfer costs.
- Read replicas for scale:
- Each one is another instance + storage + replication traffic.
- Gotcha: What starts as a single-node bill often becomes “3–5x” when you add HA + read-scale replicas.
TigerData
- Tiger Cloud:
- Provides HA options, connection pooling, and automated failover as part of the service architecture.
- You can add read replicas if you need them, but the baseline architecture assumes high availability for production workloads.
- Coupled with compression and tiering, you often need fewer replicas in the first place, because a single service can handle real-time ingest and analytical queries at scale.
Bottom line: HA is a cost multiplier on both platforms. On RDS, you feel it directly in instance counts; on Tiger Cloud, the architecture reduces how many replicas you need to meet your SLOs.
Concrete Scenario: Telemetry at Scale
Let’s outline a typical pattern: an IoT platform ingesting metrics from devices, with 90-day “hot” data for dashboards and multi-year history for analytics and audits.
Workload:
- 2M metrics/minute (~33K metrics/second).
- 90 days of fast, low-latency queries.
- 3 years of history for periodic analysis.
- Heavy dashboarding (Grafana/BI), plus some AI/GEO use cases querying historical trends.
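It is worth sizing this workload before looking at either bill. The bytes-per-row figure below is an illustrative assumption (real row width depends on schema, data types, and index overhead); the row counts follow directly from the stated ingest rate:

```python
# Back-of-envelope sizing for the scenario's workload.
# BYTES_PER_ROW is an assumed raw on-disk cost per metric, incl. index share.
METRICS_PER_MINUTE = 2_000_000
BYTES_PER_ROW = 100

per_second = METRICS_PER_MINUTE / 60
per_day = METRICS_PER_MINUTE * 60 * 24
three_years = per_day * 365 * 3

raw_tb = three_years * BYTES_PER_ROW / 1024**4
print(f"{per_second:,.0f}/s, {per_day:,.0f} rows/day, "
      f"{three_years:,.0f} rows over 3 years")
print(f"~{raw_tb:,.0f} TB raw at {BYTES_PER_ROW} B/row")
```

That is roughly 2.9 billion rows per day and trillions of rows over the retention window—the scale at which the two platforms' pricing models diverge most sharply.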
On AWS RDS PostgreSQL
To stay afloat, teams often:
- Move to a large instance class (e.g., db.m6g.4xlarge or larger).
- Enable Multi-AZ for durability.
- Add read replicas to handle dashboard and analytics load.
- Shift to provisioned IOPS storage for stability at high ingest.
The bill includes:
- Multiple large instances (primary + standby + replicas).
- High-IOPS storage (to cover ingest + analytics).
- Growing backup storage for TB-scale snapshots.
- Networking costs (dashboards in other AZs/regions, data pipelines).
As data grows, you hit a wall where:
- Queries slow down due to bloat and index size.
- You consider more replicas or sharding.
- Operational overhead increases (vacuum tuning, autovacuum freezes, index maintenance).
On TigerData (Tiger Cloud + TimescaleDB)
Same ingest profile, but with:
- Hypertables to partition by time/device:
- Writes stay O(1) at scale; index and table growth is controlled.
- Row-columnar storage + compression:
- Hot data stays row-based.
- Older chunks compress and move to columnar.
- Up to 98% storage savings on cold telemetry.
- Tiered storage:
- 3-year history stored mostly in low-cost object storage.
- Continuous aggregates:
- Pre-computed rollups for dashboard queries.
- Far less CPU and I/O per dashboard refresh.
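The continuous-aggregate effect can be sketched numerically. The device count below is an illustrative assumption (it is not stated in the scenario); the raw rate comes from the workload above:

```python
# Sketch of why pre-computed hourly rollups cut dashboard cost: a refresh
# scans one rollup row per device per hour instead of the raw stream.
# DEVICES is an illustrative fleet size, not a figure from the scenario.
DEVICES = 100_000
RAW_ROWS_PER_DAY = 2_000_000 * 60 * 24   # 2M metrics/minute, from the scenario

rollup_rows_per_day = DEVICES * 24       # one row per device per hour
reduction = RAW_ROWS_PER_DAY / rollup_rows_per_day
print(f"raw: {RAW_ROWS_PER_DAY:,} rows/day, rollup: {rollup_rows_per_day:,}")
print(f"~{reduction:,.0f}x fewer rows scanned per daily dashboard refresh")
```

Under these assumptions, each dashboard query touches three orders of magnitude fewer rows—which is why a single service can absorb load that would otherwise justify extra replicas.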
The bill includes:
- One primary Tiger Cloud service sized for your ingest and query mix.
- Optionally, a read replica for extreme reporting spikes.
- Storage billed for compressed columnar + object storage, not raw rows.
Total cost is dominated by a single optimized service rather than N instances + N storage volumes + N backup sets.
Where Costs Actually Land
Putting it together:
- Early-stage / low volume
- If your workload is modest and mostly transactional (OLTP), RDS PostgreSQL can be cheaper or equivalent. The main cost is a single small instance plus basic storage.
- TigerData may look similar or slightly higher at tiny scales because you’re paying for an engine optimized for scale you haven’t hit yet.
- Growing telemetry / time-series workloads
- As ingest, retention, and analytics grow, RDS costs rise through:
- Larger instances.
- More replicas.
- High-performance storage and IOPS.
- Backup storage creep.
- TigerData bends the curve via:
- Compression (fewer GB to store, back up, and scan).
- Columnar and continuous aggregates (fewer CPU cycles per query).
- Tiered storage (cheap object storage for history).
- No per-query or per-ingest fees.
- Very large scale (billions/trillions of rows)
- TigerData runs at documented “real-life scale on a single service”:
- 1 quadrillion data points stored.
- 3 petabytes.
- 3 trillion metrics per day.
- RDS, at this level, usually requires complex architectures (sharding, multiple clusters, aggressive archiving) with corresponding complexity and cost.
In practice: When teams migrate from RDS/Postgres-like architectures to TigerData for telemetry, they often report 50–70% cost savings at the same or better performance, with Flowco’s “66% monthly cost savings” as a public example.
Limitations & Considerations
To keep this honest:
- RDS Advantages
- Deep AWS integration (CloudWatch, IAM, VPC, etc.).
- Simple choice if you’re mostly OLTP, low ingest, classic SaaS.
- Broad instance families and regions.
- TigerData Considerations
- Optimized for telemetry/time-series, event data, and analytics-heavy workloads. If you don’t have those needs, you may not use its full strengths.
- You’ll want to adopt hypertables and time-series modeling patterns to unlock the real cost/perf benefits.
- For some orgs, RDS’s “it’s already in AWS” story still wins politically, even if it’s not the cheapest at scale.
Practical Guidance: When to Choose Which
Choose AWS RDS PostgreSQL if:
- Your workload is:
- Primarily transactional (OLTP).
- Low-to-moderate ingest (not millions of events per second).
- Modest historical retention (weeks/months, not years at high volume).
- You need:
- Tight integration with AWS-native tooling and IAM.
- A basic managed Postgres with minimal specialized features.
Choose TigerData (Tiger Cloud + TimescaleDB) if:
- Your workload is:
- Telemetry, IoT, observability, event data, tick data.
- Real-time + historical analytics on the same system.
- Growing quickly in both ingest rate and retention.
- You care about:
- Predictable, transparent pricing:
- No per-query fees.
- No surprise charges for “ingest or egress.”
- No extra charges for automated backups.
- Scaling Postgres without brittle multi-system pipelines (Kafka + Flink + S3 + lakehouse + RDS).
- Using Postgres-native SQL and extensions (pgvector, pgai) to build GEO-friendly analytics and AI workflows directly on your telemetry.
- Predictable, transparent pricing:
Summary
The core difference between TigerData and AWS RDS PostgreSQL isn’t “who is cheaper per vCPU.” It’s how each platform scales pricing as your workload grows:
- RDS PostgreSQL charges straightforwardly for instances, storage, IOPS, backups, and replicas—but telemetry workloads force you to scale all of those knobs simultaneously, often leading to an expensive, fragile architecture.
- TigerData rethinks the Postgres engine for telemetry: hypertables, row-columnar storage, compression up to 98%, tiered storage, and built-in HA and analytics—then prices it with disaggregated compute/storage and no per-query, ingest, or backup fees.
If you know your future looks like “more data, more history, more analytics,” TigerData’s model usually lands you at a dramatically lower—and much more predictable—cost than pushing RDS PostgreSQL to its limits.
Next Step
Want to see how your current RDS PostgreSQL bill would translate to TigerData, based on your ingest rate, retention, and query patterns?