CircleCI pricing: where is the current price list for credits/minute by compute type (Docker, macOS, Windows)?

When you’re budgeting CircleCI usage or tuning your pipelines, you want a single, current price list that shows credits per minute for each compute type—Docker, Linux VM, macOS, Windows, GPU, and more. On CircleCI, all of that pricing is defined in one place: the credits-per-minute tables for each executor type in the compute pricing guide.

Quick Answer: The current CircleCI price list for credits per minute by compute type (Docker, macOS, Windows, Linux VMs, Arm, GPU, etc.) lives in the CircleCI compute pricing tables. Each executor type (Docker, machine, macOS, Windows, GPU) has a dedicated table showing CPU, RAM, and credits/minute for each size.


The Quick Overview

  • What It Is: A set of per-executor pricing tables that define how many credits per minute each CircleCI compute type and size consumes (for example, Docker Medium vs macOS M4 Pro Large).
  • Who It Is For: DevOps, platform, and engineering leaders who need to understand and optimize CircleCI costs across Docker, macOS, Windows, Arm, and GPU workloads.
  • Core Problem Solved: It gives you a clear, up-to-date map of credits per minute so you can confidently forecast spend, choose the right executor, and tune resources without guesswork.

How CircleCI credits-per-minute pricing works

CircleCI uses a credits model: each job in a workflow runs on a specific compute type and size, and that combination dictates how many credits per minute you consume. Your total cost is:

Total credits = Σ (job runtime in minutes × credits/min for that compute type and size)
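As a sketch, that formula is just a sum of runtime-times-rate products. The runtimes here are illustrative; the rates come from the tables below:

```python
def total_credits(jobs):
    # jobs: list of (runtime_minutes, credits_per_minute) pairs
    return sum(minutes * rate for minutes, rate in jobs)

# A 5-minute job on Docker Large (20 credits/min) plus a
# 10-minute job on Linux VM Medium Gen 2 (18 credits/min):
total_credits([(5, 20), (10, 18)])  # -> 280 credits
```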

Each executor family has its own pricing table:

  • Docker / Remote Docker (x86, Arm)
  • Linux VM / machine (x86, Gen 1 and Gen 2, Arm)
  • macOS VM (including M4 Pro tiers)
  • Windows VM
  • GPU machines (e.g., Nvidia T4, V100)
  • Non-compute features like Docker Layer Caching, storage, and network egress

Below is a consolidated view of the current credits-per-minute and related pricing pulled from the official CircleCI pricing tables. Always verify against the live docs before making hard budget promises, since compute options and prices can evolve.


Linux VM / machine (x86) pricing

These are “machine” executors and Linux VM / Remote Docker x86 options. They’re what you typically use for full VM builds, Docker-heavy workloads, and environments that need more control than container-based Docker.

Linux VM / (x86) Remote Docker (Gen 1)

| Size | CPU | RAM | Cost (credits/min) |
| --- | --- | --- | --- |
| Medium | 2 | 7.5 GB | 10 |
| Large | 4 | 15 GB | 20 |
| X-large | 8 | 32 GB | 100 |
| 2 X-large | 16 | 64 GB | 200 |
| 2 X-large+ | 32 | 64 GB | 300 |

Linux VM / (x86) Remote Docker (Gen 2)

Gen 2 gives you newer infrastructure and slightly different sizing/pricing.

| Size | CPU | RAM | Cost (credits/min) |
| --- | --- | --- | --- |
| Medium | 2 | 8 GB | 18 |
| Large | 4 | 16 GB | 36 |
| X-large | 8 | 32 GB | 72 |
| 2 X-large | 16 | 64 GB | 144 |
| 2 X-large+ | 32 | 128 GB | 288 |

Docker and Remote Docker pricing (x86 and Arm)

Docker executors are the go-to for container-native builds and tests. Remote Docker lets you build Docker images inside those jobs. The pricing below applies to the Docker / Remote Docker sizes.

(x86) Docker / Remote Docker

| Size | CPU | RAM | Cost (credits/min) |
| --- | --- | --- | --- |
| Medium | 2 | 4 GB | 10 |
| Medium+ | 3 | 6 GB | 15 |
| Large | 4 | 8 GB | 20 |
| X-large | 8 | 16 GB | 40 |
| 2 X-large | 16 | 32 GB | 80 |
| 2 X-large+ | 20 | 40 GB | 100 |

(Arm) Docker / Remote Docker

Arm executors are useful for testing and building on Arm architectures (e.g., modern mobile or server targets) while keeping costs efficient.

| Size | CPU | RAM | Cost (credits/min) |
| --- | --- | --- | --- |
| Medium | 2 | 8 GB | 13 |
| Large | 4 | 16 GB | 26 |
| X-large | 8 | 32 GB | 52 |
| 2 X-large | 16 | 64 GB | 104 |

Arm VM (Linux) pricing

Arm VMs are for full-VM Linux builds on Arm hardware.

| Size | CPU | RAM | Cost (credits/min) |
| --- | --- | --- | --- |
| Medium | 2 | 8 GB | 10 |
| Large | 4 | 16 GB | 20 |
| X-large | 8 | 32 GB | 100 |
| 2 X-large | 16 | 64 GB | 200 |

macOS VM pricing (including Apple silicon M4 Pro)

For iOS, macOS, and other Apple platform builds, you need macOS executors. These are priced higher than Linux due to host costs, but they’re required for Xcode and signing.

| macOS VM Type | CPU | RAM | Cost (credits/min) |
| --- | --- | --- | --- |
| M4 Pro Medium | 6 | 28 GB | 200 |
| M4 Pro Large | 12 | 56 GB | 400 |

If you’re migrating from Intel to Apple silicon for mobile builds, these M4 Pro options are where you’ll spend your macOS credits. The upside is faster iOS pipelines and better parity with modern developer laptops.


Windows VM pricing

Windows executors are for .NET, Windows desktop, and other Windows-specific workloads.

| Size | CPU | RAM | Cost (credits/min) |
| --- | --- | --- | --- |
| Medium | 4 | 16 GB | 40 |
| Large | 8 | 32 GB | 120 |
| X-large | 16 | 64 GB | 210 |

If you’re running mixed Windows and Linux fleets, you’ll typically keep Windows usage focused on jobs that truly need it (build, test, sign), and push everything else to cheaper Linux or Docker executors.


GPU pricing (Nvidia T4, multi-T4, V100, Windows GPU)

GPU machines are designed for AI/ML workloads, heavy compute, and other GPU-accelerated jobs. They carry a higher credits-per-minute rate, so they’re best reserved for jobs that truly benefit from GPU acceleration.

Linux GPU machines

| GPU Type | CPU | RAM | Cost (credits/min) |
| --- | --- | --- | --- |
| Medium Nvidia Tesla T4 | 8 | 30 GB | 240 |
| Medium Multi 4 Nvidia Tesla T4 | 8 | 30 GB | 240 |
| Large Nvidia Tesla V100 | 8 | 30 GB | 1,000 |

Windows GPU machine

| GPU Type | CPU | RAM | Cost (credits/min) |
| --- | --- | --- | --- |
| Windows Medium Nvidia T4 | 16 | 60 GB | 500 |

When you’re building AI-era delivery pipelines—training, inference tests, model validation—these GPU tiers are where you’ll run those jobs. To control spend, keep GPU work in short, targeted jobs and offload everything else to standard Linux executors.


Non-compute pricing: DLC, IP ranges, storage, network egress

Beyond compute, some features have flat or usage-based pricing:

| Feature | Cost | Notes |
| --- | --- | --- |
| Docker Layer Caching (DLC) | 200 credits / job | Per job that uses DLC |
| IP ranges | 450 credits / GB | For egress via reserved IP ranges |
| Runner network egress | 420 credits / GB | Data egress for self-hosted runners |
| Storage | 420 credits / GB | Artifact, cache, and workspace storage |

For most teams, DLC is the big lever: it costs credits per job but often pays for itself by cutting build times, especially on Docker-heavy workloads.
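One way to size that trade-off: DLC pays for itself whenever it saves more runtime credits than its flat 200-credit fee. A minimal sketch, using executor rates from the tables above:

```python
DLC_COST_PER_JOB = 200  # flat DLC fee, credits per job (from the table above)

def dlc_break_even_minutes(credits_per_min):
    # Minutes of runtime DLC must shave off a job to pay for itself.
    return DLC_COST_PER_JOB / credits_per_min

dlc_break_even_minutes(20)   # Docker Large: 10.0 minutes
dlc_break_even_minutes(100)  # x86 Docker 2 X-large+: 2.0 minutes
```

On cheap executors DLC needs to save substantial time; on expensive ones, even a couple of cached minutes per job covers the fee.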


Using the price list to control CircleCI spend

Once you know where the CircleCI pricing tables live and what each compute type costs in credits per minute, you can start treating cost as a first-class pipeline dimension:

  1. Match executor to workload

    • Use cheaper Linux Docker or machine executors for the bulk of your builds and tests.
    • Reserve macOS, Windows, and GPU for jobs that truly require those environments.
  2. Right-size your machines

    • Don’t default everything to X-large or 2 X-large.
    • Move lightweight jobs down to Medium / Large and reserve bigger sizes for genuinely parallel or memory-heavy workloads.
  3. Minimize high-cost minutes

    • Keep macOS, Windows, and GPU jobs as short and targeted as possible.
    • Push analysis, formatting, and non-platform-specific checks to Linux Docker jobs.
  4. Combine pricing data with validation tooling

    • Use CircleCI’s Smarter Testing and Chunk to reduce test minutes, especially on expensive executors.
    • Use rollback pipelines and approvals to avoid long-running, noisy deployments that waste high-cost minutes.
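For the right-sizing step in particular, a quick check is whether a larger class actually lowers total credits: doubling the credits/min rate only pays off if it cuts runtime by more than half. A sketch with illustrative runtimes (rates from the Docker table above):

```python
def upgrade_saves_credits(small_rate, small_min, large_rate, large_min):
    # True if the larger resource class costs fewer total credits.
    return large_min * large_rate < small_min * small_rate

# Docker Large (20 credits/min) vs X-large (40 credits/min),
# for a job that takes 12 minutes on Large:
upgrade_saves_credits(20, 12, 40, 5)  # True  (200 < 240 credits)
upgrade_saves_credits(20, 12, 40, 7)  # False (280 > 240 credits)
```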

This is where CircleCI’s “ship trusted code at AI speed” promise intersects with pricing: the more you reduce noise and focus on signal, the fewer minutes you burn on expensive compute.


Frequently asked questions about CircleCI credits-per-minute pricing

Where can I see the current CircleCI credits-per-minute price list?

Short Answer: In the CircleCI compute pricing tables for each executor type.

Details: CircleCI maintains official pricing tables that list CPU, RAM, and credits per minute for:

  • Docker / Remote Docker (x86 and Arm)
  • Linux VM (machine) x86 (Gen 1 and Gen 2) and Arm
  • macOS VMs (including M4 Pro Medium and Large)
  • Windows VMs
  • GPU machines (Nvidia T4, V100, Windows GPU)
  • Non-compute items like DLC, storage, and network egress

Those tables are the source of truth for what each job minute will cost on a given executor and size. As your fleet changes—say you add more macOS or GPU jobs—this is the reference you use to forecast spend and choose the right mix of compute.

How do I estimate my monthly CircleCI cost from the credits/minute tables?

Short Answer: Multiply each job’s average runtime in minutes by the credits per minute for its compute type, then sum across your workflows.

Details: For a rough estimate:

  1. Identify executors and sizes per job
    Example: build on Docker Large, test on Linux VM Medium Gen 2, ios-build on macOS M4 Pro Medium.

  2. Gather average runtimes
    Pull from recent CircleCI job history.

  3. Apply credits-per-minute using the tables above.

    • If build runs 5 minutes on Docker Large (20 credits/min): 5 × 20 = 100 credits
    • If test runs 10 minutes on Linux VM Medium Gen 2 (18 credits/min): 10 × 18 = 180 credits
    • If ios-build runs 8 minutes on M4 Pro Medium (200 credits/min): 8 × 200 = 1,600 credits
  4. Sum per pipeline and multiply by pipeline frequency
    That gives you a baseline monthly credit consumption and cost.
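The four steps above can be sketched end to end. The runtimes and the ~300 runs/month figure are assumptions for illustration; the rates come from the tables above:

```python
# (job name, avg runtime in minutes, credits/min for its executor)
PIPELINE = [
    ("build", 5, 20),       # Docker Large
    ("test", 10, 18),       # Linux VM Medium Gen 2
    ("ios-build", 8, 200),  # macOS M4 Pro Medium
]

credits_per_run = sum(minutes * rate for _, minutes, rate in PIPELINE)
print(credits_per_run)        # 1880 (100 + 180 + 1,600)
print(credits_per_run * 300)  # 564000 credits at ~300 runs/month
```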

From there, you can apply the usual optimization levers—cutting test minutes with Smarter Testing, standardizing on cheaper executors when possible, and keeping macOS/Windows/GPU minutes tightly scoped.


Summary

CircleCI’s credits-per-minute model is straightforward once you’ve seen the full price list: every compute type (Docker, Linux VM, macOS, Windows, Arm, GPU) has a defined CPU/RAM profile and a credits-per-minute rate. By anchoring your planning on those tables, you can:

  • Ship at AI speed without losing track of costs.
  • Align compute types to workloads so you’re not burning GPU or macOS minutes on tasks that could run on cheaper Linux.
  • Standardize golden paths that bake in both validation and cost efficiency.

When you treat compute pricing as part of your CI/CD design—right alongside pipelines, workflows, jobs, policy checks, and rollback paths—you get what high-performing teams are aiming for: trusted delivery with predictable, controlled spend.


Next Step

Get Started: https://circleci.com/product/demo/