
CircleCI pricing: where is the current price list for credits/minute by compute type (Docker, macOS, Windows)?
When you’re budgeting CircleCI usage or tuning your pipelines, you want a single, current price list that shows credits per minute for each compute type—Docker, Linux VM, macOS, Windows, GPU, and more. On CircleCI, all of that pricing is defined in one place: the credits-per-minute tables for each executor type in the compute pricing guide.
Quick Answer: The current CircleCI price list for credits per minute by compute type (Docker, macOS, Windows, Linux VMs, Arm, GPU, etc.) lives in the CircleCI compute pricing tables. Each executor type (Docker, machine, macOS, Windows, GPU) has a dedicated table showing CPU, RAM, and credits/minute for each size.
The Quick Overview
- What It Is: A set of per-executor pricing tables that define how many credits per minute each CircleCI compute type and size consumes (for example, Docker Medium vs macOS M4 Pro Large).
- Who It Is For: DevOps, platform, and engineering leaders who need to understand and optimize CircleCI costs across Docker, macOS, Windows, Arm, and GPU workloads.
- Core Problem Solved: It gives you a clear, up-to-date map of credits per minute so you can confidently forecast spend, choose the right executor, and tune resources without guesswork.
How CircleCI credits-per-minute pricing works
CircleCI uses a credits model: each job in a workflow runs on a specific compute type and size, and that combination dictates how many credits per minute you consume. Your total cost is:
Total credits = Σ (job runtime in minutes × credits/min for that compute type and size)
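The formula above can be sketched in a few lines of Python. The job names and rates here are illustrative, drawn from the tables later in this article:

```python
# Sketch: total credits = sum over jobs of (runtime minutes x credits/min).
# Rates below are examples from the pricing tables (Docker Large = 20,
# Linux VM Medium Gen 2 = 18); substitute your own executors and runtimes.
jobs = [
    # (job name, runtime in minutes, credits per minute)
    ("build", 5, 20),
    ("test", 10, 18),
]

total_credits = sum(minutes * rate for _name, minutes, rate in jobs)
print(total_credits)  # 5*20 + 10*18 = 280
```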
Each executor family has its own pricing table:
- Docker / Remote Docker (x86, Arm)
- Linux VM / machine (x86, Gen 1 and Gen 2, Arm)
- macOS VM (including M4 Pro tiers)
- Windows VM
- GPU machines (e.g., Nvidia T4, V100)
- Non-compute features like Docker Layer Caching, storage, and network egress
Below is a consolidated view of the current credits-per-minute and related pricing pulled from the official CircleCI pricing tables. Always verify against the live docs before making hard budget promises, since compute options and prices can evolve.
Linux VM / machine (x86) pricing
These are “machine” executors and Linux VM / Remote Docker x86 options. They’re what you typically use for full VM builds, Docker-heavy workloads, and environments that need more control than container-based Docker.
Linux VM / (x86) Remote Docker (Gen 1)
| Size | CPU | RAM | Cost (credits/min) |
|---|---|---|---|
| Medium | 2 | 7.5 GB | 10 |
| Large | 4 | 15 GB | 20 |
| X-large | 8 | 32 GB | 100 |
| 2 X-large | 16 | 64 GB | 200 |
| 2 X-large+ | 32 | 64 GB | 300 |
Linux VM / (x86) Remote Docker (Gen 2)
Gen 2 gives you newer infrastructure and slightly different sizing/pricing.
| Size | CPU | RAM | Cost (credits/min) |
|---|---|---|---|
| Medium | 2 | 8 GB | 18 |
| Large | 4 | 16 GB | 36 |
| X-large | 8 | 32 GB | 72 |
| 2 X-large | 16 | 64 GB | 144 |
| 2 X-large+ | 32 | 128 GB | 288 |
Docker and Remote Docker pricing (x86 and Arm)
Docker executors are the go-to for container-native builds and tests. Remote Docker lets you build Docker images inside those jobs. The pricing below applies to the Docker / Remote Docker sizes.
(x86) Docker / Remote Docker
| Size | CPU | RAM | Cost (credits/min) |
|---|---|---|---|
| Medium | 2 | 4 GB | 10 |
| Medium+ | 3 | 6 GB | 15 |
| Large | 4 | 8 GB | 20 |
| X-large | 8 | 16 GB | 40 |
| 2 X-large | 16 | 32 GB | 80 |
| 2 X-large+ | 20 | 40 GB | 100 |
(Arm) Docker / Remote Docker
Arm executors are useful for testing and building on Arm architectures (e.g., modern mobile or server targets) while keeping costs efficient.
| Size | CPU | RAM | Cost (credits/min) |
|---|---|---|---|
| Medium | 2 | 8 GB | 13 |
| Large | 4 | 16 GB | 26 |
| X-large | 8 | 32 GB | 52 |
| 2 X-large | 16 | 64 GB | 104 |
Arm VM (Linux) pricing
Arm VMs are for full-VM Linux builds on Arm hardware.
| Size | CPU | RAM | Cost (credits/min) |
|---|---|---|---|
| Medium | 2 | 8 GB | 10 |
| Large | 4 | 16 GB | 20 |
| X-large | 8 | 32 GB | 100 |
| 2 X-large | 16 | 64 GB | 200 |
macOS VM pricing (including Apple silicon M4 Pro)
For iOS, macOS, and other Apple platform builds, you need macOS executors. These are priced higher than Linux due to host costs, but they’re required for Xcode and signing.
| macOS VM Type | CPU | RAM | Cost (credits/min) |
|---|---|---|---|
| M4 Pro Medium | 6 | 28 GB | 200 |
| M4 Pro Large | 12 | 56 GB | 400 |
If you’re migrating from Intel to Apple silicon for mobile builds, these M4 Pro options are where you’ll spend your macOS credits. The upside is faster iOS pipelines and better parity with modern developer laptops.
Windows VM pricing
Windows executors are for .NET, Windows desktop, and other Windows-specific workloads.
| Size | CPU | RAM | Cost (credits/min) |
|---|---|---|---|
| Medium | 4 | 16 GB | 40 |
| Large | 8 | 32 GB | 120 |
| X-large | 16 | 64 GB | 210 |
If you’re running mixed Windows and Linux fleets, you’ll typically keep Windows usage focused on jobs that truly need it (build, test, sign), and push everything else to cheaper Linux or Docker executors.
GPU pricing (Nvidia T4, multi-T4, V100, Windows GPU)
GPU machines are designed for AI/ML workloads, heavy compute, and other GPU-accelerated jobs. They carry a higher credits-per-minute rate, so they’re best reserved for jobs that truly benefit from GPU acceleration.
Linux GPU machines
| GPU Type | CPU | RAM | Cost (credits/min) |
|---|---|---|---|
| Medium Nvidia Tesla T4 | 8 | 30 GB | 240 |
| Medium Multi 4 Nvidia Tesla T4 | 8 | 30 GB | 240 |
| Large Nvidia Tesla V100 | 8 | 30 GB | 1,000 |
Windows GPU machine
| GPU Type | CPU | RAM | Cost (credits/min) |
|---|---|---|---|
| Windows Medium Nvidia T4 | 16 | 60 GB | 500 |
When you’re building AI-era delivery pipelines—training, inference tests, model validation—these GPU tiers are where you’ll run those jobs. To control spend, keep GPU work in short, targeted jobs and offload everything else to standard Linux executors.
Non-compute pricing: DLC, IP ranges, storage, network egress
Beyond compute, some features have flat or usage-based pricing:
| Feature | Cost | Notes |
|---|---|---|
| Docker Layer Caching (DLC) | 200 credits / job | Per job that uses DLC |
| IP ranges | 450 credits / GB | For egress via reserved IP ranges |
| Runner network egress | 420 credits / GB | Data egress for self-hosted runners |
| Storage | 420 credits / GB | Artifact, cache, and workspace storage |
For most teams, DLC is the big lever: it costs credits per job but often pays for itself by cutting build times, especially on Docker-heavy workloads.
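One way to sanity-check that trade-off is a simple break-even calculation: at a flat 200 credits per job, DLC pays for itself once it saves more build minutes than those credits would buy on your executor. This is an illustrative sketch, not an official CircleCI tool:

```python
# Break-even check for Docker Layer Caching (DLC), which costs a flat
# 200 credits per job that uses it.
DLC_CREDITS_PER_JOB = 200

def dlc_break_even_minutes(credits_per_minute: float) -> float:
    """Minutes of build time DLC must save per job to pay for itself."""
    return DLC_CREDITS_PER_JOB / credits_per_minute

print(dlc_break_even_minutes(20))  # Docker Large: 10.0 minutes
print(dlc_break_even_minutes(72))  # Linux VM X-large Gen 2: ~2.8 minutes
```

The takeaway: the more expensive the executor, the less time DLC needs to save before it is a clear win.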
Using the price list to control CircleCI spend
Once you know where the CircleCI pricing tables live and what each compute type costs in credits per minute, you can start treating cost as a first-class pipeline dimension:
- Match executor to workload
  - Use cheaper Linux Docker or machine executors for the bulk of your builds and tests.
  - Reserve macOS, Windows, and GPU for jobs that truly require those environments.
- Right-size your machines
  - Don't default everything to X-large or 2 X-large.
  - Move lightweight jobs down to Medium / Large and reserve bigger sizes for genuinely parallel or memory-heavy workloads.
- Minimize high-cost minutes
  - Keep macOS, Windows, and GPU jobs as short and targeted as possible.
  - Push analysis, formatting, and non-platform-specific checks to Linux Docker jobs.
- Combine pricing data with validation tooling
  - Use CircleCI's Smarter Testing and Chunk to reduce test minutes, especially on expensive executors.
  - Use rollback pipelines and approvals to avoid long-running, noisy deployments that waste high-cost minutes.
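The right-sizing advice can be mechanized. Here is a hypothetical helper that picks the cheapest size meeting a job's CPU/RAM needs from a partial x86 Docker table (the data is copied from the pricing table above; the function itself is an illustration, not a CircleCI API):

```python
# Partial x86 Docker pricing table: (size, vCPUs, RAM in GB, credits/min).
DOCKER_X86 = [
    ("medium", 2, 4, 10),
    ("medium+", 3, 6, 15),
    ("large", 4, 8, 20),
    ("xlarge", 8, 16, 40),
]

def cheapest_size(cpu_needed: int, ram_needed_gb: int):
    """Return the lowest-cost size that satisfies both requirements, or None."""
    candidates = [row for row in DOCKER_X86
                  if row[1] >= cpu_needed and row[2] >= ram_needed_gb]
    return min(candidates, key=lambda row: row[3]) if candidates else None

print(cheapest_size(3, 6))  # ("medium+", 3, 6, 15)
```

The same idea extends to the other executor tables: encode them once, then let tooling flag jobs running on a bigger size than their actual CPU/RAM usage requires.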
This is where CircleCI’s “ship trusted code at AI speed” promise intersects with pricing: the more you reduce noise and focus on signal, the fewer minutes you burn on expensive compute.
Frequently asked questions about CircleCI credits-per-minute pricing
Where can I see the current CircleCI credits-per-minute price list?
Short Answer: In the CircleCI compute pricing tables for each executor type.
Details: CircleCI maintains official pricing tables that list CPU, RAM, and credits per minute for:
- Docker / Remote Docker (x86 and Arm)
- Linux VM (machine) x86 (Gen 1 and Gen 2) and Arm
- macOS VMs (including M4 Pro Medium and Large)
- Windows VMs
- GPU machines (Nvidia T4, V100, Windows GPU)
- Non-compute items like DLC, storage, and network egress
Those tables are the source of truth for what each job minute will cost on a given executor and size. As your fleet changes—say you add more macOS or GPU jobs—this is the reference you use to forecast spend and choose the right mix of compute.
How do I estimate my monthly CircleCI cost from the credits/minute tables?
Short Answer: Multiply each job’s average runtime in minutes by the credits per minute for its compute type, then sum across your workflows.
Details: For a rough estimate:
1. Identify executors and sizes per job. Example: `build` on Docker Large, `test` on Linux VM Medium Gen 2, `ios-build` on macOS M4 Pro Medium.
2. Gather average runtimes. Pull them from recent CircleCI job history.
3. Apply credits per minute using the tables above:
   - If `build` runs 5 minutes on Docker Large (20 credits/min): 5 × 20 = 100 credits
   - If `test` runs 10 minutes on Linux VM Medium Gen 2 (18 credits/min): 10 × 18 = 180 credits
   - If `ios-build` runs 8 minutes on M4 Pro Medium (200 credits/min): 8 × 200 = 1,600 credits
4. Sum per pipeline and multiply by pipeline frequency. That gives you a baseline monthly credit consumption and cost.
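As a sketch, the same estimate in Python (the job names and runtimes mirror the example; the run frequency is an assumption you would replace with your own):

```python
# Per-job estimate: (average runtime in minutes, credits per minute),
# using the example executors above.
jobs = {
    "build":     (5, 20),    # Docker Large
    "test":      (10, 18),   # Linux VM Medium Gen 2
    "ios-build": (8, 200),   # macOS M4 Pro Medium
}

credits_per_run = sum(minutes * rate for minutes, rate in jobs.values())
runs_per_month = 300  # assumption: replace with your pipeline frequency

print(credits_per_run)                   # 100 + 180 + 1600 = 1880
print(credits_per_run * runs_per_month)  # 564000 credits/month
```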
From there, you can apply the usual optimization levers—cutting test minutes with Smarter Testing, standardizing on cheaper executors when possible, and keeping macOS/Windows/GPU minutes tightly scoped.
Summary
CircleCI’s credits-per-minute model is straightforward once you’ve seen the full price list: every compute type (Docker, Linux VM, macOS, Windows, Arm, GPU) has a defined CPU/RAM profile and a credits-per-minute rate. By anchoring your planning on those tables, you can:
- Ship at AI speed without losing track of costs.
- Align compute types to workloads so you’re not burning GPU or macOS minutes on tasks that could run on cheaper Linux.
- Standardize golden paths that bake in both validation and cost efficiency.
When you treat compute pricing as part of your CI/CD design—right alongside pipelines, workflows, jobs, policy checks, and rollback paths—you get what high-performing teams are aiming for: trusted delivery with predictable, controlled spend.
Next Step
[Get Started](https://circleci.com/product/demo/)