
How do we set up CircleCI self-hosted runners in our VPC and route only specific jobs/workflows to them?
Most teams reach for self-hosted runners when cloud-only CI/CD hits a wall: private VPC services, strict data residency, or heavyweight builds that belong on your own hardware. The good news is you don’t have to move everything. With CircleCI, you can run specific jobs and workflows on self-hosted runners in your VPC while keeping the rest in CircleCI cloud, so you maintain control where it matters and keep “no-think” CI/CD everywhere else.
Quick Answer: CircleCI self-hosted runners let you execute selected jobs on your own infrastructure (inside your VPC) while other jobs stay on CircleCI cloud. You control routing in your `.circleci/config.yml` by assigning specific jobs and workflows to runner resource classes.
The Quick Overview
- What It Is: A way to run CircleCI jobs on your own compute (VMs, containers, Kubernetes, on-prem or cloud VPC) instead of—or in addition to—CircleCI’s hosted executors.
- Who It Is For: Platform and DevOps teams that need private-network access, compliance control, or specialized hardware (e.g., GPU, custom toolchains) while still using CircleCI pipelines, workflows, and approvals.
- Core Problem Solved: You can keep sensitive workloads and VPC-only dependencies local, route just those jobs to your self-hosted runners, and leave the rest of your pipeline on CircleCI cloud for maximum speed and minimal babysitting.
How It Works
CircleCI self-hosted runners act as trusted workers that poll CircleCI for work, then execute jobs inside your VPC using your network, images, and tooling. The CircleCI control plane still orchestrates pipelines, workflows, approvals, and policy checks, but the actual build/test/deploy steps for selected jobs run on your infrastructure.
At a high level:
- **Provision & register runners in your VPC:** Install the runner agent on VMs or containers inside your VPC, register them to a `resource_class`, and apply any network/security controls you need.
- **Wire runners into your CircleCI config:** In `.circleci/config.yml`, target specific jobs (or entire workflows) at that `resource_class`. Those jobs will be pulled and executed only by your self-hosted runners.
- **Control which work runs where:** Use separate resource classes and workflows to send sensitive or VPC-only jobs to self-hosted runners, while everything else continues to run on CircleCI cloud executors.
1. Provision runners in your VPC
You can deploy self-hosted runners in any environment that can reach CircleCI’s APIs:
- Private subnets in your cloud VPC behind NAT
- On-premise data centers connected via VPN/Direct Connect
- Kubernetes clusters (using container runners)
- Dedicated GPU/Arm nodes when you need specialized hardware
Core steps:
- **Choose runner type**
  - Machine runner: full VM-level control; great for custom OS, Docker-in-Docker, and heavyweight builds.
  - Container runner: runs jobs as containers on your cluster; ideal when you already orchestrate compute via Kubernetes.
  - Arm/GPU variants: when you need Arm (e.g., AWS Graviton2) or GPU workloads but want to keep everything self-managed in your VPC.
- **Create a resource class**
  - In the CircleCI UI (project- or org-level settings), create a self-hosted runner resource class, for example `org/vpc-runner` or `org/vpc-build`.
  - The resource class is the "routing key" you'll use in your config to send jobs to that runner pool.
- **Install the runner agent**
  - Download and install the CircleCI runner binary or container image on each node.
  - Configure it with your runner token and the resource class name.
  - Ensure outbound connectivity to CircleCI: HTTPS (typically port 443) to CircleCI's control plane, plus optional egress to external package registries, artifact stores, etc.
- **Lock down network and OS**
  - Place runner hosts in private subnets; allow outbound-only traffic through NAT.
  - Restrict inbound access to admin/ops IPs or via bastion/SSM.
  - Use your standard OS hardening: patching, disk encryption, access control, monitoring.
Once registered, your runner nodes will appear in the CircleCI UI and start polling for jobs that match their `resource_class`.
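As a concrete sketch, the machine-runner agent configuration typically lives in a small YAML file (the field names below follow CircleCI's launch-agent config format; the token, runner name, and file path are placeholders you would replace with your own values):

```yaml
# Example launch-agent config (commonly /etc/circleci-launch-agent/launch-agent-config.yaml;
# check your install method's docs for the exact path).
api:
  auth_token: "YOUR_RUNNER_RESOURCE_CLASS_TOKEN"   # token issued when you created the resource class
runner:
  name: "vpc-runner-01"                            # display name for this node in the CircleCI UI
  working_directory: "/var/opt/circleci/workdir"   # where jobs check out and build
  cleanup_working_directory: true                  # wipe the workdir between jobs
```

The `auth_token` binds this node to the resource class you created, which is what makes `resource_class`-based job routing work.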
2. Route specific jobs to self-hosted runners
Routing is controlled entirely in `.circleci/config.yml`. You don't move entire pipelines by default; you opt in job by job.
Example: Single job on VPC runner
```yaml
version: 2.1

jobs:
  build-in-vpc:
    machine: true
    resource_class: my-org/vpc-runner   # self-hosted runner in your VPC
    steps:
      - checkout
      - run: echo "Running inside VPC on self-hosted runner"
      - run: ./scripts/build.sh

  build-on-cloud:
    docker:
      - image: cimg/base:stable         # CircleCI cloud executor
    steps:
      - checkout
      - run: echo "Running on CircleCI cloud"
      - run: ./scripts/lint.sh

workflows:
  vpc-and-cloud:
    jobs:
      - build-in-vpc
      - build-on-cloud
```
Here:
- `build-in-vpc` runs only on your self-hosted runner pool in the VPC.
- `build-on-cloud` uses CircleCI's hosted executors.
- Both are orchestrated by the same workflow, with the same approvals and notifications.
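If you want the VPC job to start only after the cloud job succeeds, the same workflow can express that ordering with standard `requires` syntax (a sketch reusing the job names above):

```yaml
workflows:
  vpc-and-cloud:
    jobs:
      - build-on-cloud
      - build-in-vpc:
          requires:
            - build-on-cloud   # VPC job runs only after the cloud job succeeds
```

This keeps fast cloud feedback (lint, unit tests) as a gate in front of anything that touches your private network.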
Example: Only specific workflows use self-hosted runners
```yaml
version: 2.1

jobs:
  vpc-deploy:
    machine: true
    resource_class: my-org/vpc-deploy   # self-hosted runner in your VPC
    steps:
      - checkout
      - run: ./scripts/deploy-internal.sh

  public-ci:
    docker:
      - image: cimg/node:22.1
    steps:
      - checkout
      - run: npm ci
      - run: npm test

workflows:
  ci-on-cloud:
    jobs:
      - public-ci

  internal-deploy-on-vpc:
    triggers:
      - schedule:
          cron: "0 2 * * *"
          filters:
            branches:
              only: main
    jobs:
      - vpc-deploy
```
- `ci-on-cloud` runs your standard tests entirely on CircleCI cloud.
- `internal-deploy-on-vpc` is a separate workflow that runs nightly deploys on your VPC runners.
This pattern is ideal when you want all “trusted production paths” or internal-only deploys to live inside your network.
3. Balance speed, control, and capacity
Once you have routing in place, refine how work flows across your infrastructure:
- **Use multiple resource classes:** Split by environment or workload type, for example `org/vpc-deploy-prod`, `org/vpc-deploy-staging`, and `org/vpc-build-heavy`.
- **Pin the right jobs:** Route only jobs that actually need:
  - Private VPC access (databases, internal APIs)
  - Compliance-restricted data paths
  - Specialized hardware (GPU, Arm)
- **Keep the rest on CircleCI cloud:** Linting, static analysis, unit tests, and most build steps can stay on CircleCI-hosted executors, preserving high concurrency and minimal maintenance.
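In config, that split is just a different `resource_class` value per job. A sketch (resource class names and script paths here are illustrative):

```yaml
jobs:
  deploy-prod:
    machine: true
    resource_class: org/vpc-deploy-prod      # production deploy pool
    steps:
      - checkout
      - run: ./scripts/deploy.sh

  deploy-staging:
    machine: true
    resource_class: org/vpc-deploy-staging   # staging deploy pool
    steps:
      - checkout
      - run: ./scripts/deploy.sh --staging

  build-heavy:
    machine: true
    resource_class: org/vpc-build-heavy      # large build hosts
    steps:
      - checkout
      - run: ./scripts/build.sh
```

Separate pools let you size, harden, and monitor each class of host independently.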
You get the best of both: trusted delivery for sensitive paths and no-ops infrastructure for everything else.
Features & Benefits Breakdown
| Core Feature | What It Does | Primary Benefit |
|---|---|---|
| VPC-local execution | Runs selected jobs on self-hosted runners inside your VPC or data center | Keeps sensitive traffic and credentials on your network while still using CircleCI workflows |
| Resource class routing | Routes jobs via resource_class in config to specific runner pools | Precisely targets only the jobs/workflows that need private access or custom hardware |
| Hybrid cloud + self-hosted | Mixes self-hosted runners with CircleCI cloud executors in the same pipeline | Maximizes speed and concurrency while limiting operational overhead to the jobs that truly need it |
Ideal Use Cases
- **Best for private-service deployments:** You can run deploy and smoke-test jobs inside your VPC, connecting directly to internal load balancers, databases, or service meshes without exposing them to the public internet.
- **Best for compliance-bound workloads:** You keep build artifacts, logs, and job execution on infrastructure you control, while CircleCI still orchestrates approvals, policy checks, and rollback pipelines.
Limitations & Considerations
- **You manage the infrastructure:** Self-hosted runners are your machines. You own capacity planning, OS patching, disk space, and monitoring. If a host goes down, jobs in that `resource_class` will queue or fail until capacity is restored.
- **Network egress still matters:** Even inside your VPC, jobs may require egress for caches, workspaces, or external dependencies. Plan NAT capacity and bandwidth accordingly, and be aware that network egress from self-hosted runners may factor into usage and cost.
Pricing & Plans
CircleCI uses usage-based pricing, so your cost is tied to how much compute you consume, not whether it’s on cloud executors or self-hosted runners. You can maximize concurrency on your self-hosted runners to shorten workflow wall-clock time without increasing total usage minutes.
- Cloud-First Plan: Best for teams that want CircleCI to handle almost all compute and only carve out a small set of self-hosted runners for VPC-only tasks.
- Hybrid Platform Plan: Best for platform engineering teams standardizing CI/CD across many repos with a mix of cloud executors and large shared self-hosted runner pools.
(For current plan details, limits, and runner concurrency options, check the CircleCI pricing and docs pages.)
Frequently Asked Questions
How do we ensure only specific jobs use self-hosted runners and not the entire pipeline?
Short Answer: Use resource_class on the jobs that should run on self-hosted runners; leave other jobs on standard executors.
Details:
In your `.circleci/config.yml`, each job explicitly defines its executor. For self-hosted runners, that means specifying a `machine` or `docker` executor with a self-hosted `resource_class`:

```yaml
jobs:
  vpc-job:
    machine: true
    resource_class: my-org/vpc-runner   # self-hosted runner
```
Any job that does not reference this resource class continues to run on CircleCI cloud executors. Workflows can mix and match jobs freely, so you can keep 90% of your pipeline on cloud and route only the 10% that needs VPC or compliance controls.
Can we use approvals and policy checks with self-hosted runners?
Short Answer: Yes. Approvals and policy checks live in the CircleCI control plane, so they work across both cloud and self-hosted jobs.
Details:
CircleCI’s workflows, approvals, and policy decisions run before jobs execute, regardless of where the job ultimately runs. You can:
- Require manual approval before a `deploy-prod` job on a self-hosted runner
- Enforce policy checks (via CircleCI's Platform Toolkit) that run before a VPC runner even pulls the job
- Combine self-hosted deploy jobs with rollback pipelines that can revert quickly if something goes wrong
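As a sketch, a manual approval gate in front of a self-hosted deploy job uses standard `type: approval` workflow syntax (job names here are illustrative):

```yaml
workflows:
  deploy:
    jobs:
      - build
      - hold-for-prod:
          type: approval        # pauses the workflow until someone approves in the UI
          requires:
            - build
      - deploy-prod:            # targets your self-hosted runner pool via its resource_class
          requires:
            - hold-for-prod
```

The approval itself happens in the CircleCI control plane; the VPC runner only pulls `deploy-prod` after the hold is approved.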
The result is a governed “golden path” that spans both CircleCI cloud and your VPC infrastructure.
Summary
Setting up CircleCI self-hosted runners in your VPC and routing only specific jobs or workflows to them is a configuration exercise, not a migration project. You:
- Install and register runners inside your VPC under a `resource_class`.
- Target only the sensitive or VPC-dependent jobs in `.circleci/config.yml` at that resource class.
- Keep everything else on CircleCI cloud for scale, speed, and lower operational overhead.
You get trusted, VPC-local execution where you need control, and fast, hands-off delivery everywhere else.