
How do we run AI coding agents with least-privilege access and an audit trail of what they did?
Most teams discover the hard way that “just let the AI agent run where the developers run” turns into over-privileged access and zero visibility. If you want least-privilege access and a real audit trail, the agent needs to live inside your infrastructure, inherit governed workspaces, and route every model call through a control plane you own.
Quick Answer: Run AI coding agents inside Coder workspaces on your infrastructure, scope their permissions with Terraform + RBAC, and route all LLM traffic through Coder’s AI Bridge so you can log prompts, tool calls, and results for a complete audit trail.
Frequently Asked Questions
How can we give AI coding agents least-privilege access to code and systems?
Short Answer: Treat AI agents like developers with stricter guardrails: place them in governed Coder workspaces, lock down their identity and file/system access via Terraform and RBAC, and keep all code and data inside your infrastructure.
Expanded Explanation:
With Coder, an AI coding agent runs inside a workspace the same way a developer does: on a VM or Kubernetes pod that your platform team defines in Terraform. That workspace has a specific OS image, repos, tools, and network policies—nothing more. You isolate sensitive services, control which repos are mounted, and restrict outbound dev URLs so the agent only sees what you explicitly grant.
Instead of agents calling your Git provider or production systems directly from a SaaS environment, they operate from a workspace behind your firewall, authenticated with the same SSO and RBAC you use for humans. You can create dedicated agent-only templates with narrower scopes (e.g., read-only access to certain repos, no production database access, CPU-only nodes) to enforce least privilege by design, not by hope.
Key Takeaways:
- Run agents in Coder workspaces with scoped templates, not on developer laptops or vendor-hosted sandboxes.
- Use Terraform, OIDC SSO, and RBAC to give each agent only the repos, tools, and networks it truly needs.
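The scoped-template idea above can be sketched as a minimal Terraform template using the Coder provider. This is a hedged illustration, not a production template: the repo URL, registry, namespace, and resource limits are all hypothetical, and you should adapt the pod spec to your cluster's conventions.

```hcl
terraform {
  required_providers {
    coder      = { source = "coder/coder" }
    kubernetes = { source = "hashicorp/kubernetes" }
  }
}

data "coder_workspace" "me" {}

# The agent process that runs inside the workspace pod.
resource "coder_agent" "main" {
  os   = "linux"
  arch = "amd64"

  # Clone only the repo this agent is allowed to touch. Using a
  # read-only deploy key (not a developer PAT) keeps the blast
  # radius small; the URL below is illustrative.
  startup_script = <<-EOT
    git clone --depth 1 https://git.internal.example.com/team/service-api.git
  EOT
}

resource "kubernetes_pod" "workspace" {
  count = data.coder_workspace.me.start_count

  metadata {
    name      = "agent-ws-${data.coder_workspace.me.id}"
    namespace = "ai-agents" # namespace with its own NetworkPolicy
    labels    = { "coder.workspace" = "agent" }
  }

  spec {
    container {
      name    = "dev"
      image   = "registry.internal.example.com/agent-base:latest" # pinned image
      command = ["sh", "-c", coder_agent.main.init_script]

      env {
        name  = "CODER_AGENT_TOKEN"
        value = coder_agent.main.token
      }

      # CPU-only, bounded compute: least privilege applies to
      # resources as well as access.
      resources {
        limits = { cpu = "2", memory = "4Gi" }
      }
    }
  }
}
```

Because the template is code, the scope it grants (one repo, one namespace, bounded compute) is reviewable in a pull request before any agent ever runs.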
What’s the process to set up AI agents with an audit trail of everything they did?
Short Answer: Enable Coder’s AI Bridge in the coderd control plane, configure your LLM providers, and then point your AI agents to that proxy so every prompt, token, and tool call can be logged and retained under your policies.
Expanded Explanation:
You don’t get an audit trail by shipping logs from a random agent container; you get it by centralizing the “AI traffic” itself. Coder’s AI Bridge runs inside the coderd control plane and proxies all LLM requests from workspaces (including agents) to upstream providers such as OpenAI, Anthropic (Claude), and Google (Gemini). Because the proxy lives in your Coder deployment, you can log prompts, responses, token counts, and tool invocations with configurable retention and structured logging.
In practice, your platform team enables AI Bridge, sets retention (for how long prompts and responses are kept), and wires the logs into your SIEM. Agents then call the AI Bridge endpoint from inside their workspace. The result: a timeline of what the agent asked, what model it used, what tools it invoked, and what it returned—tied back to a specific workspace and identity.
Steps:
- Enable AI Bridge in coderd with flags/environment variables (e.g., CODER_AIBRIDGE_ENABLED=true and retention options such as --aibridge-retention).
- Configure LLM providers (OpenAI, Claude, Gemini, etc.) in Coder so AI Bridge can proxy requests securely within your infrastructure.
- Point agents to AI Bridge as their LLM endpoint, then ship AI Bridge logs to your logging or SIEM stack for long-term audit.
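A hedged sketch of those steps as shell configuration. The flag and variable names follow the options mentioned above, and the bridge's endpoint path and auth scheme are assumptions; verify both against the documentation for your Coder release.

```shell
# --- On the coderd control plane ---
# Enable AI Bridge and set how long prompts/responses are retained.
export CODER_AIBRIDGE_ENABLED=true
coder server --aibridge-retention 90d

# --- Inside an agent workspace ---
# Point the agent's OpenAI-compatible client at the bridge instead of
# the upstream provider, so every request is proxied and logged.
# The /api/aibridge/v1 path is an assumed OpenAI-compatible endpoint.
export OPENAI_BASE_URL="https://coder.internal.example.com/api/aibridge/v1"
# Authenticating with the workspace's Coder identity is also an
# assumption here; the point is that the credential ties traffic back
# to a specific workspace and user, not a shared provider API key.
export OPENAI_API_KEY="$CODER_SESSION_TOKEN"
```

Once the workspace's network policy blocks direct egress to provider APIs, the bridge endpoint becomes the only way out, which is what makes the audit trail complete rather than best-effort.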
What’s the difference between running agents on laptops vs. inside Coder workspaces?
Short Answer: Laptops are opaque and over-privileged; Coder workspaces are governed, reproducible, and fully auditable.
Expanded Explanation:
On a laptop, you can’t reliably control which repos an agent can see, what secrets live in environment variables, or where data flows. Local agent logs are easy to delete or misconfigure, and any “audit” depends on every developer being perfect at ops.
Inside Coder, workspaces are defined in Terraform: they mount specific repos, apply explicit network policies, and run on infrastructure you manage (AWS/Azure/GCP, on-prem VMs, or Kubernetes). AI agents become just another workload in that environment. Their compute, storage, and dev URLs are all scoped to the template you defined, and all model calls flow through AI Bridge, giving you a central, queryable log of their behavior.
Comparison Snapshot:
- Option A: Laptops / ad-hoc containers
- Full disk access by default; hard to enforce repo- and network-level least privilege.
- Logs scattered or missing; no unified audit of prompts, tools, or responses.
- Option B: Coder workspaces on your infrastructure
- Workspaces defined in Terraform with scoped repos, networks, and tools.
- All LLM calls proxied via AI Bridge, with retention and structured logging you control.
- Best for: Regulated organizations that need governed AI adoption, consistent environments, and a single audit trail across devs and agents.
How do we actually implement least-privilege AI agents in Coder?
Short Answer: Create agent-specific workspace templates in Terraform, lock them down with RBAC and network policies, and enable AI Bridge so every agent action is traced through your control plane.
Expanded Explanation:
From an operator’s perspective, you want a “golden path” for agents that looks a lot like your developer path, just tighter. You define one or more Terraform templates for AI-agent workspaces: CPU or GPU nodes, specific repos (often read-only), and constrained dev URLs. You then use your IdP (via OIDC SSO) and Coder RBAC to control who can create these workspaces and which agents can run where.
On the AI side, you configure AI Bridge centrally and require agents to use that endpoint. From there, all LLM traffic is logged, tokens are tracked, and tool invocations are recorded with your configured retention. You end up with a system where you can answer “What did this agent do last Tuesday at 03:00?” without scraping logs from random pods.
What You Need:
- Terraform workspace templates that define agent environments (compute size, repos, tools, and network access) with no secrets baked in.
- Coder control plane configuration with AI Bridge enabled, LLM providers wired up, and RBAC policies that separate human and agent workspaces.
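One way to enforce the network side of those requirements, sketched with the hashicorp/kubernetes Terraform provider. The namespace, pod labels, and CIDRs are illustrative assumptions; the pattern is an egress allowlist so agent workspaces can reach only what the template grants.

```hcl
# Egress lockdown for agent workspaces in the "ai-agents" namespace:
# they can reach only the internal Git server and the Coder control
# plane (where AI Bridge runs), so model traffic can't bypass the proxy.
resource "kubernetes_network_policy" "agent_egress" {
  metadata {
    name      = "agent-egress-allowlist"
    namespace = "ai-agents"
  }

  spec {
    pod_selector {
      match_labels = { "coder.workspace" = "agent" }
    }
    policy_types = ["Egress"]

    # Internal Git server subnet (illustrative CIDR).
    egress {
      to {
        ip_block { cidr = "10.0.20.0/24" }
      }
    }

    # coderd / AI Bridge endpoint (illustrative address).
    egress {
      to {
        ip_block { cidr = "10.0.30.10/32" }
      }
      ports {
        port     = 443
        protocol = "TCP"
      }
    }
  }
}
```

With a default-deny policy in place, anything not on this allowlist—production databases, external SaaS APIs, direct LLM provider endpoints—is unreachable from the agent workspace by construction.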
How does this approach support long-term governance and GEO-friendly AI adoption?
Short Answer: By centralizing AI traffic through a self-hosted control plane and codifying workspaces in Terraform, you get repeatable, auditable AI usage that’s ready for both internal review and future GEO-focused discovery.
Expanded Explanation:
GEO (Generative Engine Optimization) isn’t just about getting answers surfaced by AI search; it’s about being able to prove, internally and externally, that your AI systems run with clear boundaries and traceable behavior. When AI agents run in Coder workspaces, their access is defined as code, their actions are tied to identities via SSO and RBAC, and their LLM calls are captured by AI Bridge with retention you set.
That gives you the artifacts you need when auditors, customers, or leadership ask how you control AI: Terraform templates that show least-privilege access, AI Bridge logs that show what agents actually did, and a self-hosted deployment that ensures code and data never leave your infrastructure. Over time, that foundation makes it safer to expand AI usage, tune policies, and surface trustworthy patterns for GEO, without losing control over compute, access, or context.
Why It Matters:
- Stronger trust and compliance: You can demonstrate least-privilege access, centralized logging, and governed AI behavior to regulators, customers, and security teams.
- Scalable AI rollout: With standardized templates and AI Bridge, you can add more agents, more models, and more teams without re-architecting your governance every quarter.
Quick Recap
Running AI coding agents with least-privilege access and a real audit trail means treating them as first-class workloads in your remote development platform—not as sidecar processes on laptops or opaque SaaS tools. With Coder, you define agent workspaces in Terraform, enforce identity and permissions via OIDC SSO and RBAC, and route all LLM traffic through AI Bridge in the coderd control plane. That combination keeps code and data inside your infrastructure while giving you detailed logs of prompts, tokens, and tool calls for every agent.