
Operant pricing: how does usage-based pricing work and what drives cost (traffic volume, clusters, protected surfaces)?
Most teams discover Operant at the exact moment “traditional” pricing models start to hurt them: AI agents and MCP workflows are exploding, east–west traffic is surging, and every new control surface turns into another SKU, seat, or module. Operant’s usage-based pricing is designed to do the opposite. You get full platform coverage, predictable cost tied to real runtime usage, and the freedom to bring your whole team without watching a user counter.
This guide breaks down how Operant pricing works, what “usage-based” actually means in a runtime AI application defense platform, and what really drives cost—traffic volume, clusters, and protected surfaces.
How Operant usage-based pricing works
Operant uses usage-based pricing anchored on your actual runtime footprint, not per-seat or feature gating.
At a high level:
- You pay for protection, not for log-in screens.
- You get complete coverage of the platform—Agent Protector, MCP Gateway, AI Gatekeeper™, API & Cloud Protector—without SKU sprawl.
- Costs scale with the amount of traffic and surfaces we actively defend across your environment.
From the public positioning:
- Usage-based pricing: “Predictable pricing based on your stack and usage. Start as you like, grow when you’re ready.”
- Complete coverage: “Get complete coverage and unlock the entire platform. Extra-legroom included.”
- Bring your whole team: “We don’t price by number of users. Bring along your friends from platform, dev or ops…”
In practice, that means:
- You deploy Operant via a single-step Helm install into your Kubernetes environment(s).
- Operant begins 3D Runtime Defense (Discovery, Detection, Defense) for your live AI and cloud workloads.
- Your cost is driven by how much live runtime surface Operant protects—primarily measured through:
  - Traffic volume through protected surfaces
  - Number and scale of clusters
  - Scope of AI + application surfaces under defense
No premium SKU to turn on MCP security. No separate SKU for “agent security.” No “enterprise tier” just to redline data exfiltration. Once you’re in, you’re in.
What drives cost in Operant’s pricing model?
Under the hood, usage-based pricing typically maps to three dimensions that correlate with the practical cost of delivering runtime defense at scale:
- Traffic volume through protected surfaces
- Number and size of Kubernetes clusters
- Number and criticality of protected surfaces (APIs, AI agents, MCP servers/tools, etc.)
Let’s unpack each.
1. Traffic volume: how much runtime activity you protect
Operant sits inline with your live workloads and agentic workflows. The more runtime activity you defend, the more compute and analytics we need to apply. That’s the primary driver of your bill.
Think in terms of:
- Requests per second (RPS) / transactions per day
  - North–south: external API calls, public endpoints, AI inference requests
  - East–west: service-to-service calls, internal APIs, AI tool invocations, MCP tool usage
- Model and agent calls
  - LLM prompts + completions
  - Tool calls in agent workflows (e.g., “fetch_customer_data”, “update_invoice”)
  - MCP tool calls across clients and servers
- Data volume flowing through protected paths
  - Payload sizes for API and AI traffic
  - Volume of content that may require inline auto-redaction or inspection
Why this matters: Operant doesn’t just observe. It performs inline blocking, rate-limiting, segmentation, and auto-redaction of sensitive data in real time. Higher throughput means more decisions, more enforcement, and more compute on our side.
If you’re forecasting cost, ask:
- How many RPS do our critical APIs handle today?
- How many LLM calls / day are we making across apps, workflows, and tools?
- How fast is that growing with new AI or agent features?
That’s the core lever: more protected traffic → higher usage → higher cost. But it’s also directly tied to value: you’re only paying for what’s actually under runtime defense.
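To make those forecasting questions concrete, here is a minimal back-of-the-envelope sketch. Every number below is an illustrative assumption about your own environment, not an Operant metric or rate:

```python
# Rough protected-traffic forecast for a usage-based pricing conversation.
# All inputs are illustrative assumptions about your own environment,
# not Operant metrics or rates.

SECONDS_PER_DAY = 86_400

def daily_protected_requests(api_rps: float, llm_calls_per_day: float,
                             tool_calls_per_day: float) -> float:
    """Total runtime events per day that would pass through protection."""
    return api_rps * SECONDS_PER_DAY + llm_calls_per_day + tool_calls_per_day

def project_growth(daily_volume: float, monthly_growth_rate: float,
                   months: int) -> float:
    """Compound the daily volume forward to see where usage is heading."""
    return daily_volume * (1 + monthly_growth_rate) ** months

today = daily_protected_requests(api_rps=250, llm_calls_per_day=1_200_000,
                                 tool_calls_per_day=300_000)
in_a_year = project_growth(today, monthly_growth_rate=0.10, months=12)
print(f"Protected events/day today: {today:,.0f}")
print(f"Projected in 12 months:     {in_a_year:,.0f}")
```

Even a crude projection like this is useful in a pricing conversation, because AI features tend to grow traffic faster than headcount grows.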
2. Kubernetes clusters: where you run Operant
Operant is Kubernetes-native and deploys as part of your runtime stack. Pricing naturally reflects how many Kubernetes environments you’re protecting.
Common dimensions:
- Number of clusters
  - EKS, AKS, GKE, OpenShift, on-prem Kubernetes
  - Separate clusters for dev, stage, and prod
- Cluster size & tenancy
  - Single-tenant vs. multi-tenant clusters
  - Number of namespaces / workloads per cluster
Why clusters matter: each cluster is its own “cloud within the cloud”—with its own network fabric, identities, and east–west attack surface. Operant’s runtime enforcement must discover, map, and defend:
- Internal APIs and services
- Ghost/zombie APIs and unintended exposures
- AI agents embedded in services or sidecars
- MCP servers, clients, and tools running inside the cluster
Operationally, more clusters mean:
- More surfaces to discover (live API blueprint, MCP Catalog, agent inventory)
- More policy contexts to enforce (trust zones, allow/deny lists, NHI controls)
- More runtime detections and defenses to maintain across environments
If you’re planning deployment, consider:
- How many clusters run production and critical staging workloads?
- Which clusters host AI-heavy or agentic workflows?
- Where do you need inline blocking, not just observability?
You don’t pay per pod or per microservice, but your cluster footprint does influence pricing tiers.
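One way to answer the cluster questions above is a simple inventory. The cluster names and flags here are hypothetical placeholders; substitute your own topology:

```python
# Hypothetical cluster inventory to scope a deployment conversation.
# Cluster names and flags are illustrative, not real infrastructure.

from dataclasses import dataclass

@dataclass
class Cluster:
    name: str
    environment: str       # "prod", "stage", or "dev"
    ai_heavy: bool         # hosts agents, MCP servers, or model endpoints
    inline_blocking: bool  # needs enforcement, not just observability

clusters = [
    Cluster("eks-prod-us-east", "prod", ai_heavy=True, inline_blocking=True),
    Cluster("gke-prod-eu", "prod", ai_heavy=False, inline_blocking=True),
    Cluster("aks-stage", "stage", ai_heavy=True, inline_blocking=False),
    Cluster("eks-dev", "dev", ai_heavy=False, inline_blocking=False),
]

prod = [c for c in clusters if c.environment == "prod"]
ai = [c for c in clusters if c.ai_heavy]
print(f"{len(prod)} production clusters, {len(ai)} AI-heavy, "
      f"{sum(c.inline_blocking for c in clusters)} needing inline blocking")
```

A table like this also clarifies where to start: production clusters that are both AI-heavy and need inline blocking are the natural first deployment targets.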
3. Protected surfaces: which assets are inside Operant’s defense envelope
The third major factor is what you’re defending. Operant is a Runtime AI Application Defense Platform that covers:
- AI agents across cloud, SaaS, dev tools
- MCP servers, clients, and tools
- AI Gateways and model endpoints
- APIs (north–south and east–west)
- Kubernetes workloads and cluster internals
The more surfaces you bring under Operant’s 3D Runtime Defense, the more value and coverage you get—and the more usage you’ll generate.
Typical protected surfaces include:
AI & agentic surfaces
- AI agents in production applications
  - Customer-facing chatbots
  - Internal copilots for support, sales, or engineering
  - Orchestration frameworks running agentic workflows
- Agent Protector coverage
  - Detects and blocks prompt injection, jailbreaks, and tool poisoning
  - Contains 0-click attacks on agent workflows
  - Enforces least-privilege tool access via trust zones and allowlists
- MCP Gateway coverage
  - Secures MCP servers, clients, and tools
  - Maintains an MCP Catalog/Registry of who can call what
  - Blocks AI supply chain attacks (malicious tools, compromised servers)
- AI Gatekeeper™
  - Inline data exfiltration and model theft protection
  - Enforces NHI access controls and auto-redaction of sensitive data before it leaves your perimeter
API & cloud-native surfaces
- API & Cloud Protector
  - Beyond-the-WAF API protection for internal and external APIs
  - Discovery of ghost/zombie APIs and unmanaged endpoints
  - Runtime enforcement aligned to the OWASP Top 10 for APIs, LLMs, and Kubernetes
- Kubernetes-native surfaces
  - Internal service traffic and network segmentation
  - Cluster security posture at runtime, not just config scans
  - Adaptive internal firewalls between namespaces, services, and agents
The pricing model doesn’t nickel-and-dime you per feature. You’re getting complete platform coverage. But your protected surface area—how many agents, APIs, MCP tools, and clusters you bring under defense—determines the volume of traffic we inspect and enforce against.
If you’re planning for cost:
- Start with critical surfaces (production AI agents, MCP tools, key internal APIs).
- Expand coverage to more agents and services as you prove value.
- Expect usage (and cost) to rise as you add more high-throughput surfaces into the envelope.
What Operant does not charge for
To keep things predictable and aligned with how teams actually work, Operant intentionally avoids several pricing levers that create friction in security programs:
- No per-user pricing
  - You can bring security, platform, SRE, dev, and ops into the same runtime view and policy layer without watching a seat counter.
  - This matters in AI: agent and MCP risk cuts across security, platform, app, and data teams. You need shared visibility and control.
- No feature-gating of core defenses
  - You don’t pay extra just to turn on MCP protection or agent security.
  - Inline auto-redaction, blocking, rate limiting, trust zones, and allow/deny lists are part of the platform, not upsells.
- No instrumentation project tax
  - Operant deploys with a single-step Helm install (zero instrumentation, zero integrations) and works on live traffic in under 5 minutes.
  - You’re not paying for months of “integration consulting” before you see any security value.
The net effect: cost is driven by how much runtime surface you protect, not by how many people log in or how many modules marketing can carve out.
How to think about Operant pricing for your environment
If you’re trying to ballpark where you’ll land on Operant’s usage-based curve, map your environment along three axes:
- Runtime intensity (traffic volume)
  - Current and projected RPS for critical APIs
  - Current and projected LLM + agent tool calls
  - Volume of internal service-to-service calls you want under defense
- Runtime footprint (clusters)
  - Number of Kubernetes clusters (EKS, AKS, GKE, OpenShift, on-prem)
  - Which clusters host production and AI-heavy workloads
  - Whether you want coverage for dev/stage for early detection
- Security scope (protected surfaces)
  - How many AI agents are in production or going live this year
  - Whether you’re actively using MCP (servers, clients, tools)
  - Number of high-value APIs (internal + external) that need beyond-WAF protection
  - Compliance drivers (PCI DSS v4, NIST 800 series, EU AI Act) pushing you to runtime controls and auditable enforcement
Bring that picture into a pricing conversation and you’ll get a precise, predictable number instead of guesswork.
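The three axes can be captured in a one-page sizing worksheet. This sketch only collects and organizes your own inputs; it is not an Operant pricing formula, and every value shown is a made-up example:

```python
# A simple sizing worksheet across the three axes described above.
# The function only organizes your own estimates; it is NOT an
# Operant pricing formula, and the sample values are made up.

SECONDS_PER_DAY = 86_400

def sizing_summary(api_rps: float, llm_calls_per_day: int,
                   clusters: int, prod_clusters: int,
                   agents: int, mcp_tools: int, high_value_apis: int) -> dict:
    """Collect the inputs a pricing conversation typically starts from."""
    return {
        "runtime_intensity": {
            "api_requests_per_day": int(api_rps * SECONDS_PER_DAY),
            "llm_calls_per_day": llm_calls_per_day,
        },
        "runtime_footprint": {
            "clusters_total": clusters,
            "clusters_production": prod_clusters,
        },
        "security_scope": {
            "ai_agents": agents,
            "mcp_tools": mcp_tools,
            "high_value_apis": high_value_apis,
        },
    }

summary = sizing_summary(api_rps=400, llm_calls_per_day=2_000_000,
                         clusters=6, prod_clusters=3,
                         agents=12, mcp_tools=30, high_value_apis=45)
for axis, values in summary.items():
    print(axis, values)
```

Walking into the conversation with these numbers filled in is what turns a quote from guesswork into a concrete plan.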
Why Operant’s pricing model matters for AI and agent security
Most “AI security” and API protection tools follow one of two patterns:
- Per-user/seat-based governance tools
  - Great at dashboards. Weak at runtime enforcement.
  - Cost grows with your org, even when usage doesn’t.
- Per-feature legacy security suites
  - Agent security is a new SKU. MCP is another SKU. East–west API coverage is a third.
  - You get stuck deciding which risks you can’t afford to protect.
Operant’s runtime-native approach demands a different pricing posture:
- AI agents, MCP toolchains, and internal APIs live in the same runtime fabric.
- The real attack surface is the cloud within the cloud—the mesh of APIs, services, and identities that perimeter tools don’t see.
- You need a single, inline enforcement layer that can discover, detect, and defend across that fabric without treating each surface as a separate product.
Usage-based pricing tied to traffic volume, clusters, and protected surfaces aligns directly with that reality. If you’re running more critical workloads, with more agents and APIs, and pushing more traffic—you’re driving more security value and you pay proportionally for it.
How to get an exact Operant pricing quote
Because every environment’s runtime mix is different—AI intensity, cluster topology, MCP adoption—Operant’s final pricing is always tailored.
The quickest way to get a precise number is to walk through:
- Your Kubernetes footprint (clusters + environments)
- Your critical AI and agentic workflows (agents, MCP, gateways)
- Your API footprint (internal + external, high-value services)
- Any compliance or regulatory constraints (PCI, NIST, EU AI Act) pushing specific enforcement requirements
From there, the Operant team maps these to a predictable usage-based plan with:
- Full platform access (Agent Protector, MCP Gateway, AI Gatekeeper™, API & Cloud Protector)
- Clear assumptions around traffic and cluster counts
- Headroom for growth so you don’t renegotiate every quarter
Final takeaway
Operant’s pricing is engineered for the agentic AI era: usage-based, runtime-native, and aligned with real risk surfaces. You don’t pay for users or features. You pay for the volume of traffic and the breadth of runtime surfaces—AI agents, MCP toolchains, APIs, and clusters—that Operant actively defends.
If you’re ready to see how that maps to your environment and budget, the next step is simple: