What are the exact Helm install steps to deploy Operant on EKS, and what permissions does it need?

Most teams ask this question at the exact moment they’re ready to move beyond slideware: “What are the precise Helm commands I need to get Operant onto EKS, and what will it actually do in my cluster?” This guide walks through the end‑to‑end Helm install path on Amazon EKS, what permissions Operant needs (and why), and how to validate that 3D Runtime Defense is live on your workload traffic in minutes—not weeks of “instrumentation projects.”

Operant is built to respect a simple constraint: security has to be runtime‑native and low‑friction. That’s why the core deployment story is:

Single step helm install. Zero instrumentation. Zero integrations. Works in <5 minutes.

Below, I’ll break that “single step” into explicit, production‑grade steps for EKS and map each permission to concrete runtime controls.


Prerequisites for deploying Operant on EKS

Before you run helm install, make sure the basics are in place.

1. EKS cluster and access

You’ll need:

  • A running Amazon EKS cluster (v1.23+ recommended).
  • kubectl configured against the cluster:
    aws eks update-kubeconfig \
      --region <AWS_REGION> \
      --name <EKS_CLUSTER_NAME>
    
  • Cluster access with permissions to:
    • create namespaces
    • create/update cluster‑scoped resources (ClusterRoles, ClusterRoleBindings, CRDs)
    • create ServiceAccounts and RoleBindings
    • install mutating/validating webhooks and admission controllers

In AWS IAM terms, this usually means your IAM principal is mapped to the system:masters group (or an equivalently privileged RBAC group) via the aws-auth ConfigMap or, on newer clusters, EKS access entries.

2. Tooling: kubectl and Helm

Install:

  • kubectl (matching your cluster version)
  • helm v3.x

Verify:

kubectl version
helm version
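A quick preflight sketch (plain POSIX shell, nothing chart-specific) confirms all three CLIs are on PATH before you touch the cluster:

```shell
# Preflight sketch: check that the CLIs used in this guide are installed.
# Prints one status line per tool either way, so nothing aborts mid-check.
for tool in kubectl helm aws; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "MISSING: $tool (install before proceeding)"
  fi
done
```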

3. Namespace and network considerations

By default, most customers deploy Operant into its own namespace, e.g., operant-system. Operant is Kubernetes‑native and supports all major platforms, including AWS EKS, Azure AKS, GKE, Rancher RKE2, OpenShift, and standalone Kubernetes, but this guide assumes:

  • You’re on EKS with standard CNI (or a compatible CNI).
  • Outbound egress to Operant’s control plane (if used) is allowed from the Operant namespace.
  • You’re not blocking admission webhooks or cluster‑wide RBAC changes via Pod Security admission, legacy PodSecurityPolicies (removed in Kubernetes 1.25), or restrictive OPA/Gatekeeper constraints.

Step‑by‑step Helm install on EKS

This is the “exact Helm install” flow most teams follow to get Operant live in <5 minutes.

Step 1: Create the Operant namespace

kubectl create namespace operant-system

If you use a different namespace, remember to apply it consistently in the later commands.

Step 2: Add the Operant Helm repo

You’ll receive the exact repository URL and chart name from Operant (or your account rep). It typically looks like:

helm repo add operant https://charts.operant.ai
helm repo update

Check that the repo is visible:

helm search repo operant

You should see the primary Operant chart (for example, operant/operant-runtime or similar, depending on your subscription).

Step 3: Create a values file for your EKS cluster

While you can deploy with defaults, production teams usually configure a minimal values.yaml that sets:

  • Cluster identity
  • Environment tags
  • Network/proxy details (if any)
  • Optional: specific protections to enable by default

Example operant-values.yaml:

cluster:
  name: "<EKS_CLUSTER_NAME>"
  environment: "prod"          # or "staging", "dev"
  provider: "eks"

operant:
  apiKey: "<YOUR_OPERANT_API_KEY>"   # if using Operant SaaS control plane
  region: "us"                       # or "eu", etc., depending on your tenancy

# Namespace where Operant is installed
global:
  namespace: "operant-system"

# Webhook settings (used for inline runtime policy enforcement)
webhook:
  enabled: true
  failurePolicy: "Ignore"            # or "Fail" in strict environments

# Network / proxy (if needed)
network:
  outboundProxy:
    enabled: false

# Resource requests (tune based on cluster size)
resources:
  limits:
    cpu: "1"
    memory: "2Gi"
  requests:
    cpu: "250m"
    memory: "512Mi"

Your Operant team will typically give you a baseline values file tuned to your cluster size and threat model.

Step 4: Install Operant with Helm

Now deploy Operant’s Runtime AI Application Defense Platform into the cluster:

helm install operant-runtime operant/operant-runtime \
  --namespace operant-system \
  --create-namespace \
  -f operant-values.yaml

If your environment does not allow Helm to create the namespace:

kubectl create namespace operant-system
helm install operant-runtime operant/operant-runtime \
  --namespace operant-system \
  -f operant-values.yaml

This single Helm command:

  • Deploys Operant’s controllers, runtime agents, and webhooks.
  • Sets up K8s‑native policy control planes.
  • Begins building a live blueprint of your APIs, agents, MCP connections, and identities from real runtime telemetry.

Step 5: Verify that Operant is running

Check pods:

kubectl get pods -n operant-system

You should see all core components in Running or Completed states. If any are CrashLoopBackOff or Error, fetch logs:

kubectl logs -n operant-system <pod-name>

Check webhooks:

kubectl get validatingwebhookconfigurations | grep operant
kubectl get mutatingwebhookconfigurations | grep operant

Check CRDs (exact names may vary):

kubectl get crds | grep operant

At this point, Operant is already:

  • Discovering APIs, agents, MCP connections, and internal traffic.
  • Mapping identities and permissions.
  • Ready to enforce runtime policies (block, rate-limit, auto‑redact) as you turn on protections.

Optional: Upgrading and uninstalling Operant via Helm

Upgrade

To roll out a new chart or updated values:

helm upgrade operant-runtime operant/operant-runtime \
  --namespace operant-system \
  -f operant-values.yaml

Uninstall

To remove Operant from the cluster:

helm uninstall operant-runtime -n operant-system
kubectl delete namespace operant-system

(You may choose to keep the namespace for logs or CRD clean‑up; align with your change‑management practices.)


What permissions does Operant need on EKS—and why?

This is the part that usually gets scrutinized in design reviews, and it should. Operant’s value comes from being runtime‑native and inline. That means it needs enough permissions to:

  • Discover and map the “cloud within the cloud” (APIs, agents, MCP connections, K8s identities).
  • Detect high‑risk patterns (prompt injection, lateral movement, ghost APIs, 0‑click agent exploits).
  • Actively defend: block flows, auto‑redact data, enforce trust zones, and constrain over‑privileged workloads.

Below I’ll walk through the key permission categories you’ll see in the Helm‑installed RBAC, with rationale.

1. Cluster‑level read: discovery and live blueprints

Scope: get, list, watch for cluster‑scoped objects such as:

  • namespaces
  • nodes
  • customresourcedefinitions
  • clusterroles, clusterrolebindings

Why it’s needed:

  • To build the live API and service blueprint across namespaces and nodes.
  • To understand cross‑namespace dependencies and potential east–west attack paths.
  • To map cluster‑wide RBAC and identity relationships (e.g., over‑privileged service accounts) that underpin K8s Identity and Entitlement Management.

What Operant does with it:

  • Populates runtime discovery catalogs (managed and unmanaged agents, MCP Catalog, live API blueprint).
  • Identifies insecure defaults—like cluster‑admin service accounts attached to workloads.
  • Ties runtime behavior back to concrete identities and permissions.
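To make the shape concrete, here is a sketch of what a read‑only discovery ClusterRole along these lines could look like. The role name and exact resource lists are illustrative, not the chart’s actual output; render the chart with helm template to review the real thing.

```yaml
# Hypothetical read-only discovery role; names and resource lists are
# illustrative only -- the installed chart defines the authoritative RBAC.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: operant-discovery-reader   # hypothetical name
rules:
  - apiGroups: [""]
    resources: ["namespaces", "nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apiextensions.k8s.io"]
    resources: ["customresourcedefinitions"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["clusterroles", "clusterrolebindings"]
    verbs: ["get", "list", "watch"]
```

Note that every verb is read‑only (get/list/watch): discovery never needs write access to the objects it maps.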

2. Namespace‑level read: workloads, services, and traffic paths

Scope: get, list, watch on namespaced resources such as:

  • pods, deployments, statefulsets, daemonsets, replicasets
  • services, endpoints, ingresses
  • configmaps, secrets (typically restricted to the Operant namespace or otherwise scoped down)
  • serviceaccounts, roles, rolebindings

Why it’s needed:

  • To see which workloads are talking to which APIs and services.
  • To map east–west traffic and agentic workflows across your microservices.
  • To understand how RBAC is wired at the namespace level and where privilege boundaries are broken.

What Operant does with it:

  • Builds the live service graph used for microsegmentation, trust zones, and API‑to‑API controls.
  • Detects ghost/zombie APIs and unmanaged agents.
  • Flags lateral movement paths, misconfigured identities, and shadow services.

3. Creating and managing Operant CRDs and controllers

Scope:

  • create, update, patch, delete on Operant’s own:
    • CRDs (e.g., policy definitions, trust zones, runtime defense objects)
    • Deployments/DaemonSets/Pods in the operant-system namespace
    • ConfigMaps and Secrets used by Operant components

Why it’s needed:

  • To install and manage the runtime defense control plane Operant brings into the cluster.
  • To persist 3D Runtime Defense objects (Discovery, Detection, Defense) as native K8s resources.
  • To allow configuration through GitOps and policy‑as‑code in your existing pipelines.

What Operant does with it:

  • Runs the controllers that enforce policies on live traffic.
  • Stores policies as CRDs so they are auditable, versionable, and GitOps‑friendly.
  • Keeps Operant’s own components healthy and resilient to failures or node rotations.

4. Admission webhooks: inline enforcement at runtime

Scope:

  • create, update, patch, delete on:
    • MutatingWebhookConfiguration
    • ValidatingWebhookConfiguration
  • The ability to intercept K8s API server calls for selected resources (e.g., pods, ingresses) based on your configuration.

Why it’s needed:

  • To move beyond “observe and alert” and actually block risky changes before they hit the cluster.
  • To enforce policies such as:
    • No workloads running with privileged escalation.
    • No agents or MCP components deployed without proper trust‑zone assignments.
    • No ingress exposing sensitive API paths without Operant protections.

What Operant does with it:

  • Implements adaptive internal firewalls beyond the WAF, applied at the K8s control plane.
  • Prevents known‑bad configurations from ever being admitted into the cluster.
  • Ensures that devs and agents can move fast without silently breaking your security posture.

You can choose how strict this is (e.g., failurePolicy: Ignore vs Fail) based on environment (dev vs prod).
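For reference, a webhook registration along these lines has roughly this shape. Every name, path, and rule below is illustrative, not Operant’s actual configuration; the failurePolicy field is the strictness knob discussed above.

```yaml
# Hypothetical sketch of a validating webhook registration; actual names,
# service endpoints, and rules come from the rendered chart.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: operant-policy-webhook       # hypothetical name
webhooks:
  - name: policy.operant.ai          # hypothetical webhook name
    failurePolicy: Ignore            # "Fail" in strict environments
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods"]
    clientConfig:
      service:
        name: operant-webhook        # hypothetical Service in operant-system
        namespace: operant-system
        path: /validate
    admissionReviewVersions: ["v1"]
    sideEffects: None
```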

5. Network and traffic controls: blocking, rate‑limiting, and segmentation

Depending on your architecture and CNI, Operant may leverage:

  • Sidecar or DaemonSet agents to observe and control traffic.
  • K8s objects such as:
    • networkpolicies
    • API Gateway or Ingress objects (via annotations or CRDs)
    • Service meshes (where present)

Scope:

  • get, list, watch on relevant network objects.
  • create, update, patch on Operant‑managed network policies or traffic rules.

Why it’s needed:

  • To implement API‑to‑API microsegmentation and rate limiting.
  • To enforce trust zones between services, agents, and tools (especially MCP‑backed agentic workflows).
  • To apply inline defenses to live API calls (beyond the edge WAF), including:
    • Blocking prompt injection flows.
    • Preventing data exfiltration from sensitive services.
    • Containing rogue or unmanaged agents.

What Operant does with it:

  • Dynamically adjusts network policies based on runtime context.
  • Implements allow/deny lists and NHI access controls at the API and service level.
  • Enforces least‑privilege communication between services and agents without requiring application code changes.
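As a concrete illustration of the enforcement shape (not Operant’s actual objects — the names, namespace, and labels here are made up), a trust‑zone boundary expressed as a standard NetworkPolicy looks like this:

```yaml
# Hypothetical trust-zone boundary as a vanilla NetworkPolicy: only pods
# labeled into the same zone may reach the payments service. Operant manages
# its own policy objects; this just shows the underlying enforcement shape.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payments-trust-zone          # hypothetical name
  namespace: payments                # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: payments-api              # hypothetical workload label
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              trust-zone: payments   # hypothetical zone label
```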

6. Logs, events, and telemetry: live detection signals

Scope:

  • get, list, watch on:
    • events
    • resource status fields
  • Optional integration with:
    • CloudWatch / logging sinks (config‑driven)
    • In‑cluster telemetry sources

Why it’s needed:

  • To correlate runtime anomalies with resource changes (e.g., suspicious restarts, failing probes).
  • To detect 0‑click agent chains and Shadow Escape‑style lateral movement patterns from live telemetry.
  • To prioritize real attacks over noisy misconfigurations.

What Operant does with it:

  • Surfaces detections mapped to modern taxonomies:
    • OWASP Top 10 for API, LLM, and K8s.
  • Feeds runtime detections into inline defenses:
    • Block or quarantine offending flows.
    • Auto‑redact sensitive data in‑flight.
    • Tighten trust zones on the fly.

7. Optional: Integration permissions (GitHub, CI/CD, etc.)

Operant starts working without any integrations, but when you want to shift left and extend policy‑as‑code, you may grant:

  • GitHub access: for policy‑as‑code and GitOps workflows that keep runtime and repo policies in sync.
  • CI/CD integration: to enforce DevSecOps guardrails in pipelines before changes hit production.

These are not required for the core Helm install on EKS. They’re additive: you choose when and where to connect.


How to review and constrain Operant’s permissions for your org

Security teams usually ask two final questions:

  1. Can we review the exact RBAC you’re installing?
  2. Can we scope it down or phase it in?

The answer to both is yes.

1. Review the RBAC from the chart

Before installing, you can render the YAML to inspect every ClusterRole and RoleBinding:

helm template operant-runtime operant/operant-runtime \
  --namespace operant-system \
  -f operant-values.yaml > operant-rendered.yaml

Then:

grep -n "ClusterRole" operant-rendered.yaml
grep -n "ClusterRoleBinding" operant-rendered.yaml

Your security team can validate:

  • Which verbs (get/list/watch/create/update/patch/delete) apply to which resources.
  • Which ServiceAccounts are bound to which roles.

2. Phase‑in enforcement

Many teams start with:

  • Discovery + Detection only in non‑prod:
    • Webhooks in Ignore mode.
    • No blocking, only visibility and detections.
  • Then move to Defense:
    • Turn on blocking for specific threats (e.g., data exfiltration, agent tooling abuse).
    • Gradually tighten trust zones and microsegmentation.
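Expressed against the example values file from Step 3 (whose field names are themselves examples, not a published schema), the first phase could be a small overlay file such as operant-values-staging.yaml:

```yaml
# Hypothetical "discovery + detection only" overlay for non-prod, reusing
# keys from the example values file above; your chart defines the real schema.
cluster:
  environment: "staging"

webhook:
  enabled: true
  failurePolicy: "Ignore"   # observe and alert, never block admissions
```

Layer it on at upgrade time with helm upgrade operant-runtime operant/operant-runtime -n operant-system -f operant-values.yaml -f operant-values-staging.yaml; later -f files override earlier ones, so the production base file stays untouched.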

This aligns with the goal: better protection, lower cost, more control—without breaking production on day one.


From Helm install to runtime impact—what to expect in minutes

Once Operant is installed on EKS with Helm, you should see:

  • A live API and service blueprint, including ghost/zombie APIs.
  • A catalog of agents, MCP servers/tools/clients, and their runtime behavior.
  • Immediate detections mapped to OWASP Top 10 for API/LLM/K8s and agentic risks.
  • The ability to:
    • Block prompt injection and jailbreak attempts inline.
    • Auto‑redact sensitive data as it flows through your applications.
    • Rate‑limit and segment API‑to‑API traffic beyond the WAF.
    • Enforce K8s Identity and Entitlement controls across pods, namespaces, and clusters.

There’s no instrumentation project. No multi‑month integration backlog. It’s a single Helm install that turns your EKS cluster into a runtime‑enforced environment where AI apps, MCP, APIs, and agents are actually defended—not just observed.


Next Step

Get Operant running on your EKS cluster and see 3D Runtime Defense on live traffic in minutes:
Get Started