Operant security review: where can I find SOC 2 Type II info and details on data flow/what gets logged?

Security reviews shouldn’t require guesswork. If you’re evaluating Operant for production use, you need clear answers: where to find SOC 2 Type II information, what data flows through the platform, and exactly what gets logged.

This guide walks through how Operant handles those concerns and where to go for deeper, customer-ready documentation.


Where to find Operant’s SOC 2 Type II information

Operant is built for organizations that live under strict security and compliance regimes. As part of that, we maintain formal security documentation and third‑party attestations that are available under NDA.

Here’s how to access them.

1. Request the SOC 2 Type II report

Operant’s SOC 2 Type II attestation and related security due‑diligence documents are provided directly by our team as part of a security review or procurement cycle.

To request them:

  • Use the sales/demo flow

    • Visit: https://operant.ai
    • Click any “Get a Demo” / “Talk to Us” CTA.
    • Note that you’re requesting SOC 2 Type II documentation for a security review.
  • Or go straight to a meeting

    • Use this direct link: Get Started
    • In the meeting notes, include:
      • Your company name
      • Your role (security, platform, privacy, procurement, etc.)
      • That you need SOC 2 Type II, data flow, and logging/telemetry details for vendor review.

What you can expect to receive under NDA typically includes:

  • SOC 2 Type II report (latest period)
  • High‑level security architecture overview for the Runtime AI Application Defense Platform
  • Data flow diagrams for core products (Agent Protector, MCP Gateway, AI Gatekeeper™, API & Cloud Protector)
  • Logging and retention policy summary
  • Standard security questionnaire responses (CAIQ / SIG‑style, or your own spreadsheet)

If your organization uses a vendor security portal (e.g., Whistic, Vanta, Drata, OneTrust), Operant can also upload documents there as part of the review.


How Operant handles data: high‑level data flow

Operant is a runtime AI application defense platform, not a data lake or analytics warehouse. The platform is designed to:

  • Sit inline with live traffic (APIs, LLM calls, MCP interactions, agent workflows)
  • Build a runtime blueprint of your “cloud within the cloud” (internal APIs, MCP servers/tools, agents, identities)
  • Perform 3D Runtime Defense: Discovery, Detection, Defense
  • Enforce controls as data flows, avoiding retention of raw customer payloads for later use wherever possible

Below is a high‑level view of how data moves through Operant. Use this as a conceptual model; the formal diagrams will be in the security packet.

1. Data sources Operant touches

Operant operates at the runtime layer for:

  • APIs and services
    • Internal east–west APIs (microservices, “cloud within the cloud”)
    • North–south APIs exposed to the internet
    • Ghost/zombie APIs you may not realize are still reachable
  • AI & LLM traffic
    • LLM prompts and responses (including RAG and custom model endpoints)
    • AI app backends and inference APIs
  • MCP and agents
    • MCP servers, clients, and tools across your stack
    • Agentic workflows in dev tools, SaaS, and internal apps
    • Tool calls and “0‑click” automated actions
  • Cloud‑native runtime
    • Kubernetes clusters (EKS, AKS, GKE, OpenShift)
    • Service identities, pods, namespaces, and network flows

This runtime visibility is what allows Operant to detect and block attacks like prompt injection, jailbreaks, tool poisoning, data exfiltration, AI supply chain abuse, and rogue agents.
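To make "runtime visibility" concrete, here is a deliberately simplified sketch of what an inline prompt-injection check involves. This is a toy heuristic for illustration only; it is not Operant's detection engine, which goes well beyond pattern matching.

```python
import re

# Toy heuristics for illustration only; a production detection engine
# uses far richer analysis than simple regexes.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"disregard your system prompt", re.IGNORECASE),
]

def flag_prompt_injection(prompt: str) -> bool:
    """Return True when a prompt matches a known injection pattern."""
    return any(p.search(prompt) for p in INJECTION_PATTERNS)
```

The point of even a toy version is that the check happens inline, on live traffic, before the prompt ever reaches the model.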

2. Categories of data observed

In a typical deployment, Operant may see:

  • Metadata about traffic and identities

    • Source/destination service
    • Path/endpoint, HTTP method
    • Timestamps, response codes, latency
    • Auth mechanism (e.g., OAuth2/OIDC presence, identity claims)
    • MCP client/server identifiers, agent IDs, tool names
  • Selective request/response content (runtime only)

    • API parameters or payload fragments the policy engine needs to evaluate risk
    • LLM prompts/responses when you enable inline scanning (for injection, exfiltration, policy violations)
    • MCP tool inputs/outputs for threat detection and enforcement
  • Derived security signals

    • Anomaly scores and policy evaluation outcomes
    • OWASP Top 10 (API / LLM / K8s) violation flags
    • Agentic risk labels (e.g., “0‑click escalation,” “Shadow Escape” patterns)
    • Trust zone crossings (service X calling service Y across a boundary)

The default design principle is: collect only the data needed for runtime enforcement and security analytics, and keep sensitive content at the edge whenever possible, relying on redaction and tokenization instead of full payload storage.
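To illustrate the redaction-and-tokenization principle, here is a minimal sketch of masking one kind of sensitive field (an SSN, in this example) before a log event leaves the runtime edge. The pattern and token format are hypothetical assumptions, not Operant's implementation.

```python
import hashlib
import re

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def tokenize(value: str, salt: str = "tenant-salt") -> str:
    """Replace a sensitive value with a stable, non-reversible token.

    A real deployment would use per-tenant keys and a vetted scheme;
    this truncated hash is for illustration only.
    """
    digest = hashlib.sha256((salt + value).encode()).hexdigest()[:12]
    return f"tok_{digest}"

def redact_event(payload: str) -> str:
    """Mask sensitive fields before the event is emitted downstream."""
    return SSN_RE.sub(lambda m: tokenize(m.group()), payload)
```

Because the token is deterministic, downstream tools can still correlate events about the same entity without ever seeing the raw value.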


What Operant logs (and what it doesn’t)

Security teams typically ask two specific questions:

  1. What exactly does Operant log about my traffic?
  2. How is that logging controlled, stored, and shared?

Here’s how to think about those in the context of Operant’s runtime‑first architecture.

1. Operational and security events (primary logs)

These are the core events used to drive 3D Runtime Defense:

  • Discovery events

    • Newly observed APIs/endpoints, MCP servers, and tools
    • Detection of ghost/zombie APIs and unmanaged agents
    • Changes in the live blueprint (new services, new identities, new connections)
  • Detection events

    • Policy violations mapped to OWASP Top 10 for API/LLM/K8s
    • Prompt injection and jailbreak attempts
    • Tool poisoning or malicious tool chains in MCP and agents
    • Data exfiltration attempts (e.g., SSNs, secrets, PHI in outbound payloads)
    • Model theft or scraping indicators
    • Misuse of AI non‑human identities (NHIs) and agent overreach
  • Defense actions

    • Inline blocks and allow/deny list hits
    • Rate limiting or shaping applied to abusive flows
    • Auto-redaction or tokenization of sensitive fields
    • Policy updates applied (e.g., new trust zone boundaries, identity‑aware controls)

These logs are designed to be sent to your SIEM/observability tools (Datadog, Grafana, etc.) so that Operant becomes the runtime enforcement engine, while your existing stack provides long‑term storage, correlation, and dashboards.
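As a rough sketch of what a SIEM-bound detection event might look like, the snippet below builds a JSON record with the kinds of fields described above. The schema and field names are illustrative assumptions, not Operant's actual event format.

```python
import json
from datetime import datetime, timezone

def make_detection_event(rule: str, source: str, dest: str, action: str) -> str:
    """Build a SIEM-ready JSON detection event.

    Field names are illustrative, not Operant's actual schema.
    """
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "category": "detection",
        "rule": rule,                  # e.g. an OWASP LLM Top 10 mapping
        "source_service": source,
        "destination_service": dest,
        "enforcement_action": action,  # e.g. "blocked", "redacted", "alerted"
    }
    return json.dumps(event)
```

Note that the event carries the verdict and metadata, not the raw payload; long-term storage and correlation stay in your SIEM.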

2. Traffic content and payloads

This is usually the most sensitive concern: how much of the actual request/response content is logged?

Operant’s stance:

  • Runtime examination, minimal retention.
    Operant inspects payloads inline at runtime to detect prompt injection, data exfiltration, or policy violations, but does not need to permanently store full payloads to enforce controls.

  • Inline auto‑redaction by default where possible.
    The platform supports Inline Auto-Redaction of Sensitive Data:

    • SSNs, credit cards, tokens, secrets, and other sensitive fields can be masked or removed before logs are emitted to your SIEM or observability stack.
    • You control whether redacted data is stored at all.
  • Configurable log detail levels.
    You can tune whether logs include:

    • Only metadata and detection summaries
    • Select fields (with redaction)
    • Heavier forensic payloads for specific high‑risk zones or short‑term investigations

In practice, most customers run Operant with:

  • Full runtime inspection, combined with
  • Redacted logs for production environments, and
  • Strict scoping of any unredacted payload logging to narrow, short‑lived investigation windows.

Exact defaults and configurable options are documented in the deployment and security guides you receive during evaluation.
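The three detail levels above can be sketched as a simple export filter. The level names and event fields here are hypothetical, chosen only to mirror the tiers described in this section.

```python
# Hypothetical three-tier log detail policy; level names and fields
# are illustrative, not an Operant API.
LEVELS = {"metadata", "redacted_fields", "forensic"}

def shape_log(event: dict, level: str) -> dict:
    """Strip an event down to the configured detail level before export."""
    assert level in LEVELS
    # Metadata and detection summary fields are always safe to emit.
    shaped = {k: event[k] for k in ("timestamp", "endpoint", "verdict") if k in event}
    if level in ("redacted_fields", "forensic"):
        shaped["fields"] = event.get("redacted_fields", {})
    if level == "forensic":
        # Full payloads only for narrow, short-lived investigations.
        shaped["payload"] = event.get("payload")
    return shaped
```

A policy like this is what lets production environments default to redacted output while investigations temporarily opt in to heavier detail.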

3. Identity, access, and audit logging

Because Operant enforces policies across identities, you also get:

  • Admin and user activity logs

    • Logins and role assignments
    • Policy changes, allow/deny list updates
    • Trust zone creation/modification
    • Manual approvals of remediation or enforcement changes
  • API and integration logs

    • Calls from your CI/CD, IaC, or policy‑as‑code pipelines into Operant
    • MCP Catalog/Registry operations (tools registered, tools revoked)
    • Webhooks and outbound notifications

This audit layer is what supports compliance mapping to frameworks like CIS Benchmarks, PCI DSS v4, NIST 800, and EU AI Act requirements around traceability and runtime controls, not just static documentation.
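For a sense of what the audit layer's traceability records contain, here is a minimal audit-event sketch. The fields are illustrative and do not reflect Operant's actual schema.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditEvent:
    """Minimal audit record; fields are hypothetical, mirroring the
    activity types described above rather than Operant's schema."""
    actor: str       # admin user or service identity
    action: str      # e.g. "policy.update", "trust_zone.create"
    target: str      # object the action applied to
    timestamp: str   # ISO 8601, UTC

def record_audit(actor: str, action: str, target: str) -> dict:
    return asdict(AuditEvent(actor, action, target,
                             datetime.now(timezone.utc).isoformat()))
```

Records of this shape are what auditors look for when tracing who changed which enforcement policy, and when.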


How Operant uses, stores, and retains your data

Exact retention defaults are provided in the SOC 2 packet, but the architectural principles are consistent:

  • Customer‑owned logs.
    Wherever possible, Operant pushes security events to your logging stack (SIEM / observability). This keeps long‑term retention and log access under your control.

  • Scoped retention inside Operant.
    Operant may maintain a bounded window of telemetry internally to drive:

    • Behavior baselining for anomaly detection
    • Short‑term forensic investigations
    • Policy tuning and recommended remediations
      Retention periods are configurable within typical enterprise guardrails and documented as part of the security review.
  • No training on your production data.
    Operant is not an analytics product that mines customer payloads for product training. Runtime data is used to:

    • Improve threat detection within your tenant
    • Provide you with insights and recommended policies
      Any cross‑customer detection improvements rely on abstracted, de‑identified signals rather than raw payload content.
  • Encryption in transit and at rest.
    Standard modern controls apply:

    • TLS for data in transit
    • Strong encryption for data at rest
    • Strict access controls for operational personnel, audited and limited on a need‑to‑know basis

The SOC 2 Type II report and accompanying documents spell out encryption standards, key management, access review processes, and change management in detail.


Why Operant’s data model looks different from “observability” tools

If you’ve been burned by instrumentation projects that sprayed raw payloads into a data lake, it’s worth emphasizing: Operant is not another dashboard. It is a Runtime AI Application Defense Platform built to act inline:

  • It discovers APIs, agents, MCP tools, and identities in minutes (single‑step Helm install, zero instrumentation).
  • It detects attacks using modern taxonomies (OWASP Top 10 for API/LLM/K8s, plus agentic risks like “0‑click”).
  • It defends by blocking, redacting, and containing flows inside the application perimeter, not just at the edge.

That design naturally constrains data collection:

  • Operant needs enough context to make real‑time enforcement decisions.
  • It doesn’t need to hoard every byte of every request for eternity.
  • You maintain control over where logs live, how long they’re kept, and how much payload content is visible.

This is the same model that has led security leaders—like the CTO of Juniper Networks, the former NIST Chief of Cybersecurity, and security heads at companies like Cohere and ClickHouse—to trust Operant for runtime enforcement, especially for AI agents and MCP traffic.


How to structure your Operant security review

If you’re running a formal vendor review, here’s a practical checklist to move quickly while getting the depth you need.

1. Request formal documentation

Via this booking link or the “Get a Demo” routes on operant.ai, ask for:

  • Latest SOC 2 Type II report
  • Security whitepaper / architecture for:
    • Agent Protector
    • MCP Gateway
    • AI Gatekeeper™
    • API & Cloud Protector
  • Data flow and logging diagrams
  • Data processing / DPA documentation and standard security questionnaire responses

2. Validate data boundaries and logging behavior

In your evaluation call, specifically ask:

  • Which data fields Operant processes in your expected deployment mode
  • What log redaction options are recommended for your environment
  • How logs are exported to your SIEM/observability stack
  • Default retention periods inside Operant and how they can be tuned
  • Where MCP, agent, and LLM payloads are examined and how that content is handled

This ensures the answers are grounded in your actual architecture, not in a hypothetical one.

3. Map Operant to your compliance and risk program

Work with your security team to map Operant’s runtime controls to:

  • SOC 2 controls around access, logging, and incident response
  • PCI DSS v4, if you process payment data (focus on inline redaction and API traffic segmentation)
  • NIST 800 and EU AI Act expectations around monitoring, risk management, and technical safeguards for AI and agents
  • Your internal policies for data residency, log retention, and PII handling

Operant’s team can provide standard mappings to OWASP Top 10 (API/LLM/K8s) and other frameworks that often feed into your internal control catalog.


Bottom line: where to go from here

If you’ve read this far, you’re likely doing real due diligence—not just buying a dashboard. To recap:

  • SOC 2 Type II and detailed data‑flow docs are available under NDA directly from Operant.
  • Operant focuses on runtime enforcement, not bulk data collection. It inspects traffic inline, redacts sensitive content, and pushes security events into your existing SIEM/observability stack.
  • Logging behavior is configurable, with strong default controls for high‑sensitivity environments and clear support for your compliance programs.

To get the formal artifacts and walk through them live with someone who’s lived these operational challenges, book time here:

Get Started

Bring your security questionnaire, data‑flow questions, and logging requirements. We’ll map them directly to how Operant runs in your environment, on real traffic, in minutes—not months of instrumentation work.