How do I enable audit logging in Redpanda and export logs to our SIEM?

If you’re moving agents and streaming workloads into production, audit logging in Redpanda isn’t optional—it’s the control surface that lets you see, explain, and export every critical action before compliance teams start asking hard questions. The good news: Redpanda ships with enterprise-grade audit logging that is straightforward to wire into your SIEM or observability stack.

Quick Answer: Enable audit logging in Redpanda at the cluster level, configure which events and resources to capture, then ship those logs to your SIEM using standard log shipping (filebeat/vector/Fluent Bit) or Kafka-compatible connectors. This gives you a permanent, queryable record of who did what, when, and from where—essential for regulated environments and agentic workloads.


The Quick Overview

  • What It Is: Redpanda’s audit logging is a built-in capability that records security- and governance-relevant events (authentication, authorization, API calls, configuration changes, and data access patterns) from your Redpanda clusters.
  • Who It Is For: Platform teams, security engineers, and compliance owners who need provable traceability across Kafka-compatible streaming traffic, including AI agents reading and mutating operational data.
  • Core Problem Solved: You need an immutable, centralized record of all critical Redpanda actions and access decisions, exported to your SIEM for correlation and alerting—without bolting on fragile sidecar scripts or custom logging code.

How It Works

Redpanda audit logging sits alongside your normal broker logs, but its purpose is governance, not debugging. When enabled, the cluster emits structured audit events whenever:

  • A principal authenticates (OIDC, Kerberos, TLS).
  • An action is authorized or denied (via ACLs/RBAC).
  • Sensitive operations occur: topic/ACL changes, configuration updates, and (depending on config) produce/consume activity on key topics.
  • Console or API users perform privileged actions (backed by SSO/RBAC).

From there, the flow is simple:

  1. Generate: Turn on audit logging in the Redpanda configuration, choosing which event categories and components to log.
  2. Collect & Transform: Use your preferred log shipper (Filebeat, Fluent Bit, Vector, etc.) or Kafka-compatible connector to read audit logs from brokers or the logging backend.
  3. Export & Correlate: Send audit events into your SIEM (Splunk, Datadog, Elastic, Sumo Logic, etc.), map fields to your detection rules, and build dashboards/alerts around Redpanda activities.

This lets you connect Redpanda’s internal identity and policy decisions—OIDC login, ACL checks, RBAC roles—with broader enterprise context in your SIEM.
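
The structured events described above are typically line-delimited JSON. As a sketch, here is what ingesting one such event might look like—the field names (`principal`, `action`, `auth_result`, and so on) are illustrative assumptions, not Redpanda's actual schema, so check the audit event format documented for your version:

```python
import json
from datetime import datetime

# Hypothetical NDJSON audit event -- field names are illustrative,
# not Redpanda's actual schema.
raw_line = (
    '{"timestamp": "2024-05-01T12:34:56Z", "principal": "svc-orders",'
    ' "source_ip": "10.0.4.17", "action": "acl.alter",'
    ' "resource_type": "topic", "resource_name": "payments",'
    ' "auth_result": "denied"}'
)

event = json.loads(raw_line)

# The who/what/when/where that a SIEM correlates on:
who = event["principal"]
what = f'{event["action"]} on {event["resource_type"]}/{event["resource_name"]}'
when = datetime.fromisoformat(event["timestamp"].replace("Z", "+00:00"))
outcome = event["auth_result"]

print(who, what, when.isoformat(), outcome)
```

Whatever the exact schema, the point is that every event carries identity, action, resource, and outcome—the raw material for SIEM correlation.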


Step 1: Plan Your Audit Scope

Before flipping the switch, decide what you actually need to log. Over-logging can hurt both storage and signal quality.

Typical scope in production environments:

  • Must-log:
    • Authentication successes/failures (OIDC, Kerberos, SASL, mTLS).
    • Authorization allows/denies for admin operations (topic create/delete, ACL changes, config updates).
    • Redpanda Console SSO logins and privileged UI actions.
  • Often-log:
    • Admin API calls (cluster settings, partition reassignment, tiered storage settings).
    • Produce/consume attempts on sensitive topics (payments, PII, security events).
  • Sometimes-log (with caution):
    • High-volume produce/consume on non-sensitive topics (for usage analytics or capacity planning), usually sampled or filtered.

Align this with your security policies and regulatory requirements (SOX, PCI, HIPAA, etc.).


Step 2: Enable Audit Logging on the Cluster

Cluster-level audit logging is controlled through Redpanda’s configuration (typically redpanda.yaml or the equivalent when you’re using Helm/Kubernetes). The exact keys and syntax may vary by version, but the pattern looks like this:

# redpanda.yaml (conceptual example)
redpanda:
  # … existing cluster config …

  audit_logging:
    enabled: true                # Turn on audit logging
    log_file: /var/log/redpanda/audit.log
    level: info                  # info or more specific if supported
    include:
      - auth                     # Authentication events
      - acl                      # Authorization checks/ACL changes
      - api                      # Admin & management API calls
      - topic                    # Topic lifecycle events
    exclude:
      - metrics                  # Example: noisy/non-governance events
    redact:
      headers: true              # Optionally redact message headers
      keys: true                 # Optionally redact message keys for sensitive topics

Key concepts:

  • Enable flag: Turns audit logging on for the broker.
  • Destination: Often a dedicated audit log file, separate from broker/system logs.
  • Categories: Event types you include or exclude (auth, ACL, topic, admin API).
  • Redaction: Prevents sensitive payload content from leaking into logs while still retaining the who/what/when/where metadata.

After editing the config:

# Restart the Redpanda service (Linux example)
sudo systemctl restart redpanda

# Or in Kubernetes, let the StatefulSet roll out changes
kubectl rollout restart statefulset redpanda

Verify that the audit log file is created and is being written to:

tail -f /var/log/redpanda/audit.log

You should see structured lines representing events like successful OIDC logins or ACL changes.


Step 3: Decide on a Collection Strategy

You have two primary ways to get audit logs into your SIEM:

  1. Log-file shipping (most common and straightforward).
  2. Streaming via Kafka-compatible connectors (useful if your SIEM ingests from Kafka).

Option A: File-based Log Shipping

Use a log shipper on each Redpanda node that tails the audit log file and forwards entries to your SIEM or to a centralized log aggregator.

Example with Filebeat (Elastic stack):

# filebeat.yml
filebeat.inputs:
  - type: filestream
    id: redpanda-audit
    enabled: true
    paths:
      - /var/log/redpanda/audit.log
    parsers:
      - ndjson:
          overwrite_keys: true
          add_error_key: true
    fields:
      app: redpanda
      log_type: audit
    fields_under_root: true

output.elasticsearch:
  hosts: ["https://your-es-endpoint:9200"]
  username: "${ES_USER}"
  password: "${ES_PASSWORD}"

Example with Fluent Bit (for Splunk/Datadog/SIEMs via HTTP):

[INPUT]
    Name              tail
    Tag               redpanda.audit
    Path              /var/log/redpanda/audit.log
    Parser            json
    Read_from_Head    False

[FILTER]
    Name              modify
    Match             redpanda.audit
    Add               app redpanda
    Add               log_type audit

[OUTPUT]
    Name              http
    Match             redpanda.audit
    Host              your-siem-endpoint
    Port              443
    URI               /collector
    Format            json
    tls               On
    tls.verify        On
    Header            Authorization Bearer your-token

Tune parsers to match the actual log format (JSON/NDJSON recommended for SIEMs).

Option B: Kafka-Compatible Streaming to SIEM

If your SIEM can ingest from Kafka or you already use Redpanda as the landing zone for logs:

  1. Write audit logs to a topic (design pattern):

    • Either configure Redpanda to send audit events into an internal topic (if supported in your version), or
    • Use a sidecar/log agent that reads from the log file and produces into a redpanda-audit-events topic using a Kafka producer or connector.
  2. Use a connector to push to SIEM:

    • Kafka Connect → Splunk, Datadog, or HTTP Sink.
    • Vector → Kafka source → SIEM sink.

Conceptually:

Redpanda brokers
   └─> audit.log
       └─> log shipper / producer
               └─> topic: redpanda-audit-events
                       └─> connector / consumer
                               └─> SIEM index / source type

This approach keeps everything in the streaming ecosystem and allows you to reuse Redpanda’s durability and buffering capabilities.
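
The sidecar/log-agent half of this pattern boils down to tailing the audit file and producing each complete NDJSON line into a topic. Here is a minimal sketch of that tailing logic; the `redpanda-audit-events` topic name and the producer call in the docstring are assumptions for illustration, and a production agent would also persist its file offset across restarts:

```python
import io
import json

def iter_new_audit_lines(fh):
    """Yield parsed NDJSON events appended to an open audit log handle.

    A real sidecar would keep the handle open, remember its offset
    across restarts, and hand each event to a Kafka producer, e.g.:
        producer.send("redpanda-audit-events", value=event)
    (topic name and producer library are assumptions, not defaults).
    """
    buf = ""
    while True:
        chunk = fh.read()
        if not chunk:
            break
        buf += chunk
        # Only emit complete lines; a partial trailing line stays buffered.
        while "\n" in buf:
            line, buf = buf.split("\n", 1)
            if line.strip():
                yield json.loads(line)

# Simulated audit log content for illustration:
fake_log = io.StringIO(
    '{"action": "auth.login", "auth_result": "failure"}\n'
    '{"action": "topic.create", "auth_result": "success"}\n'
)
events = list(iter_new_audit_lines(fake_log))
print(len(events), events[0]["action"])
```

Buffering until a newline matters: tailing mid-write would otherwise hand the producer truncated, unparseable JSON.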


Step 4: Normalize Audit Events in Your SIEM

Once events land in your SIEM, the value comes from consistent fields you can query and alert on. Aim for a schema like:

  • timestamp – event time
  • cluster – Redpanda cluster name/ID
  • node – broker hostname
  • principal – user, service account, or agent identity (OIDC subject, Kerberos principal)
  • source_ip – requester address
  • action – e.g., topic.create, acl.alter, auth.login, produce, consume
  • resource_type – topic, cluster, group, transaction, etc.
  • resource_name – topic name, group id, etc.
  • auth_result – success / failure / denied
  • reason – error or policy reason if denied
  • client_id – Kafka/Redpanda client identifier
  • correlation_id – if available, to join with other telemetry

Configure field extraction and mapping in your SIEM, for example:

  • Splunk: sourcetype + props.conf / transforms.conf.
  • Datadog: log pipelines + Grok/JSON parsers.
  • Elastic: ingest pipelines with JSON processor.
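
If you normalize before shipping (e.g., in a Vector transform or a small sidecar) rather than in the SIEM, the mapping is a straightforward field projection. A sketch, where the source field names are assumptions to be adjusted to the JSON your Redpanda version actually emits:

```python
def normalize_audit_event(raw: dict) -> dict:
    """Project a raw audit event onto the target SIEM schema.

    Source field names here are assumptions -- adjust the mapping
    to the actual JSON your Redpanda version emits.
    """
    return {
        "timestamp": raw.get("timestamp"),
        "cluster": raw.get("cluster", "unknown"),
        "node": raw.get("node", "unknown"),
        "principal": raw.get("principal", "anonymous"),
        "source_ip": raw.get("source_ip"),
        "action": raw.get("action"),
        "resource_type": raw.get("resource_type"),
        "resource_name": raw.get("resource_name"),
        "auth_result": raw.get("auth_result", "unknown"),
        "reason": raw.get("reason"),
        "client_id": raw.get("client_id"),
        "correlation_id": raw.get("correlation_id"),
    }

normalized = normalize_audit_event(
    {"timestamp": "2024-05-01T12:34:56Z", "principal": "svc-orders",
     "action": "acl.alter", "auth_result": "denied"}
)
print(normalized["principal"], normalized["cluster"])
```

Explicit defaults ("unknown", "anonymous") keep detection rules from silently skipping events with missing fields.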

Step 5: Build Detection Rules and Dashboards

With normalized audit events in place, create SIEM content that reflects how Redpanda is being used in your environment—especially by agents and privileged services.

Common rules:

  • Excessive authorization failures:

    • Multiple auth.login failures for the same principal or IP in a short window.
    • Multiple acl.denied events for a normally quiet user/service account.
  • Sensitive topic misuse:

    • Produce or consume to topics tagged as PII/payments from unexpected principals or locations.
    • Writes to “immutable” or compliance log topics.
  • Privilege escalation and configuration drift:

    • New ACLs granting ALL or WRITE on broad resource patterns (e.g., *).
    • Topic retention or tiered storage changes on regulated datasets.
    • New RBAC role assignments via Redpanda Console SSO.
  • Agent behavior anomalies:

    • An agent identity suddenly touching topics outside its declared toolset.
    • High-volume writes from an AI agent after a configuration change.
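
Most of these rules reduce to counting matching events per principal inside a sliding time window. Your SIEM will express this declaratively, but as a sketch of the underlying logic (thresholds and the event tuple shape are illustrative assumptions):

```python
from collections import defaultdict, deque

def detect_excessive_failures(events, threshold=5, window_s=60):
    """Flag principals with >= threshold auth.login failures in window_s.

    events: iterable of (epoch_seconds, principal, action, auth_result)
    tuples in time order. Threshold and window are illustrative defaults.
    """
    windows = defaultdict(deque)  # principal -> recent failure timestamps
    alerts = []
    for ts, principal, action, result in events:
        if action != "auth.login" or result != "failure":
            continue
        q = windows[principal]
        q.append(ts)
        # Drop failures that fell out of the window.
        while q and ts - q[0] > window_s:
            q.popleft()
        if len(q) >= threshold:
            alerts.append((ts, principal, len(q)))
    return alerts

# Six failures in six seconds from one principal trips the rule:
stream = [(i, "agent-7", "auth.login", "failure") for i in range(6)]
alerts = detect_excessive_failures(stream)
print(alerts)
```

The same sliding-window shape covers the acl.denied and sensitive-topic rules above; only the filter predicate changes.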

Dashboards should highlight:

  • Top principals by admin actions.
  • Denied vs. allowed admin operations over time.
  • Changes to ACLs, RBAC roles, and critical topic configs.
  • Authentication trends (success/fail) by IDP (OIDC, Kerberos).

This closes the loop: Redpanda enforces identity/authorization; your SIEM observes patterns and flags misuse.


Features & Benefits Breakdown

  • Built-in audit logging: Records security- and governance-relevant events at the broker. Primary benefit: a central, tamper-evident trace of every critical Redpanda action.
  • OIDC/Kerberos + RBAC: Ties audit events to real identities and roles. Primary benefit: clear “who did what, when” for compliance and forensics.
  • Kafka compatibility: Streams audit data via familiar Kafka APIs and tooling. Primary benefit: reuse of existing connectors, pipelines, and SIEM integrations.

Ideal Use Cases

  • Best for regulated, audit-heavy workloads: Because it gives you a durable, queryable log of all security-relevant activity—perfect for SOX, PCI, HIPAA, and internal audit teams who require replayable evidence.
  • Best for AI/agentic data planes: Because it lets you observe and export every agent action—reads and writes—into your SIEM, so you can enforce “govern before it happens” policies and still keep a permanent record and kill switch.

Limitations & Considerations

  • Log volume: In high-throughput environments, especially if you log produce/consume events, audit logs can grow quickly. Use category filters, redaction, and sampling to keep volume and storage in check.
  • Payload sensitivity: Even though Redpanda supports redaction, you must design your schema to avoid writing sensitive payloads or secrets into logs. Treat audit logs as sensitive and secure them with TLS, ACLs, and restricted SIEM access.

Pricing & Plans

Audit logging is part of Redpanda’s enterprise-grade security posture, alongside:

  • Role-based access control (RBAC).
  • OIDC and Kerberos authentication.
  • TLS encryption and fine-grained ACLs.
  • FIPS-compliant binary and 24x7 commercial support with SLAs.
  • Audit logging designed to meet compliance requirements.

Deployment and pricing options:

  • Self-Managed / Enterprise: Best for organizations needing strict data sovereignty (own VPC, on-prem, air-gapped) with deep SIEM integration and compliance mandates.
  • Managed (including BYOC): Best for teams that want Kafka-compatible streaming and audit logging without owning the underlying infrastructure, but still need logs exported into their central SIEM.

For exact pricing and feature availability, see Redpanda’s pricing page or talk to Redpanda sales.


Frequently Asked Questions

Do I need a separate plugin or sidecar to enable audit logging in Redpanda?

Short Answer: No. Audit logging is a built-in capability of Redpanda.

Details: You enable audit logging at the cluster/broker configuration level—there’s no need for extra in-cluster plugins or custom broker extensions. You might use a log shipper or connector to export logs to your SIEM, but the generation of audit events themselves is native to Redpanda and is covered by its enterprise security feature set.


Can I control which actions and topics are included in audit logs?

Short Answer: Yes. You can scope audit logging by event category and, in many setups, by resource.

Details: Redpanda’s audit logging configuration allows you to choose which categories to log (auth, ACL, topic, admin API, etc.) and to use filters or redaction to avoid capturing unnecessary or sensitive details. This lets you focus on high-value events—like authentication attempts, ACL changes, and admin operations—while keeping both log volume and privacy risk under control. For extremely sensitive topics, you can log access attempts and outcomes while redacting keys/headers so that your SIEM sees behavior, not raw data.


Summary

Enabling audit logging in Redpanda and exporting logs to your SIEM turns your Kafka-compatible streaming layer into an auditable, governed system—ready for agents, compliance reviews, and incident response. You:

  • Turn on audit logging in the Redpanda cluster.
  • Tune which events are recorded and how much detail is captured.
  • Ship those logs into your SIEM via file-based shipping or Kafka-compatible connectors.
  • Normalize fields, build detections, and get dashboards that tell you exactly who did what, when, and where.

This is how you move from “we hope our streaming stack is behaving” to “we can replay every critical action, correlate it in our SIEM, and shut down anything that misbehaves.”


Next Step

Get Started