How do we connect Datadog (or OpenTelemetry) to LaunchDarkly so we can monitor rollouts and alert on regressions?

Quick Answer: Connect Datadog or OpenTelemetry to LaunchDarkly so every rollout is monitored in real time, and regressions can automatically trigger alerts—or even roll back the offending feature flag—without a redeploy.

The Quick Overview

  • What It Is: A way to wire your observability stack (Datadog or OpenTelemetry) into LaunchDarkly so rollouts are tracked against live metrics, and issues can trigger alerts or flag rollbacks in production.
  • Who It Is For: Engineering, SRE, and platform teams who ship frequently, care about blast radius, and want rollouts that are both fast and reversible.
  • Core Problem Solved: You stop guessing which release caused a spike. Flags, metrics, and alerts are connected, so you can see rollout health and recover instantly when something goes wrong.

How It Works

You deploy your app with LaunchDarkly feature flags in place, then configure Datadog or OpenTelemetry to emit metrics and events that map directly to those flags. LaunchDarkly consumes those signals—either through native triggers (Datadog) or observability SDKs/custom events (OpenTelemetry)—to monitor each rollout. When metrics cross a threshold (error rate, latency, failed transactions), your observability tool can alert humans, and in supported flows, can call back into LaunchDarkly to disable or pause the feature flag automatically.

In practice, this breaks down into three phases:

  1. Instrument:

    • Wrap changes in LaunchDarkly feature flags.
    • Instrument your app with Datadog or OpenTelemetry.
    • Attach metrics and events to flags via LaunchDarkly SDKs/APIs.
  2. Connect & Configure:

    • For Datadog: set up a LaunchDarkly trigger so Datadog alerts can automatically toggle the associated flag.
    • For OpenTelemetry: configure exporters (to Datadog or another backend) and use LaunchDarkly observability plugins / custom events to tie signals back to flags.
  3. Monitor & React:

    • Watch rollout health in LaunchDarkly.
    • Use alerts from Datadog (or your OTel backend) to respond fast.
    • Optionally use automatic rollbacks: if a metric goes out of bounds, the flag is turned off—no redeploy required.
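The instrument phase above can be sketched in a few lines of Python. Everything here is illustrative: the flag lookup stands in for a LaunchDarkly SDK `variation()` call, and the metric sink stands in for a Datadog or OpenTelemetry client.

```python
# Sketch of the "instrument" phase: evaluate a flag, then tag every
# metric emitted on that code path with the flag key and variation.
# All names are hypothetical; in a real service the flag lookup comes
# from the LaunchDarkly SDK and the emission from a metrics client.

def evaluate_flag(flag_key: str, user_key: str, default: bool) -> bool:
    """Stand-in for a LaunchDarkly SDK variation() call."""
    rollout = {"checkout-new-pricing": True}  # pretend server-side state
    return rollout.get(flag_key, default)

def emit_metric(name: str, value: float, tags: dict, sink: list) -> None:
    """Stand-in for a StatsD/OTel metric emission; appends to a sink."""
    sink.append({"name": name, "value": value, "tags": dict(tags)})

def handle_checkout(user_key: str, sink: list) -> str:
    enabled = evaluate_flag("checkout-new-pricing", user_key, default=False)
    # Tagging with the flag key and variation is what makes the metric
    # "flag-aware": dashboards can later slice error rate by variation.
    tags = {"feature_flag": "checkout-new-pricing",
            "variation": "on" if enabled else "off"}
    emit_metric("checkout.requests", 1, tags, sink)
    return "new-pricing" if enabled else "legacy-pricing"

sink: list = []
result = handle_checkout("user-123", sink)
print(result, sink[0]["tags"])
```

The key design choice is that the same tags travel with every metric on the flagged code path, so any backend can later filter or group by flag.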

Features & Benefits Breakdown

| Core Feature | What It Does | Primary Benefit |
| --- | --- | --- |
| Flag-aware monitoring | Links Datadog / OpenTelemetry metrics directly to specific LaunchDarkly flags and rollouts. | You immediately know which feature caused a regression, instead of hunting through deploy logs. |
| Alert-driven flag triggers | Uses Datadog alerts (and generic triggers) to automatically disable or adjust flags when thresholds are breached. | Shrinks MTTR with automated rollbacks: no 2am redeploy, no manual hotfix. |
| Unified runtime view | Surfaces observability data, metrics, and flag status in LaunchDarkly’s UI and via SDK-powered events. | Gives engineers a single control surface for “release / observe / iterate” in production. |

Ideal Use Cases

  • Best for progressive rollouts: Because you can monitor a 1%, 10%, or 50% rollout in Datadog or your OTel backend, and automatically pause or roll back the flag if latency, error rate, or business KPIs degrade.
  • Best for incident response: Because on-call engineers can correlate a spike directly to a flag change, flip a kill switch, and confirm metrics recover—without waiting on another deploy.

Limitations & Considerations

  • Integration surface depends on your stack: Datadog has native flag triggers; OpenTelemetry usually flows through an APM backend (Datadog, Honeycomb, etc.) or custom pipelines. Plan whether you’ll use native triggers, webhooks, or custom automation.
  • Metrics design matters: If you don’t define clear, flag-level metrics (e.g., error rate per feature, p95 latency on a flagged endpoint), auto-rollback logic and alerts won’t be actionable. Invest in metrics naming and tagging up front.
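To make that concrete, here is one possible tagging convention, sketched in Python. The tag keys and metric names are assumptions, not a prescribed schema; the point is that a single helper should own the scheme so alerts and rollback rules stay consistent.

```python
# Sketch: one helper owns the flag-level tagging scheme, so monitors,
# dashboards, and auto-rollback rules all agree on the same tags.
# Tag keys follow common Datadog "key:value" conventions but are
# assumptions here, not an official schema.

def flag_metric_tags(flag_key: str, env: str, service: str) -> list:
    """Return Datadog-style tags scoping a metric to one flag."""
    return [
        f"feature_flag:{flag_key}",
        f"env:{env}",
        f"service:{service}",
    ]

def error_rate_query(metric: str, tags: list) -> str:
    """Build a Datadog-style monitor query scoped by the flag tags."""
    return f"sum:{metric}{{{','.join(tags)}}}.as_rate()"

tags = flag_metric_tags("checkout-new-pricing", "production", "checkout-api")
query = error_rate_query("trace.http.request.errors", tags)
print(query)
```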

Pricing & Plans

LaunchDarkly’s Datadog and OpenTelemetry connections rely on core platform capabilities: feature flags, observability integrations, and triggers. There’s no separate “Datadog integration SKU,” but which automation options you can use will depend on your plan’s access to advanced release and observability features.

  • Team / Growth-style plans: Best for product and engineering teams needing solid flagging, progressive rollouts, and basic integration with Datadog or OTel-backed observability to manually correlate metrics with flags.
  • Enterprise-style plans: Best for organizations needing automated triggers, broad governance (policies, approvals, audit logs), and deeper observability/AI control, where rollouts are guarded by formal thresholds and automated flag behavior.

(For exact pricing and feature availability, talk to the LaunchDarkly team.)

Frequently Asked Questions

How do we set up Datadog so it can automatically turn off a LaunchDarkly flag when errors spike?

Short Answer: Create a Datadog monitor on your key metric, configure its alert to call a LaunchDarkly trigger, and associate that trigger with the feature flag you want to protect.

Details:
At a high level, the workflow looks like this:

  1. Instrument your app:

    • Wrap the risky change in a LaunchDarkly feature flag (for example, checkout-new-pricing).
    • Use one of LaunchDarkly’s 25+ SDKs (plus MCP/CLI/IDE support) to evaluate the flag in your service.
    • Ensure your service is already sending metrics to Datadog (APM, logs, or custom metrics).
  2. Define the metric to guard:

    • In Datadog, identify or create a metric that represents “health” for this feature—e.g., service.error_rate, api.checkout.latency.p95, or a custom business metric.
    • Tag it with enough context to filter down to the flagged path if needed (service, endpoint, environment).
  3. Create a monitor in Datadog:

    • Build a Datadog monitor that triggers when your threshold is crossed, for example:
      • “Alert if error rate > 2% for 5 minutes”
      • “Alert if p95 latency > 800ms”
    • Configure the alerting conditions to avoid flapping (use appropriate windows and evaluation periods).
  4. Configure a LaunchDarkly trigger:

    • In LaunchDarkly, create a Datadog trigger and connect it to your Datadog account.
    • Associate this trigger with the specific feature flag. You’ll choose the action: often “turn flag off” or “revert to previous variation.”
    • LaunchDarkly supports triggers for Datadog, Dynatrace, Honeycomb, New Relic One, SignalFx, and a generic trigger for other tools.
  5. Wire the monitor to the trigger:

    • In the Datadog monitor’s notification settings, use the LaunchDarkly-provided webhook or integration endpoint.
    • Datadog will now call LaunchDarkly when the monitor fires, and LaunchDarkly will execute the configured action on the flag.
  6. Test in a safe environment:

    • In a non-production environment, deliberately trip the threshold (or temporarily lower it) to confirm:
      • Datadog monitor fires.
      • LaunchDarkly receives the trigger.
      • The flag switches off globally (LaunchDarkly propagates flag changes worldwide in under 200ms).

Now, when a rollout causes trouble, Datadog can effectively “hit the kill switch” on your behalf—no manual redeploy, no hotfix.
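For teams scripting their own automation instead of (or alongside) the native trigger, the "turn flag off" action maps to a semantic-patch call against LaunchDarkly's REST API. The sketch below builds the request without sending it; the endpoint, header, and `turnFlagOff` instruction follow LaunchDarkly's public API documentation, but verify them against the current reference before relying on this.

```python
import json
import urllib.request

LD_API = "https://app.launchdarkly.com/api/v2"

def build_turn_off_request(project: str, flag_key: str, env: str, token: str):
    """Build (but do not send) a semantic-patch request turning a flag off.

    Endpoint shape and instruction kind follow LaunchDarkly's public
    REST API docs; treat them as assumptions to double-check.
    """
    payload = {
        "environmentKey": env,
        "instructions": [{"kind": "turnFlagOff"}],
    }
    req = urllib.request.Request(
        url=f"{LD_API}/flags/{project}/{flag_key}",
        data=json.dumps(payload).encode(),
        method="PATCH",
        headers={
            "Authorization": token,
            # The domain-model parameter selects semantic-patch mode.
            "Content-Type": "application/json; domain-model=launchdarkly.semanticpatch",
        },
    )
    return req, payload

req, payload = build_turn_off_request(
    "default", "checkout-new-pricing", "production", "api-key-placeholder")
print(req.get_method(), payload["instructions"][0]["kind"])
```

Calling `urllib.request.urlopen(req)` would then execute the rollback; in practice you would fire this from whatever automation your alerting backend supports (webhook handler, function, or pipeline step).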

How does OpenTelemetry fit into monitoring LaunchDarkly-driven rollouts?

Short Answer: Use OpenTelemetry to generate traces, metrics, and logs tagged with feature/flag context, export those to your observability backend (Datadog or others), and use LaunchDarkly’s observability configuration and custom events to tie those signals back to specific flags.

Details:
OpenTelemetry acts as the instrumentation layer; LaunchDarkly is the runtime control plane:

  1. Instrument with OpenTelemetry:

    • Add OpenTelemetry SDKs to your services and configure exporters (e.g., to Datadog, Honeycomb, or another APM).
    • Capture spans and metrics around the code paths gated by LaunchDarkly flags (e.g., a new checkout flow, AI agent behavior, or a backend optimization).
  2. Attach flag context to telemetry:

    • When you evaluate a LaunchDarkly flag in code, include the flag key or variation as attributes/tags on your spans or metrics.
    • This gives you per-flag views in your observability backend: you can filter or break down by feature_flag:checkout-new-pricing or similar.
  3. Configure LaunchDarkly observability:

    • Initialize LaunchDarkly SDKs with observability plugins where available (for example, to send errors, logs, metrics, and traces as custom events).
    • Once configured, your application automatically starts sending observability data back to LaunchDarkly. You can review this in the LaunchDarkly UI under observability views.
  4. Monitor rollouts by flag:

    • In your OTel-backed APM, create dashboards and alerts that slice key metrics by flag attributes.
    • In LaunchDarkly, monitor rollout health and see how changes in flag status line up with observed regressions.
  5. Close the loop with triggers or automation:

    • If your OTel data flows into Datadog, you can still use LaunchDarkly’s Datadog triggers for auto-rollback.
    • If you use a different backend, use LaunchDarkly’s generic triggers or your own automation (webhooks, functions, or pipelines) to call LaunchDarkly’s API and disable or adjust flags when alerts fire.

By treating OpenTelemetry as the source of truth for performance and LaunchDarkly as the runtime control plane, you get a loop: flag on → metrics change → alert → flag off—without touching the codebase or redeploying.
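Step 2 above, attaching flag context to telemetry, can be sketched as follows. A real service would call `set_attribute()` on a live OpenTelemetry span; a plain Python class stands in here so the example runs anywhere. The attribute names mirror OpenTelemetry's feature-flag semantic conventions (`feature_flag.key`, `feature_flag.variant`), which you should confirm against the current spec.

```python
# Sketch: record flag context on the active span at evaluation time,
# so the backend can break latency/error metrics down per flag.
# FakeSpan is a stand-in for an OpenTelemetry span; the attribute
# names are assumed from OTel's feature-flag semantic conventions.

class FakeSpan:
    """Minimal stand-in for an OpenTelemetry span."""
    def __init__(self, name: str):
        self.name = name
        self.attributes: dict = {}

    def set_attribute(self, key: str, value: str) -> None:
        self.attributes[key] = value

def evaluate_with_telemetry(span, flag_key: str, variation: str) -> str:
    # Which flag and variation governed this code path becomes
    # queryable in the APM: e.g. filter spans by feature_flag.key.
    span.set_attribute("feature_flag.key", flag_key)
    span.set_attribute("feature_flag.variant", variation)
    return variation

span = FakeSpan("checkout")
evaluate_with_telemetry(span, "checkout-new-pricing", "on")
print(span.attributes)
```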

Summary

Connecting Datadog (or OpenTelemetry) to LaunchDarkly turns feature flags into guarded releases. You still move fast—deploy daily, roll out progressively—but now every change is observable and reversible in production. Metrics from Datadog or your OTel pipeline watch each rollout; when something crosses a threshold, you can alert humans, flip a kill switch, or let an automated trigger roll back the flag. No redeploys required, fewer 2am fire drills, and a much smaller blast radius when things go wrong.
