
How do I export Langtrace OpenTelemetry traces to Datadog (or Grafana/Honeycomb) instead of only using the Langtrace UI?
Most Langtrace users start by exploring traces in the Langtrace UI, but many teams also want to send those same OpenTelemetry traces to Datadog, Grafana, Honeycomb, or another observability backend they already use. The good news: because Langtrace is built on OpenTelemetry, you can export traces to multiple destinations at the same time with only minor configuration changes.
Below is a practical, implementation-focused guide to exporting Langtrace OpenTelemetry traces to Datadog, Grafana, Honeycomb, and other OTEL-compatible backends—without losing access to the Langtrace UI.
How Langtrace and OpenTelemetry work together
Langtrace integrates with your AI stack (CrewAI, DSPy, LlamaIndex, LangChain, and many LLM providers / VectorDBs). Under the hood, Langtrace instruments your LLM app and emits OpenTelemetry-compliant traces.
At a high level, you have three pieces:
1. Your AI app
  - Uses Langtrace SDKs (and frameworks like LangChain, LlamaIndex, CrewAI, DSPy).
  - Emits OpenTelemetry spans/traces for every request, tool call, vector search, etc.
2. OpenTelemetry pipeline
  - SDK + exporters (from inside your app), or
  - An OpenTelemetry Collector (a sidecar or centralized collector).
3. Backends / UIs
  - Langtrace UI (AI-native analysis, evaluations, agent performance, safety).
  - Datadog / Grafana Tempo / Honeycomb / others (infra-level observability).
Your goal is to configure the OpenTelemetry pipeline so that traces reach:
- Langtrace (using the Langtrace SDK / exporter), and
- Another backend (via OTLP or that vendor’s exporter / endpoint).
The core pattern is dual export, not either/or.
Prerequisites
Before exporting Langtrace OpenTelemetry traces to Datadog, Grafana, or Honeycomb, make sure you have:
1. A Langtrace project and API key
  - Create a project in Langtrace.
  - Generate an API key.
  - Install the appropriate SDK and initialize Langtrace with that API key, per the documentation.
2. One of the supported frameworks or stacks:
  - CrewAI
  - DSPy
  - LlamaIndex
  - LangChain
  - Plus your LLM provider / VectorDB of choice
3. Basic OpenTelemetry concepts:
  - Tracer: creates spans
  - Exporter: sends spans to a backend
  - BatchSpanProcessor: buffers and flushes spans
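To make those three roles concrete, here is a toy sketch of how they fit together. These are plain-Python stand-ins, not the real `opentelemetry-sdk` classes, but they mirror the flow: a tracer hands finished spans to processors, each processor buffers them, and each exporter ships a batch to one backend.

```python
# Toy stand-ins for the three OpenTelemetry roles (illustrative only,
# NOT the real opentelemetry-sdk API).

class ListExporter:
    """Exporter: sends finished spans to a backend (here, just a list)."""
    def __init__(self):
        self.received = []

    def export(self, spans):
        self.received.extend(spans)


class BatchProcessor:
    """BatchSpanProcessor: buffers spans and flushes them in batches."""
    def __init__(self, exporter, batch_size=2):
        self.exporter = exporter
        self.batch_size = batch_size
        self.buffer = []

    def on_end(self, span):
        self.buffer.append(span)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.exporter.export(self.buffer)
            self.buffer = []


class Tracer:
    """Tracer: creates spans and hands them to every registered processor."""
    def __init__(self, processors):
        self.processors = processors

    def span(self, name):
        for p in self.processors:
            p.on_end(name)


# Two processors -> two backends, fed by one tracer: the dual-export pattern
# this guide builds on.
langtrace_backend = ListExporter()
datadog_backend = ListExporter()
tracer = Tracer([BatchProcessor(langtrace_backend), BatchProcessor(datadog_backend)])

for name in ["llm.call", "vector.search"]:
    tracer.span(name)
```

The real SDK works the same way: registering multiple `BatchSpanProcessor`s on one `TracerProvider` sends every span to every exporter.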
Architecture options for exporting Langtrace traces
There are two common patterns for exporting Langtrace OpenTelemetry traces:
1. Direct from the application (SDK-level exporters)
  - Your app sends traces directly to:
    - Langtrace (via the Langtrace SDK), and
    - Datadog / Grafana / Honeycomb endpoints (via additional OTEL exporters or OTLP).
  - Best when you want a simple setup without running an OpenTelemetry Collector.
2. Via an OpenTelemetry Collector
  - Your app sends traces to an OTEL Collector once, using OTLP.
  - The collector fans out traces to:
    - Langtrace's ingestion endpoint, and
    - Datadog / Grafana / Honeycomb.
  - Best when you manage many services or want centralized routing, filtering, redaction, or sampling.
The rest of this guide focuses on the most common, OTEL-native approaches, and how they relate to Datadog, Grafana, and Honeycomb.
Exporting Langtrace OpenTelemetry traces to Datadog
Datadog has first-class support for OpenTelemetry, and you can integrate it in two primary ways.
Option 1: Use the Datadog Agent with OTLP (recommended)
- Configure your app to emit OTLP traces

If you're already using Langtrace's SDK and OpenTelemetry, you likely have something like this (the exact Langtrace initialization call may differ by SDK version, so check the Langtrace docs):

```python
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from langtrace import Langtrace

# Initialize Langtrace with your project API key
langtrace = Langtrace(api_key="LANGTRACE_API_KEY")

provider = TracerProvider()

# Export to Langtrace via the Langtrace SDK's configured exporter
provider.add_span_processor(BatchSpanProcessor(langtrace.exporter))

# Also export to Datadog via OTLP (to the Datadog Agent)
dd_exporter = OTLPSpanExporter(
    endpoint="http://localhost:4318/v1/traces",  # default OTLP HTTP endpoint
)
provider.add_span_processor(BatchSpanProcessor(dd_exporter))
```

Use the Datadog Agent's OTLP endpoint (port 4318 for HTTP, 4317 for gRPC).
- Configure the Datadog Agent to receive OTLP

In datadog.yaml (or the OTLP configuration file):

```yaml
apm_config:
  enabled: true

otlp_config:
  receiver:
    protocols:
      http:
        endpoint: 0.0.0.0:4318
      grpc:
        endpoint: 0.0.0.0:4317
```
Now your Langtrace OpenTelemetry traces are:
- Sent to Langtrace via the Langtrace SDK.
- Sent to Datadog via OTLP → Datadog Agent → Datadog APM.
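If you run the Agent in a container, the same OTLP settings can be supplied as environment variables. The sketch below follows Datadog's standard `DD_*` variable mapping for the Agent image; verify the exact variables against your Agent version before relying on it:

```shell
docker run -d --name dd-agent \
  -e DD_API_KEY="YOUR_DATADOG_API_KEY" \
  -e DD_APM_ENABLED=true \
  -e DD_OTLP_CONFIG_RECEIVER_PROTOCOLS_HTTP_ENDPOINT=0.0.0.0:4318 \
  -e DD_OTLP_CONFIG_RECEIVER_PROTOCOLS_GRPC_ENDPOINT=0.0.0.0:4317 \
  -p 4318:4318 -p 4317:4317 \
  gcr.io/datadoghq/agent:latest
```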
Option 2: Use a Datadog-specific OpenTelemetry exporter
For some languages, Datadog has offered a dedicated OTEL exporter (e.g., a DatadogExporter class), though Datadog now generally recommends OTLP ingestion via the Agent. Treat the following as a pseudo-setup with illustrative package and class names:

```python
from datadog_opentelemetry_exporter import DatadogExporter  # illustrative import
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from langtrace import Langtrace

langtrace = Langtrace(api_key="LANGTRACE_API_KEY")

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(langtrace.exporter))

dd_exporter = DatadogExporter(
    agent_url="http://localhost:8126",  # Datadog Agent APM endpoint
    service="my-llm-app",
    env="production",
)
provider.add_span_processor(BatchSpanProcessor(dd_exporter))
```
This gives you:
- Full AI-specific visibility in the Langtrace UI.
- Infrastructure and application visibility in Datadog APM.
Exporting Langtrace OpenTelemetry traces to Grafana (Tempo / Loki)
In the Grafana stack, Tempo handles traces and Loki handles logs. With OpenTelemetry, you export traces to Tempo and explore them in Grafana.
Option 1: Export directly to Grafana Tempo from your app
- Add an OTLP exporter pointing at Tempo

```python
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from langtrace import Langtrace

langtrace = Langtrace(api_key="LANGTRACE_API_KEY")

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(langtrace.exporter))

tempo_exporter = OTLPSpanExporter(
    endpoint="http://tempo:4318/v1/traces",  # adjust to your Tempo host
)
provider.add_span_processor(BatchSpanProcessor(tempo_exporter))
```
- Configure Tempo to accept OTLP

In tempo.yaml (or equivalent), OTLP receivers are configured under the distributor block:

```yaml
distributor:
  receivers:
    otlp:
      protocols:
        http:
        grpc:
```
This way, trace data from Langtrace’s OpenTelemetry instrumentation flows to both:
- Langtrace UI, and
- Grafana Tempo (visible in Grafana’s Trace explorer).
Option 2: Use OpenTelemetry Collector to fan out to Langtrace and Tempo
- App → OTLP → OTEL Collector

Your app only needs the OTLP exporter:

```python
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

provider = TracerProvider()

collector_exporter = OTLPSpanExporter(
    endpoint="http://otel-collector:4318/v1/traces",
)
provider.add_span_processor(BatchSpanProcessor(collector_exporter))
```
- Collector → Langtrace + Tempo

In otel-collector-config.yaml:

```yaml
receivers:
  otlp:
    protocols:
      http:
      grpc:

exporters:
  otlp/langtrace:
    endpoint: https://ingest.langtrace.com/v1/traces  # example, check the Langtrace docs
    headers:
      x-api-key: LANGTRACE_API_KEY
  otlp/tempo:
    endpoint: http://tempo:4317
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/langtrace, otlp/tempo]
```
Now you have centralized control: sampling, masking, routing—all before traces hit either backend.
Exporting Langtrace OpenTelemetry traces to Honeycomb
Honeycomb is OpenTelemetry-native: it exposes an OTLP ingest endpoint and authenticates requests with API keys.
Option 1: Direct OTLP export to Honeycomb
- Configure an OTLP exporter targeting Honeycomb

```python
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from langtrace import Langtrace

langtrace = Langtrace(api_key="LANGTRACE_API_KEY")

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(langtrace.exporter))

honeycomb_exporter = OTLPSpanExporter(
    endpoint="https://api.honeycomb.io/v1/traces",
    headers={
        "x-honeycomb-team": "YOUR_HONEYCOMB_API_KEY",
        "x-honeycomb-dataset": "llm-app-traces",  # or your dataset name
    },
)
provider.add_span_processor(BatchSpanProcessor(honeycomb_exporter))
```
This sends full OpenTelemetry traces to both Langtrace and Honeycomb.
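As an alternative to hard-coding the Honeycomb endpoint, the standard OpenTelemetry SDK environment variables can point the default OTLP exporter at Honeycomb without code changes (the values below are placeholders):

```shell
export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.honeycomb.io"
export OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=YOUR_HONEYCOMB_API_KEY"
export OTEL_SERVICE_NAME="my-llm-app"
```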
Option 2: Route via OpenTelemetry Collector
If you already use an OTEL Collector, you can configure Honeycomb as a separate exporter:
```yaml
exporters:
  otlp/langtrace:
    endpoint: https://ingest.langtrace.com/v1/traces  # example, check the Langtrace docs
    headers:
      x-api-key: LANGTRACE_API_KEY
  otlp/honeycomb:
    endpoint: https://api.honeycomb.io
    headers:
      x-honeycomb-team: YOUR_HONEYCOMB_API_KEY
      x-honeycomb-dataset: llm-app-traces

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/langtrace, otlp/honeycomb]
```
Your app sends OTLP traces once to the collector; the collector fans out to both backends.
Keeping Langtrace UI while exporting elsewhere
A common concern behind this question is that teams don't want to lose Langtrace's AI-focused capabilities while integrating with the observability stack they already run.
You don’t need to choose one or the other:
- Langtrace UI is specialized for:
  - Agent and tool call visibility
  - LLM latency, cost, and quality analysis
  - Safety, prompt iterations, and evaluation loops
- Datadog / Grafana / Honeycomb focus on:
  - Infrastructure health and SLOs
  - Cross-service tracing
  - Logs and metrics correlation
By using OpenTelemetry’s multi-exporter pattern, you can:
- Keep Langtrace as your AI observability and evaluation layer.
- Use Datadog / Grafana / Honeycomb as your infra and full-stack observability layer.
- Analyze the same trace IDs across both systems for end-to-end correlated debugging.
Best practices for multi-backend OpenTelemetry export
When exporting Langtrace OpenTelemetry traces to Datadog, Grafana, or Honeycomb:
1. Use consistent service and resource attributes
  - Set service.name, deployment.environment, and other resource attributes consistently so you can correlate traces across UIs.
2. Avoid excessive span processors
  - Use a BatchSpanProcessor for each exporter instead of SimpleSpanProcessor to reduce overhead.
3. Apply sampling thoughtfully
  - If you use an OTEL Collector, centralize sampling there so both Langtrace and your other backends receive the same sampled set.
  - For production, consider probabilistic sampling with a higher rate for critical endpoints.
4. Redact sensitive data
  - Apply attribute processors or custom processors in the OTEL Collector to remove PII before sending traces to external vendors, while preserving safe portions for Langtrace analytics.
5. Monitor export failures
  - Log exporter errors or emit metrics for them so you can see if the Datadog, Grafana, or Honeycomb endpoints are failing.
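For the attribute-consistency and redaction points above, an OTEL Collector pipeline can combine the resource and attributes processors. A sketch (the attribute keys are examples, and the exporter names assume they are defined elsewhere in your collector config):

```yaml
processors:
  resource:
    attributes:
      - key: deployment.environment
        value: production
        action: upsert        # ensure every span carries the environment
  attributes:
    actions:
      - key: user.email       # strip PII before export
        action: delete
      - key: llm.prompt       # hash instead of dropping entirely
        action: hash

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [resource, attributes]
      exporters: [otlp/langtrace, otlp/tempo]
```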
Summary
To export Langtrace OpenTelemetry traces to Datadog, Grafana, or Honeycomb—without giving up the Langtrace UI—you should:
- Instrument your app with Langtrace (API key + SDK, plus your framework like LangChain, LlamaIndex, CrewAI, DSPy).
- Add additional OTEL exporters (or an OpenTelemetry Collector) that send the same traces to:
- Datadog (via OTLP or Datadog exporter),
- Grafana Tempo (via OTLP),
- Honeycomb (via OTLP + API key headers).
- Prefer OTEL Collector for large deployments or complex routing, and use direct exporters for simpler setups.
This multi-backend OpenTelemetry approach lets you keep AI-native insights in Langtrace while integrating tightly into your existing observability ecosystem.