
How do I add Langtrace to a TypeScript/Node app (Express or Next.js) to trace LLM calls?
Shipping an LLM-powered feature is only half the battle; you also need clear traces of every model call to monitor cost, latency, and quality. Langtrace makes this straightforward for TypeScript/Node apps, including Express and Next.js, with an SDK you can initialize in just two lines of code.
This guide walks through how to add Langtrace to a TypeScript-based Node app, wire it into Express or Next.js, and use it to trace your LLM calls end-to-end.
1. Prerequisites
Before integrating Langtrace into your TypeScript/Node app, make sure you have:
- A Node.js project using:
- Express (REST API server), or
- Next.js (API routes or route handlers)
- TypeScript configured (`tsconfig.json`)
- An LLM integration (e.g., OpenAI, Anthropic, or via frameworks like LangChain, LlamaIndex, CrewAI, or DSPy)
- A Langtrace account and project
Langtrace supports a wide range of LLM providers, vector databases, and popular frameworks like:
- CrewAI
- DSPy
- LlamaIndex
- LangChain
You can trace both direct provider calls and framework-based pipelines.
2. Create a Langtrace project and API key
- Sign in to your Langtrace dashboard.
- Create a new project.
- Generate an API key for that project.
You’ll use this API key to initialize the SDK in your TypeScript/Node app.
Tip: Store the API key as an environment variable (for example, `LANGTRACE_API_KEY`) so you don’t hardcode secrets in your repo.
3. Install the Langtrace TypeScript/Node SDK
From your project root, install the Langtrace SDK:
```bash
npm install langtrace-typescript-sdk
# or
yarn add langtrace-typescript-sdk
# or
pnpm add langtrace-typescript-sdk
```
(The exact package name may differ; at the time of writing the SDK appears to be published on npm as `@langtrase/typescript-sdk`, so check Langtrace’s documentation for the official TypeScript SDK name if needed.)
Make sure TypeScript is already set up in your project:
```bash
npm install --save-dev typescript @types/node
```
4. Initialize Langtrace in a TypeScript project
The core setup is intentionally minimal. You can initialize Langtrace with roughly two lines of code and your API key.
Create a dedicated setup file, for example: src/lib/langtrace.ts:
```typescript
// src/lib/langtrace.ts
import { langtrace } from 'langtrace-typescript-sdk';

export function initLangtrace() {
  langtrace.init({
    apiKey: process.env.LANGTRACE_API_KEY as string,
  });
}
```
Or, in the simplest form:
```typescript
import { langtrace } from 'langtrace-typescript-sdk';

langtrace.init({ apiKey: process.env.LANGTRACE_API_KEY as string });
```
Ensure your environment variable is available to the Node process:
```bash
LANGTRACE_API_KEY=your_langtrace_api_key_here
```
In dev, you can put this in a .env file and load it via dotenv or your framework’s built-in env support.
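Because a missing or empty `LANGTRACE_API_KEY` usually surfaces only as silently absent traces, it can help to fail fast at startup. A minimal sketch; the `requireEnv` helper below is illustrative, not part of the Langtrace SDK:

```typescript
// Fail fast on a missing env var instead of discovering it later
// through silently missing traces.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value || value.trim() === '') {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage before initializing Langtrace:
// langtrace.init({ apiKey: requireEnv('LANGTRACE_API_KEY') });
```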
5. Adding Langtrace to an Express app
5.1 Basic Express setup with Langtrace initialization
Assume a typical src/index.ts:
```typescript
// src/index.ts
import express from 'express';
import dotenv from 'dotenv';
import { initLangtrace } from './lib/langtrace';

dotenv.config(); // Load .env
initLangtrace(); // Initialize Langtrace

const app = express();
app.use(express.json());

// Example health route
app.get('/health', (_req, res) => {
  res.json({ status: 'ok' });
});

const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
  console.log(`Server running on http://localhost:${PORT}`);
});
```
This ensures Langtrace is active before any request handling or LLM calls occur.
5.2 Tracing LLM calls in Express route handlers
Suppose you have an LLM endpoint called /api/chat that calls OpenAI:
```typescript
// src/routes/chat.ts
import { Router } from 'express';
import OpenAI from 'openai';
// You may also import langtrace helpers if needed:
// import { trace } from 'langtrace-typescript-sdk';

const router = Router();
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

router.post('/chat', async (req, res) => {
  try {
    const { messages } = req.body;

    // LLM call to be traced by Langtrace
    const completion = await openai.chat.completions.create({
      model: 'gpt-4.1-mini',
      messages,
    });

    res.json({ reply: completion.choices[0].message });
  } catch (error) {
    console.error('LLM error:', error);
    res.status(500).json({ error: 'LLM call failed' });
  }
});

export default router;
```
Because Langtrace is initialized globally and supports popular LLM providers and frameworks, it can track:
- Token usage and cost
- Latency for each LLM call
- Request/response data (subject to your configuration)
- Evaluation metrics across traces (accuracy, drift, etc.)
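To make the cost item above concrete, per-call cost can be derived from the token usage that providers return. The sketch below uses a made-up model name and placeholder prices (real rates vary by model and change over time); in practice Langtrace computes this for you in the dashboard:

```typescript
// Placeholder prices in USD per 1M tokens for a hypothetical model.
const PRICES_PER_MILLION_TOKENS = {
  'example-model': { input: 0.4, output: 1.6 },
} as const;

interface TokenUsage {
  prompt_tokens: number; // shape matches the OpenAI `usage` field
  completion_tokens: number;
}

// Estimate the USD cost of a single call from its token usage.
function estimateCostUSD(
  model: keyof typeof PRICES_PER_MILLION_TOKENS,
  usage: TokenUsage
): number {
  const price = PRICES_PER_MILLION_TOKENS[model];
  return (
    (usage.prompt_tokens / 1_000_000) * price.input +
    (usage.completion_tokens / 1_000_000) * price.output
  );
}
```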
Integrate the route in src/index.ts:
```typescript
import chatRoutes from './routes/chat';

app.use('/api', chatRoutes);
```
To add richer context, you can wrap route handlers or middleware with Langtrace’s trace helpers (refer to Langtrace docs). This allows tracking of:
- Parent/child traces
- Custom attributes (user ID, request ID, feature flag)
- Multi-step agent or tool pipelines
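As a dependency-free sketch of that kind of context, the snippet below merges service-wide defaults with per-request metadata before handing the result to whatever trace helper you use. The `TraceAttributes` shape and helper names are illustrative assumptions, not the Langtrace API; check the docs for the real attribute-setting calls:

```typescript
// Illustrative attribute shape; the real attribute names depend on
// what the Langtrace trace helpers accept.
interface TraceAttributes {
  userId?: string;
  requestId?: string;
  featureName?: string;
  experimentVariant?: string;
}

const DEFAULT_ATTRIBUTES: TraceAttributes = {
  featureName: 'chat',
};

// Merge defaults with per-request attributes (request values win).
function buildTraceAttributes(perRequest: TraceAttributes): TraceAttributes {
  return { ...DEFAULT_ATTRIBUTES, ...perRequest };
}

// Example: derive per-request attributes from incoming headers.
function attributesFromHeaders(
  headers: Record<string, string | undefined>
): TraceAttributes {
  return {
    userId: headers['x-user-id'],
    requestId: headers['x-request-id'],
  };
}
```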
6. Adding Langtrace to a Next.js app
Next.js can run TypeScript in both Node.js server and edge environments. You can trace LLM calls in:
- API Routes (`pages/api/*`)
- Route Handlers (`app/api/*/route.ts`)
6.1 Initialize Langtrace in Next.js
In Next.js, make sure you don’t initialize multiple times per request. A common pattern is to initialize in a shared server-only module.
For app router (Next 13+):
```typescript
// src/lib/langtrace.ts
import { langtrace } from 'langtrace-typescript-sdk';

let initialized = false;

export function initLangtrace() {
  if (!initialized) {
    langtrace.init({
      apiKey: process.env.LANGTRACE_API_KEY as string,
    });
    initialized = true;
  }
}
```
Call initLangtrace() from a server-only file that you import into your API handlers:
```typescript
// src/app/api/_shared/init-langtrace.ts
import { initLangtrace } from '@/lib/langtrace';

initLangtrace();
```
Then import that in your actual route handlers so initialization runs once on the server:
```typescript
// src/app/api/chat/route.ts
import '@/app/api/_shared/init-langtrace'; // Ensures Langtrace is initialized
import { NextRequest, NextResponse } from 'next/server';
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function POST(req: NextRequest) {
  try {
    const { messages } = await req.json();

    // LLM call traced by Langtrace
    const completion = await openai.chat.completions.create({
      model: 'gpt-4.1-mini',
      messages,
    });

    return NextResponse.json({
      reply: completion.choices[0].message,
    });
  } catch (error) {
    console.error('LLM error:', error);
    return new NextResponse('LLM call failed', { status: 500 });
  }
}
```
6.2 Using Next.js pages/api (older router)
If you are using pages/api:
```typescript
// pages/api/chat.ts
import type { NextApiRequest, NextApiResponse } from 'next';
import { initLangtrace } from '../../src/lib/langtrace';
import OpenAI from 'openai';

initLangtrace(); // Initialize once on the server

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse
) {
  if (req.method !== 'POST') {
    return res.status(405).json({ error: 'Method not allowed' });
  }

  try {
    const { messages } = req.body;

    // LLM call traced by Langtrace
    const completion = await openai.chat.completions.create({
      model: 'gpt-4.1-mini',
      messages,
    });

    res.status(200).json({
      reply: completion.choices[0].message,
    });
  } catch (error) {
    console.error('LLM error:', error);
    res.status(500).json({ error: 'LLM call failed' });
  }
}
```
7. Tracing framework-based LLM pipelines (LangChain, LlamaIndex, CrewAI, DSPy)
Langtrace has built-in support for multiple LLM orchestration frameworks. If your TypeScript/Node app uses them, you can get deep traces for chains, tools, agents, and retrieval.
7.1 Example with LangChain in TypeScript
```typescript
// src/routes/qa.ts (Express example)
import { Router } from 'express';
import { ChatOpenAI } from '@langchain/openai';
import { PromptTemplate } from '@langchain/core/prompts';
// Langtrace often provides integration-specific helpers for frameworks

const router = Router();

const llm = new ChatOpenAI({
  model: 'gpt-4.1-mini',
  apiKey: process.env.OPENAI_API_KEY,
});

const prompt = PromptTemplate.fromTemplate(
  'You are a Q&A bot. Answer: {question}'
);

router.post('/qa', async (req, res) => {
  try {
    const { question } = req.body;

    // Compose the prompt and model into a runnable chain
    const chain = prompt.pipe(llm);
    const answer = await chain.invoke({ question });

    // The entire chain execution, including subcalls, can be traced by Langtrace
    res.json({ answer: answer.content });
  } catch (error) {
    console.error('Chain error:', error);
    res.status(500).json({ error: 'Chain execution failed' });
  }
});

export default router;
```
For DSPy, LlamaIndex, or CrewAI, follow their standard usage patterns; Langtrace supports them out of the box and can capture traces of:
- Model calls inside pipelines
- Tool/function calls
- Retrieval and vector store interactions
- Multi-step agent runs
8. Verifying traces in the Langtrace dashboard
Once your app is live and making LLM calls:
- Trigger a few requests from your UI, Postman, or `curl`.
- Open your Langtrace dashboard and select the project you created.
- You should see:
  - Parent traces for each high-level request (e.g., a chat endpoint or agent run)
  - Child traces for each LLM call, tool invocation, or vector search
  - Metrics such as:
    - Accuracy aggregated across evaluations (e.g., 303 evaluations)
    - Token cost (e.g., $6,200 over a $10,000 budget)
    - Inference latency (e.g., 75 ms average, max 120 ms)
  - Trends over time as you deploy changes
Langtrace gives you everything you need out of the box to understand and improve your LLM app:
- Track vital metrics (token cost, latency, error rate)
- Monitor quality (accuracy, regression after releases)
- Stay within budgets and SLAs
9. Best practices for TypeScript/Node integration
To get the most value from Langtrace in a TypeScript/Node app (Express or Next.js), keep these tips in mind:
- Centralize initialization
  Initialize Langtrace once in a shared module that runs at server startup, not per request.
- Use environment variables
  Never hardcode your Langtrace API key. Use `.env` and keep it out of version control.
- Add context to traces
  Where Langtrace supports it, pass metadata such as `userId`, `requestId`, `route` or `featureName`, and `experimentVariant`. This makes it easier to slice metrics by user cohort, feature, or A/B test group.
- Trace all critical LLM calls
  Make sure every major LLM interaction (chat, RAG responses, agents, tools) passes through code that Langtrace can observe.
- Monitor before and after releases
  Use the dashboard to compare:
  - Accuracy before vs. after a new prompt or model
  - Cost trends vs. your budget
  - Latency distribution vs. your SLAs
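On the latency point above: averages hide tail behavior, so SLA comparisons are usually done on percentiles such as p95. Langtrace surfaces latency in its dashboard; the small dependency-free helper below just illustrates the math, using the simple nearest-rank method:

```typescript
// Nearest-rank percentile of latency samples (in ms).
// Simple and fine for dashboards; not a formal SLA calculation.
function latencyPercentile(samples: number[], p: number): number {
  if (samples.length === 0) {
    throw new Error('No latency samples');
  }
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank, 1) - 1];
}
```

Comparing p95 before and after a release is typically more telling than comparing averages, since a regression often lands entirely in the tail.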
10. Next steps
To continue improving your TypeScript/Node LLM app with Langtrace:
- Explore the official documentation for the TypeScript SDK.
- Connect framework integrations (CrewAI, DSPy, LlamaIndex, LangChain) used in your app.
- Join the Langtrace Discord community to ask integration questions and learn from other users.
Once you’ve wired Langtrace into your Express or Next.js app, every LLM call becomes observable: you can see what’s happening, how much it costs, and how well it’s working—making it far easier to debug, optimize, and scale your AI features.