---
title: "Langfuse Exporter | Tracing | Observability"
description: "Send traces to Langfuse for LLM observability and analytics"
---

# Langfuse Exporter

[Langfuse](https://langfuse.com/) is an open-source observability platform specifically designed for LLM applications. The Langfuse exporter sends your traces to Langfuse, providing detailed insights into model performance, token usage, and conversation flows.

## Installation

```bash npm2yarn
npm install @mastra/langfuse@beta
```

## Configuration

### Prerequisites

1. **Langfuse Account**: Sign up at [cloud.langfuse.com](https://cloud.langfuse.com) or deploy self-hosted
2. **API Keys**: Create a public/secret key pair in Langfuse Settings → API Keys
3. **Environment Variables**: Set your credentials

```bash title=".env"
LANGFUSE_PUBLIC_KEY=pk-lf-xxxxxxxxxxxx
LANGFUSE_SECRET_KEY=sk-lf-xxxxxxxxxxxx
LANGFUSE_BASE_URL=https://cloud.langfuse.com  # Or your self-hosted URL
```
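Because the exporter cannot send traces without credentials, it can be useful to fail fast at startup when a variable is missing. This is an optional sketch, not part of the exporter API; the `requireEnv` helper is a name introduced here for illustration:

```typescript
// Fail fast at startup if a required credential is missing.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (value === undefined || value === "") {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage at startup (assumes the variables from .env above are set):
// const publicKey = requireEnv("LANGFUSE_PUBLIC_KEY");
```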

### Basic Setup

```typescript title="src/mastra/index.ts"
import { Mastra } from "@mastra/core";
import { Observability } from "@mastra/observability";
import { LangfuseExporter } from "@mastra/langfuse";

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      langfuse: {
        serviceName: "my-service",
        exporters: [
          new LangfuseExporter({
            publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
            secretKey: process.env.LANGFUSE_SECRET_KEY!,
            baseUrl: process.env.LANGFUSE_BASE_URL,
            options: {
              environment: process.env.NODE_ENV,
            },
          }),
        ],
      },
    },
  }),
});
```

## Configuration Options

### Realtime vs Batch Mode

The Langfuse exporter supports two modes for sending traces:

#### Realtime Mode (Development)

Traces appear immediately in the Langfuse dashboard, which is ideal for debugging:

```typescript
new LangfuseExporter({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,
  realtime: true, // Flush after each event
});
```

#### Batch Mode (Production)

Traces are buffered and sent in batches, which gives better performance in production:

```typescript
new LangfuseExporter({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,
  realtime: false, // Default - batch traces
});
```

### Complete Configuration

```typescript
new LangfuseExporter({
  // Required credentials
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,

  // Optional settings
  baseUrl: process.env.LANGFUSE_BASE_URL, // Default: https://cloud.langfuse.com
  realtime: process.env.NODE_ENV === "development", // Dynamic mode selection
  logLevel: "info", // Diagnostic logging: debug | info | warn | error

  // Langfuse-specific options
  options: {
    environment: process.env.NODE_ENV, // Shows in UI for filtering
    version: process.env.APP_VERSION, // Track different versions
    release: process.env.GIT_COMMIT, // Git commit hash
  },
});
```

## Prompt Linking

You can link LLM generations to prompts stored in [Langfuse Prompt Management](https://langfuse.com/docs/prompt-management). This enables version tracking and metrics for your prompts.

### Using the Helper (Recommended)

Use `withLangfusePrompt` with `buildTracingOptions` for the cleanest API:

```typescript title="src/agents/support-agent.ts"
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import { buildTracingOptions } from "@mastra/observability";
import { withLangfusePrompt } from "@mastra/langfuse";
import { Langfuse } from "langfuse";

const langfuse = new Langfuse({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,
});

// Fetch the prompt from Langfuse Prompt Management
const prompt = await langfuse.getPrompt("customer-support");

export const supportAgent = new Agent({
  name: "support-agent",
  instructions: prompt.prompt, // Use the prompt text from Langfuse
  model: openai("gpt-4o"),
  defaultGenerateOptions: {
    tracingOptions: buildTracingOptions(withLangfusePrompt(prompt)),
  },
});
```

The `withLangfusePrompt` helper automatically extracts `name`, `version`, and `id` from the Langfuse prompt object.

### Manual Fields

You can also pass the fields manually if you're not using the Langfuse SDK:

```typescript
// Link by name and version
const tracingOptions = buildTracingOptions(
  withLangfusePrompt({ name: "my-prompt", version: 1 }),
);

// Or link by ID alone
const tracingOptionsById = buildTracingOptions(
  withLangfusePrompt({ id: "prompt-uuid-12345" }),
);
```

### Prompt Object Fields

The prompt object supports these fields:

| Field | Type | Description |
|-------|------|-------------|
| `name` | string | The prompt name in Langfuse |
| `version` | number | The prompt version number |
| `id` | string | The prompt UUID for direct linking |

You can link prompts using either:
- `id` alone (the UUID uniquely identifies a prompt version)
- `name` + `version` together
- All three fields

When set on a `MODEL_GENERATION` span, the Langfuse exporter automatically links the generation to the corresponding prompt.

## Using Tags

Tags help you categorize and filter traces in the Langfuse dashboard. Add tags when executing agents or workflows:

```typescript
const result = await agent.generate({
  messages: [{ role: "user", content: "Hello" }],
  tracingOptions: {
    tags: ["production", "experiment-v2", "user-request"],
  },
});
```

Tags appear in Langfuse's trace view and can be used to filter and search traces. Common use cases include:

- Environment labels: `"production"`, `"staging"`
- Experiment tracking: `"experiment-v1"`, `"control-group"`
- Priority levels: `"priority-high"`, `"batch-job"`
- User segments: `"beta-user"`, `"enterprise"`
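Rather than hard-coding tags, you can assemble them from request context before calling the agent. This is a hypothetical sketch: the `userTier` and `requestSource` fields are assumptions introduced only for illustration, not part of any Mastra or Langfuse API:

```typescript
// Build a tag list from request context. The context shape here is
// hypothetical and exists only to illustrate the pattern.
function buildTags(ctx: { userTier: string; requestSource: string }): string[] {
  const tags = [process.env.NODE_ENV ?? "development"];
  if (ctx.userTier === "enterprise") tags.push("enterprise");
  if (ctx.requestSource === "batch") tags.push("batch-job");
  return tags;
}
```

The resulting array can then be passed as `tracingOptions.tags` as shown in the example above.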

## Related

- [Tracing Overview](/docs/v1/observability/tracing/overview)
- [Langfuse Documentation](https://langfuse.com/docs)
- [Langfuse Prompt Management](https://langfuse.com/docs/prompt-management)
