---
title: "Tracing | Observability"
description: "Set up Tracing for Mastra applications"
---

# Tracing

Tracing provides specialized monitoring and debugging for the AI-related operations in your application. When enabled, Mastra automatically creates traces for agent runs, LLM generations, tool calls, and workflow steps with AI-specific context and metadata.

Unlike traditional application tracing, Tracing focuses specifically on understanding your AI pipeline — capturing token usage, model parameters, tool execution details, and conversation flows. This makes it easier to debug issues, optimize performance, and understand how your AI systems behave in production.

## How It Works

To start collecting traces:

- **Configure exporters** → send trace data to observability platforms
- **Set sampling strategies** → control which traces are collected
- **Run agents and workflows** → Mastra auto-instruments them with Tracing

## Configuration

### Basic Config

```ts title="src/mastra/index.ts" showLineNumbers copy
import { Mastra } from "@mastra/core";
import { Observability } from "@mastra/observability";
import { LibSQLStore } from "@mastra/libsql";

export const mastra = new Mastra({
  // ... other config
  observability: new Observability({
    default: { enabled: true }, // Enables DefaultExporter and CloudExporter
  }),
  storage: new LibSQLStore({
    id: "mastra-storage",
    url: "file:./mastra.db", // Storage is required for tracing
  }),
});
```

When enabled, the default configuration automatically includes:

- **Service Name**: `"mastra"`
- **Sampling**: `always` (100% of traces are sampled)
- **Exporters**:
  - `DefaultExporter` - Persists traces to your configured storage
  - `CloudExporter` - Sends traces to Mastra Cloud (requires `MASTRA_CLOUD_ACCESS_TOKEN`)
- **Span Output Processors**: `SensitiveDataFilter` - Automatically redacts sensitive fields

### Expanded Basic Config

The `default: { enabled: true }` helper is shorthand for this more verbose equivalent:

```ts title="src/mastra/index.ts" showLineNumbers copy
import { Mastra } from "@mastra/core";
import {
  Observability,
  CloudExporter,
  DefaultExporter,
  SensitiveDataFilter,
} from "@mastra/observability";
import { LibSQLStore } from "@mastra/libsql";

export const mastra = new Mastra({
  // ... other config
  observability: new Observability({
    configs: {
      default: {
        serviceName: "mastra",
        sampling: { type: "always" },
        spanOutputProcessors: [new SensitiveDataFilter()],
        exporters: [new CloudExporter(), new DefaultExporter()],
      },
    },
  }),
  storage: new LibSQLStore({
    id: "mastra-storage",
    url: "file:./mastra.db", // Storage is required for tracing
  }),
});
```

## Exporters

Exporters determine where your trace data is sent and how it's stored. Choosing the right exporters allows you to integrate with your existing observability stack, comply with data residency requirements, and optimize for cost and performance. You can use multiple exporters simultaneously to send the same trace data to different destinations — for example, storing detailed traces locally for debugging while sending sampled data to a cloud provider for production monitoring.

### Internal Exporters

Mastra provides two built-in exporters that work out of the box:

- **[Default](/docs/v1/observability/tracing/exporters/default)** - Persists traces to local storage for viewing in Studio
- **[Cloud](/docs/v1/observability/tracing/exporters/cloud)** - Sends traces to Mastra Cloud for production monitoring and collaboration

### External Exporters

In addition to the internal exporters, Mastra supports integration with popular observability platforms. These exporters allow you to leverage your existing monitoring infrastructure and take advantage of platform-specific features like alerting, dashboards, and correlation with other application metrics.

- **[Arize](/docs/v1/observability/tracing/exporters/arize)** - Exports traces to Arize Phoenix or Arize AX using OpenInference semantic conventions
- **[Braintrust](/docs/v1/observability/tracing/exporters/braintrust)** - Exports traces to Braintrust's eval and observability platform
- **[Langfuse](/docs/v1/observability/tracing/exporters/langfuse)** - Sends traces to the Langfuse open-source LLM engineering platform
- **[LangSmith](/docs/v1/observability/tracing/exporters/langsmith)** - Pushes traces into LangSmith's observability and evaluation toolkit
- **[OpenTelemetry](/docs/v1/observability/tracing/exporters/otel)** - Deliver traces to any OpenTelemetry-compatible observability system
  - Supports: Dash0, MLflow, Laminar, New Relic, SigNoz, Traceloop, Zipkin, and others!

## Bridges

Bridges provide bidirectional integration with external tracing systems. Unlike exporters that send trace data to external platforms, bridges create native spans in external systems and inherit context from them. This enables Mastra operations to participate in existing distributed traces.

- **[OpenTelemetry Bridge](/docs/v1/observability/tracing/bridges/otel)** - Integrate with existing OpenTelemetry infrastructure

### Bridges vs Exporters

| Feature | Bridges | Exporters |
| --- | --- | --- |
| Creates native spans in external systems | Yes | No |
| Inherits context from external systems | Yes | No |
| Sends data to backends | Via external SDK | Directly |
| Use case | Existing distributed tracing | Standalone Mastra tracing |

You can use both together — a bridge for context propagation and exporters to send traces to additional destinations.

## Sampling Strategies

Sampling allows you to control which traces are collected, helping you balance between observability needs and resource costs. In production environments with high traffic, collecting every trace can be expensive and unnecessary. Sampling strategies let you capture a representative subset of traces while ensuring you don't miss critical information about errors or important operations.

Mastra supports four sampling strategies:

### Always Sample

Collects 100% of traces. Best for development, debugging, or low-traffic scenarios where you need complete visibility.

```ts
sampling: {
  type: "always",
}
```

### Never Sample

Disables tracing entirely. Useful for specific environments where tracing adds no value or when you need to temporarily disable tracing without removing configuration.

```ts
sampling: {
  type: "never",
}
```

### Ratio-Based Sampling

Randomly samples a percentage of traces. Ideal for production environments where you want statistical insights without the cost of full tracing. The probability value ranges from 0 (no traces) to 1 (all traces).

```ts
sampling: {
  type: "ratio",
  probability: 0.1, // Sample 10% of traces
}
```

### Custom Sampling

Implements your own sampling logic based on request context, metadata, or business rules. Perfect for complex scenarios like sampling based on user tier, request type, or error conditions.

```ts
sampling: {
  type: "custom",
  sampler: (options) => {
    // Sample premium users at a higher rate
    if (options?.metadata?.userTier === "premium") {
      return Math.random() < 0.5; // 50% sampling
    }

    // Default 1% sampling for everyone else
    return Math.random() < 0.01;
  },
}
```

### Complete Example

```ts title="src/mastra/index.ts" showLineNumbers copy
import { Mastra } from "@mastra/core";
import { Observability, DefaultExporter } from "@mastra/observability";

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      "10_percent": {
        serviceName: "my-service",
        // Sample 10% of traces
        sampling: {
          type: "ratio",
          probability: 0.1,
        },
        exporters: [new DefaultExporter()],
      },
    },
  }),
});
```

## Multi-Config Setup

Complex applications often require different tracing configurations for different scenarios. You might want detailed traces with full sampling during development, sampled traces sent to external providers in production, and specialized configurations for specific features or customer segments. The `configSelector` function enables dynamic configuration selection at runtime, allowing you to route traces based on request context, environment variables, feature flags, or any custom logic.

This approach is particularly valuable when:

- Running A/B tests with different observability requirements
- Providing enhanced debugging for specific customers or support cases
- Gradually rolling out new tracing providers without affecting existing monitoring
- Optimizing costs by using different sampling rates for different request types
- Maintaining separate trace streams for compliance or data residency requirements

:::info

Only one config can be used for a given execution, but a single config can send data to multiple exporters simultaneously.

:::

### Dynamic Configuration Selection

Use `configSelector` to choose the appropriate tracing configuration based on request context:

```ts title="src/mastra/index.ts" showLineNumbers copy
export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      langfuse: {
        serviceName: "langfuse-service",
        exporters: [langfuseExporter],
      },
      braintrust: {
        serviceName: "braintrust-service",
        exporters: [braintrustExporter],
      },
      debug: {
        serviceName: "debug-service",
        sampling: { type: "always" },
        exporters: [new DefaultExporter()],
      },
    },
    configSelector: (context, availableTracers) => {
      // Use debug config for support requests
      if (context.requestContext?.get("supportMode")) {
        return "debug";
      }

      // Route specific customers to different providers
      const customerId = context.requestContext?.get("customerId");
      if (customerId && premiumCustomers.includes(customerId)) {
        return "braintrust";
      }

      // Route specific requests to langfuse
      if (context.requestContext?.get("useExternalTracing")) {
        return "langfuse";
      }

      // Fall back to the debug config for everything else
      return "debug";
    },
  }),
});
```

### Environment-Based Configuration

A common pattern is to select configurations based on deployment environment:

```ts title="src/mastra/index.ts" showLineNumbers copy
export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      development: {
        serviceName: "my-service-dev",
        sampling: { type: "always" },
        exporters: [new DefaultExporter()],
      },
      staging: {
        serviceName: "my-service-staging",
        sampling: { type: "ratio", probability: 0.5 },
        exporters: [langfuseExporter],
      },
      production: {
        serviceName: "my-service-prod",
        sampling: { type: "ratio", probability: 0.01 },
        exporters: [cloudExporter, langfuseExporter],
      },
    },
    configSelector: (context, availableTracers) => {
      const env = process.env.NODE_ENV || "development";
      return env;
    },
  }),
});
```

### Common Configuration Patterns & Troubleshooting

#### Default & Custom Configs

Enabling the default config while also defining custom configs is a misconfiguration: the default config always takes precedence, so the custom configs are never selected. Enable either the default config or custom configs, but not both.

```ts title="src/mastra/index.ts" showLineNumbers copy
export const mastra = new Mastra({
  observability: new Observability({
    default: { enabled: true }, // This will always be used!
    configs: {
      langfuse: {
        serviceName: "my-service",
        exporters: [langfuseExporter], // This won't be reached
      },
    },
  }),
});
```

#### Maintaining Studio and Cloud Access

When you define a custom config, the default exporters are no longer active, so traces stop flowing to Studio and Mastra Cloud. To keep them working alongside external exporters, include the built-in exporters in your custom config:

```ts title="src/mastra/index.ts" showLineNumbers copy
import { DefaultExporter, CloudExporter } from "@mastra/observability";
import { ArizeExporter } from "@mastra/arize";

export const mastra = new Mastra({
  observability: new Observability({
    default: { enabled: false }, // Disable default to use custom
    configs: {
      production: {
        serviceName: "my-service",
        exporters: [
          new ArizeExporter({
            // External exporter
            endpoint: process.env.PHOENIX_ENDPOINT,
            apiKey: process.env.PHOENIX_API_KEY,
          }),
          new DefaultExporter(), // Keep Studio access
          new CloudExporter(), // Keep Cloud access
        ],
      },
    },
  }),
});
```

This configuration sends traces to all three destinations simultaneously:

- **Arize Phoenix/AX** for external observability
- **DefaultExporter** for Studio
- **CloudExporter** for Mastra Cloud dashboard

:::info

Remember: A single trace can be sent to multiple exporters. You don't need separate configs for each exporter unless you want different sampling rates or processors.

:::

## Adding Custom Metadata

Custom metadata allows you to attach additional context to your traces, making it easier to debug issues and understand system behavior in production. Metadata can include business logic details, performance metrics, user context, or any information that helps you understand what happened during execution.

You can add metadata to any span using the tracing context:

```ts showLineNumbers copy
execute: async ({ inputData, tracingContext }) => {
  const startTime = Date.now();
  const response = await fetch(inputData.endpoint);

  // Add custom metadata to the current span
  tracingContext.currentSpan?.update({
    metadata: {
      apiStatusCode: response.status,
      endpoint: inputData.endpoint,
      responseTimeMs: Date.now() - startTime,
      userTier: inputData.userTier,
      region: process.env.AWS_REGION,
    },
  });

  return await response.json();
};
```

Metadata set here will be shown in all configured exporters.

### Automatic Metadata from RequestContext

Instead of manually adding metadata to each span, you can configure Mastra to automatically extract values from RequestContext and attach them as metadata to all spans in a trace. This is useful for consistently tracking user identifiers, environment information, feature flags, or any request-scoped data across your entire trace.

#### Configuration-Level Extraction

Define which RequestContext keys to extract in your tracing configuration. These keys will be automatically included as metadata for all spans created with this configuration:

```ts title="src/mastra/index.ts" showLineNumbers copy
export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      default: {
        serviceName: "my-service",
        requestContextKeys: ["userId", "environment", "tenantId"],
        exporters: [new DefaultExporter()],
      },
    },
  }),
});
```

Now when you execute agents or workflows with a RequestContext, these values are automatically extracted:

```ts showLineNumbers copy
const requestContext = new RequestContext();
requestContext.set("userId", "user-123");
requestContext.set("environment", "production");
requestContext.set("tenantId", "tenant-456");

// All spans in this trace automatically get userId, environment, and tenantId metadata
const result = await agent.generate({
  messages: [{ role: "user", content: "Hello" }],
  requestContext,
});
```

#### Per-Request Additions

You can add trace-specific keys using `tracingOptions.requestContextKeys`. These are merged with the configuration-level keys:

```ts showLineNumbers copy
const requestContext = new RequestContext();
requestContext.set("userId", "user-123");
requestContext.set("environment", "production");
requestContext.set("experimentId", "exp-789");

const result = await agent.generate({
  messages: [{ role: "user", content: "Hello" }],
  requestContext,
  tracingOptions: {
    requestContextKeys: ["experimentId"], // Adds to configured keys
  },
});

// All spans now have: userId, environment, AND experimentId
```

#### Nested Value Extraction

Use dot notation to extract nested values from RequestContext:

```ts showLineNumbers copy
export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      default: {
        requestContextKeys: ["user.id", "session.data.experimentId"],
        exporters: [new DefaultExporter()],
      },
    },
  }),
});

const requestContext = new RequestContext();
requestContext.set("user", { id: "user-456", name: "John Doe" });
requestContext.set("session", { data: { experimentId: "exp-999" } });

// Metadata will include: { user: { id: 'user-456' }, session: { data: { experimentId: 'exp-999' } } }
```
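For a sense of how dot-notation extraction behaves, the lookup can be sketched as a small reducer. This is an illustration of the behavior only, not Mastra's internal implementation; `getByPath` is a hypothetical helper:

```typescript
// Illustrative sketch: resolve a dot-notation key such as
// "session.data.experimentId" against a nested object.
function getByPath(obj: unknown, path: string): unknown {
  return path
    .split(".")
    .reduce<any>((acc, key) => (acc == null ? undefined : acc[key]), obj);
}

const session = { data: { experimentId: "exp-999" } };

getByPath(session, "data.experimentId"); // returns "exp-999"
getByPath(session, "data.missing"); // returns undefined
```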

#### How It Works

1. **TraceState Computation**: At the start of a trace (root span creation), Mastra computes which keys to extract by merging configuration-level and per-request keys
2. **Automatic Extraction**: Root spans (agent runs, workflow executions) automatically extract metadata from RequestContext
3. **Child Span Extraction**: Child spans can also extract metadata if you pass `requestContext` when creating them
4. **Metadata Precedence**: Explicit metadata passed to span options always takes precedence over extracted metadata
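The precedence rule in step 4 behaves like an object spread in which explicit values win on key collisions. A conceptual illustration, not Mastra internals:

```typescript
// Conceptual only: explicit span metadata overrides values extracted
// from RequestContext when the same key appears in both.
const extracted = { userId: "user-123", environment: "production" };
const explicit = { environment: "staging" };

const merged = { ...extracted, ...explicit };
// merged.environment === "staging" (explicit wins)
// merged.userId === "user-123" (extracted value preserved)
```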

### Adding Tags to Traces

Tags are string labels that help you categorize and filter traces. Unlike metadata (which contains structured key-value data), tags are simple strings designed for quick filtering and organization.

Use `tracingOptions.tags` to add tags when executing agents or workflows:

```ts showLineNumbers copy
// With agents
const result = await agent.generate({
  messages: [{ role: "user", content: "Hello" }],
  tracingOptions: {
    tags: ["production", "experiment-v2", "user-request"],
  },
});

// With workflows
const run = await mastra.getWorkflow("myWorkflow").createRun();
const result = await run.start({
  inputData: { data: "process this" },
  tracingOptions: {
    tags: ["batch-processing", "priority-high"],
  },
});
```

#### How Tags Work

- **Root span only**: Tags are applied only to the root span of a trace (the agent run or workflow run span)
- **Widely supported**: Tags are supported by most exporters for filtering and searching traces:
  - **Braintrust** - Native `tags` field
  - **Langfuse** - Native `tags` field on traces
  - **ArizeExporter** - `tag.tags` OpenInference attribute
  - **OtelExporter** - `mastra.tags` span attribute
  - **OtelBridge** - `mastra.tags` span attribute
- **Combinable with metadata**: You can use both `tags` and `metadata` in the same `tracingOptions`

```ts showLineNumbers copy
const result = await agent.generate({
  messages: [{ role: "user", content: "Analyze this" }],
  tracingOptions: {
    tags: ["production", "analytics"],
    metadata: { userId: "user-123", experimentId: "exp-456" },
  },
});
```

#### Common Tag Patterns

- **Environment**: `"production"`, `"staging"`, `"development"`
- **Feature flags**: `"feature-x-enabled"`, `"beta-user"`
- **Request types**: `"user-request"`, `"batch-job"`, `"scheduled-task"`
- **Priority levels**: `"priority-high"`, `"priority-low"`
- **Experiments**: `"experiment-v1"`, `"control-group"`, `"treatment-a"`

#### Child Spans and Metadata Extraction

When creating child spans within tools or workflow steps, you can pass the `requestContext` parameter to enable metadata extraction:

```ts showLineNumbers copy
execute: async ({ tracingContext, requestContext }) => {
  // Create child span WITH requestContext - gets metadata extraction
  const dbSpan = tracingContext.currentSpan?.createChildSpan({
    type: "generic",
    name: "database-query",
    requestContext, // Pass to enable metadata extraction
  });

  const results = await db.query("SELECT * FROM users");
  dbSpan?.end({ output: results });

  // Or create child span WITHOUT requestContext - no metadata extraction
  const cacheSpan = tracingContext.currentSpan?.createChildSpan({
    type: "generic",
    name: "cache-check",
    // No requestContext - won't extract metadata
  });

  return results;
};
```

This gives you fine-grained control over which child spans include RequestContext metadata. Root spans (agent/workflow executions) always extract metadata automatically, while child spans only extract when you explicitly pass `requestContext`.

## Creating Child Spans

Child spans allow you to track fine-grained operations within your workflow steps or tools. They provide visibility into sub-operations like database queries, API calls, file operations, or complex calculations. This hierarchical structure helps you identify performance bottlenecks and understand the exact sequence of operations.

Create child spans inside a tool call or workflow step to track specific operations:

```ts showLineNumbers copy
execute: async ({ inputData, tracingContext }) => {
  // Create another child span for the main database operation
  const querySpan = tracingContext.currentSpan?.createChildSpan({
    type: "generic",
    name: "database-query",
    input: { query: inputData.query },
    metadata: { database: "production" },
  });

  try {
    const results = await db.query(inputData.query);
    querySpan?.end({
      output: results.data,
      metadata: {
        rowsReturned: results.length,
        queryTimeMs: results.executionTime,
        cacheHit: results.fromCache,
      },
    });
    return results;
  } catch (error) {
    querySpan?.error({
      error,
      metadata: { retryable: isRetryableError(error) },
    });
    throw error;
  }
};
```

Child spans automatically inherit the trace context from their parent, maintaining the relationship hierarchy in your observability platform.

## Span Processors

Span processors allow you to transform, filter, or enrich trace data before it's exported. They act as a pipeline between span creation and export, enabling you to modify spans for security, compliance, or debugging purposes. Mastra includes built-in processors and supports custom implementations.

### Built-in Processors

- [Sensitive Data Filter](/docs/v1/observability/tracing/processors/sensitive-data-filter) redacts sensitive information. It is enabled in the default observability config.

### Creating Custom Processors

You can create custom span processors by implementing the `SpanOutputProcessor` interface. Here's a simple example that converts all input text in spans to lowercase:

```ts title="src/processors/lowercase-input-processor.ts" showLineNumbers copy
import { Mastra } from "@mastra/core";
import {
  Observability,
  DefaultExporter,
  SensitiveDataFilter,
} from "@mastra/observability";
import type { SpanOutputProcessor, AnySpan } from "@mastra/observability";

export class LowercaseInputProcessor implements SpanOutputProcessor {
  name = "lowercase-processor";

  process(span: AnySpan): AnySpan {
    // Only lowercase string inputs; leave structured inputs untouched
    if (typeof span.input === "string") {
      span.input = span.input.toLowerCase();
    }
    return span;
  }

  async shutdown(): Promise<void> {
    // Cleanup if needed
  }
}

// Use the custom processor
export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      development: {
        spanOutputProcessors: [new LowercaseInputProcessor(), new SensitiveDataFilter()],
        exporters: [new DefaultExporter()],
      },
    },
  }),
});
```

Processors are executed in the order they're defined, allowing you to chain multiple transformations. Common use cases for custom processors include:

- Adding environment-specific metadata
- Filtering out spans based on criteria
- Normalizing data formats
- Sampling high-volume traces
- Enriching spans with business context
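As a sketch of the last use case, an enrichment processor can stamp every span with deployment details before export. The span shape below is simplified for illustration; a real processor would implement the `SpanOutputProcessor` interface shown above:

```typescript
// Sketch only: SpanLike is a simplified stand-in for Mastra's span type.
interface SpanLike {
  metadata?: Record<string, unknown>;
}

class EnvironmentMetadataProcessor {
  name = "environment-metadata";

  process(span: SpanLike): SpanLike {
    // Merge deployment details into whatever metadata the span already has
    span.metadata = {
      ...span.metadata,
      deploymentEnv: process.env.NODE_ENV ?? "development",
    };
    return span;
  }
}

const processor = new EnvironmentMetadataProcessor();
const span = processor.process({ metadata: { userId: "user-123" } });
// span.metadata now contains both userId and deploymentEnv
```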

## Retrieving Trace IDs

When you execute agents or workflows with tracing enabled, the response includes a `traceId` that you can use to look up the full trace in your observability platform. This is useful for debugging, customer support, or correlating traces with other events in your system.

### Agent Trace IDs

Both `generate` and `stream` methods return the trace ID in their response:

```ts showLineNumbers copy
// Using generate
const result = await agent.generate({
  messages: [{ role: "user", content: "Hello" }],
});

console.log("Trace ID:", result.traceId);

// Using stream
const streamResult = await agent.stream({
  messages: [{ role: "user", content: "Tell me a story" }],
});

console.log("Trace ID:", streamResult.traceId);
```

### Workflow Trace IDs

Workflow executions also return trace IDs:

```ts showLineNumbers copy
// Create a workflow run
const run = await mastra.getWorkflow("myWorkflow").createRun();

// Start the workflow
const result = await run.start({
  inputData: { data: "process this" },
});

console.log("Trace ID:", result.traceId);

// Or stream the workflow
const { stream, getWorkflowState } = run.stream({
  inputData: { data: "process this" },
});

// Get the final state which includes the trace ID
const finalState = await getWorkflowState();
console.log("Trace ID:", finalState.traceId);
```

### Using Trace IDs

Once you have a trace ID, you can:

1. **Look up traces in Studio**: Navigate to the traces view and search by ID
2. **Query traces in external platforms**: Use the ID in Langfuse, Braintrust, MLflow, or your observability platform
3. **Correlate with logs**: Include the trace ID in your application logs for cross-referencing
4. **Share for debugging**: Provide trace IDs to support teams or developers for investigation

The trace ID is only available when tracing is enabled. If tracing is disabled or sampling excludes the request, `traceId` will be `undefined`.
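For log correlation (item 3 above), a small helper can attach the trace ID to structured log entries only when one exists. `withTraceId` is a hypothetical helper, not part of Mastra:

```typescript
// Hypothetical helper: traceId is undefined when tracing is disabled
// or when sampling excluded the request, so add it conditionally.
function withTraceId(
  entry: Record<string, unknown>,
  traceId?: string,
): Record<string, unknown> {
  return traceId ? { ...entry, traceId } : entry;
}

withTraceId({ event: "agent.completed" }, "abc123");
// returns { event: "agent.completed", traceId: "abc123" }
withTraceId({ event: "agent.completed" }, undefined);
// returns { event: "agent.completed" }
```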

## Integrating with External Tracing Systems

When running Mastra agents or workflows within applications that have existing distributed tracing (OpenTelemetry, Datadog, etc.), you can connect Mastra traces to your parent trace context. This creates a unified view of your entire request flow, making it easier to understand how Mastra operations fit into the broader system.

### Passing External Trace IDs

Use the `tracingOptions` parameter to specify the trace context from your parent system:

```ts showLineNumbers copy
// Get trace context from your existing tracing system
const parentTraceId = getCurrentTraceId(); // Your tracing system
const parentSpanId = getCurrentSpanId(); // Your tracing system

// Execute Mastra operations as part of the parent trace
const result = await agent.generate("Analyze this data", {
  tracingOptions: {
    traceId: parentTraceId,
    parentSpanId: parentSpanId,
  },
});

// The Mastra trace will now appear as a child in your distributed trace
```

### OpenTelemetry Integration

Integration with OpenTelemetry allows Mastra traces to appear seamlessly in your existing observability platform:

```ts showLineNumbers copy
import { trace } from "@opentelemetry/api";

// Get the current OpenTelemetry span
const currentSpan = trace.getActiveSpan();
const spanContext = currentSpan?.spanContext();

if (spanContext) {
  const result = await agent.generate(userMessage, {
    tracingOptions: {
      traceId: spanContext.traceId,
      parentSpanId: spanContext.spanId,
    },
  });
}
```

### Workflow Integration

Workflows support the same pattern for trace propagation:

```ts showLineNumbers copy
const workflow = mastra.getWorkflow("data-pipeline");
const run = await workflow.createRun();

const result = await run.start({
  inputData: { data: "..." },
  tracingOptions: {
    traceId: externalTraceId,
    parentSpanId: externalSpanId,
  },
});
```

### ID Format Requirements

Mastra validates trace and span IDs to ensure compatibility:

- **Trace IDs**: 1-32 hexadecimal characters (OpenTelemetry uses 32)
- **Span IDs**: 1-16 hexadecimal characters (OpenTelemetry uses 16)

Invalid IDs are handled gracefully — Mastra logs an error and continues:

- Invalid trace ID → generates a new trace ID
- Invalid parent span ID → ignores the parent relationship

This ensures tracing never crashes your application, even with malformed input.
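If you want to validate IDs before handing them to Mastra, the format rules above reduce to two simple patterns. This check is a convenience sketch, not a Mastra API:

```typescript
// Sketch: pre-validate external IDs against the accepted formats.
const TRACE_ID_PATTERN = /^[0-9a-f]{1,32}$/i;
const SPAN_ID_PATTERN = /^[0-9a-f]{1,16}$/i;

function isValidTraceId(id: string): boolean {
  return TRACE_ID_PATTERN.test(id);
}

function isValidSpanId(id: string): boolean {
  return SPAN_ID_PATTERN.test(id);
}

isValidTraceId("4bf92f3577b34da6a3ce929d0e0e4736"); // true (32 hex chars)
isValidSpanId("00f067aa0ba902b7"); // true (16 hex chars)
isValidTraceId("not-hex!"); // false
```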

### Example: Express Middleware

Here's a complete example showing trace propagation in an Express application:

```ts showLineNumbers copy
import { trace } from "@opentelemetry/api";
import express from "express";

const app = express();

app.post("/api/analyze", async (req, res) => {
  // Get current OpenTelemetry context
  const currentSpan = trace.getActiveSpan();
  const spanContext = currentSpan?.spanContext();

  const result = await agent.generate(req.body.message, {
    tracingOptions: spanContext
      ? {
          traceId: spanContext.traceId,
          parentSpanId: spanContext.spanId,
        }
      : undefined,
  });

  res.json(result);
});
```

This creates a single distributed trace that includes both the HTTP request handling and the Mastra agent execution, viewable in your observability platform of choice.

## What Gets Traced

Mastra automatically creates spans for:

### Agent Operations

- **Agent runs** - Complete execution with instructions and tools
- **LLM calls** - Model interactions with tokens and parameters
- **Tool executions** - Function calls with inputs and outputs
- **Memory operations** - Thread and semantic recall

### Workflow Operations

- **Workflow runs** - Full execution from start to finish
- **Individual steps** - Step processing with inputs/outputs
- **Control flow** - Conditionals, loops, parallel execution
- **Wait operations** - Delays and event waiting

## See Also

### Reference Documentation

- [Configuration API](/reference/v1/observability/tracing/configuration) - ObservabilityConfig details
- [Tracing Classes](/reference/v1/observability/tracing/instances) - Core classes and methods
- [Span Interfaces](/reference/v1/observability/tracing/spans) - Span types and lifecycle
- [Type Definitions](/reference/v1/observability/tracing/interfaces) - Complete interface reference

### Exporters

- [DefaultExporter](/reference/v1/observability/tracing/exporters/default-exporter) - Storage persistence
- [CloudExporter](/reference/v1/observability/tracing/exporters/cloud-exporter) - Mastra Cloud integration
- [ConsoleExporter](/reference/v1/observability/tracing/exporters/console-exporter) - Debug output
- [Arize](/reference/v1/observability/tracing/exporters/arize) - Arize Phoenix and Arize AX integration
- [Braintrust](/reference/v1/observability/tracing/exporters/braintrust) - Braintrust integration
- [Langfuse](/reference/v1/observability/tracing/exporters/langfuse) - Langfuse integration
- [MLflow](/reference/v1/observability/tracing/exporters/otel#mlflow) - MLflow OTLP endpoint setup
- [OpenTelemetry](/reference/v1/observability/tracing/exporters/otel) - OTEL-compatible platforms

### Bridges

- [OpenTelemetry Bridge](/reference/v1/observability/tracing/bridges/otel) - OTEL context integration

### Processors

- [Sensitive Data Filter](/docs/v1/observability/tracing/processors/sensitive-data-filter) - Data redaction
