---
description: Learn how to create task span metrics for evaluating the detailed execution information of your LLM tasks
toc_max_heading_level: 4
---

# Task Span Metrics

Task span metrics are a powerful type of evaluation metric in Opik that can analyze the detailed execution information of your LLM tasks. Unlike traditional metrics that only evaluate input-output pairs, task span metrics have access to the complete execution context, including intermediate steps, metadata, timing information, and hierarchical structure.

**Important:** Only spans created with the `@track` decorator and native Opik integrations are available for task span metrics.

## What are Task Span Metrics?

Task span metrics are evaluation metrics whose `score` method includes a `task_span` parameter; the Opik evaluation engine detects this parameter automatically and supplies the span data.

When a metric has a `task_span` parameter, it receives a [`SpanModel`](https://www.comet.com/docs/opik/python-sdk-reference/message_processing_emulation/SpanModel.html) object containing the complete execution context of your task.


The `task_span` parameter provides:

- **Execution Details**: Input, output, start/end times, and execution metadata
- **Nested Operations**: Hierarchical structure of sub-operations and function calls
- **Performance Data**: Timing, cost, usage statistics, and resource consumption
- **Error Information**: Detailed error context and diagnostic information
- **Provider Metadata**: Model information, API provider details, and configuration
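To make the hierarchy concrete, here is a simplified, hypothetical stand-in for `SpanModel` (a plain dataclass, not the real class) showing how nested spans form a tree that a metric can traverse recursively via `span.spans`:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional

@dataclass
class FakeSpan:
    # Hypothetical stand-in for SpanModel, for illustration only
    name: str
    type: str = "general"  # e.g. "general", "llm", "tool"
    input: Optional[Dict[str, Any]] = None
    output: Optional[Dict[str, Any]] = None
    error_info: Optional[Dict[str, Any]] = None
    spans: List["FakeSpan"] = field(default_factory=list)  # nested sub-operations

def count_spans(span: FakeSpan) -> int:
    """Count every span in the tree, the same walk task span metrics perform."""
    return 1 + sum(count_spans(child) for child in span.spans)

root = FakeSpan(
    name="answer_question",
    spans=[
        FakeSpan(name="retrieve_context", type="tool"),
        FakeSpan(name="generate_answer", type="llm"),
    ],
)
print(count_spans(root))  # 3
```

The recursive helpers in the examples below follow exactly this pattern against the real `SpanModel`.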

## When to Use Task Span Metrics

Task span metrics are particularly valuable for:

- **Performance Analysis**: Evaluating execution speed, resource usage, and efficiency
- **Quality Assessment**: Analyzing the quality of intermediate steps and decision-making
- **Cost Optimization**: Tracking and optimizing API costs and resource consumption
- **Agent Evaluation**: Assessing agent trajectories and decision-making patterns
- **Debugging**: Understanding execution flows and identifying performance bottlenecks
- **Compliance**: Ensuring tasks execute within expected parameters and constraints

## Creating Task Span Metrics

To create a task span metric, define a class that inherits from `BaseMetric` and implements a `score` method that accepts a `task_span` parameter. You can still declare other parameters, as in regular metrics; Opik performs a separate check for the presence of a `task_span` argument:

```python
from typing import Any, Dict, Optional
from opik.evaluation.metrics import BaseMetric, score_result
from opik.message_processing.emulation.models import SpanModel

class TaskExecutionQualityMetric(BaseMetric):
    def __init__(
        self,
        name: str = "task_execution_quality",
        track: bool = True,
        project_name: Optional[str] = None,
    ):
        super().__init__(name=name, track=track, project_name=project_name)

    def _check_execution_success_recursively(self, span: SpanModel) -> Dict[str, Any]:
        """Recursively check execution success across the span tree."""
        execution_stats = {
            'has_errors': False,
            'error_count': 0,
            'failed_spans': [],
            'total_spans_checked': 0
        }

        # Check current span for errors
        execution_stats['total_spans_checked'] += 1
        if span.error_info:
            execution_stats['has_errors'] = True
            execution_stats['error_count'] += 1
            execution_stats['failed_spans'].append(span.name)

        # Recursively check nested spans
        for nested_span in span.spans:
            nested_stats = self._check_execution_success_recursively(nested_span)
            execution_stats['has_errors'] = execution_stats['has_errors'] or nested_stats['has_errors']
            execution_stats['error_count'] += nested_stats['error_count']
            execution_stats['failed_spans'].extend(nested_stats['failed_spans'])
            execution_stats['total_spans_checked'] += nested_stats['total_spans_checked']

        return execution_stats

    def score(self, task_span: SpanModel, **ignored_kwargs: Any) -> score_result.ScoreResult:
        # Check execution success across the entire span tree.
        # Only for illustrative purposes.
        # Please adjust for your specific use case!
        execution_stats = self._check_execution_success_recursively(task_span)
        execution_successful = not execution_stats['has_errors']

        # Check output availability
        has_output = task_span.output is not None

        # Calculate execution time
        execution_time = None
        if task_span.start_time and task_span.end_time:
            execution_time = (task_span.end_time - task_span.start_time).total_seconds()

        # Custom scoring logic based on execution characteristics
        if not execution_successful:
            error_count = execution_stats['error_count']
            failed_spans_count = len(execution_stats['failed_spans'])
            total_spans = execution_stats['total_spans_checked']

            if error_count == 1 and total_spans > 5:
                score_value = 0.4
                reason = f"Minor execution issues: 1 error in {total_spans} spans ({execution_stats['failed_spans'][0]})"
            elif failed_spans_count <= 2:
                score_value = 0.2
                reason = f"Limited execution failures: {failed_spans_count} failed spans out of {total_spans}"
            else:
                score_value = 0.0
                reason = f"Major execution failures: {failed_spans_count} failed spans across {total_spans} operations"
        elif not has_output:
            score_value = 0.3
            reason = f"Task completed without errors across {execution_stats['total_spans_checked']} spans but produced no output"
        elif execution_time and execution_time > 30.0:
            score_value = 0.6
            reason = f"Task executed successfully across {execution_stats['total_spans_checked']} spans but took too long: {execution_time:.2f}s"
        else:
            score_value = 1.0
            span_count = execution_stats['total_spans_checked']
            reason = f"Task executed successfully across all {span_count} spans with good performance"

        return score_result.ScoreResult(
            value=score_value,
            name=self.name,
            reason=reason
        )
```

## Accessing Span Properties

The `SpanModel` object provides rich information about task execution:

### Basic Properties

```python
class BasicSpanAnalysisMetric(BaseMetric):
    def score(self, task_span: SpanModel, **ignored_kwargs: Any) -> score_result.ScoreResult:
        # Basic span information
        span_id = task_span.id
        span_name = task_span.name
        span_type = task_span.type  # "general", "llm", "tool", etc.

        # Input/Output analysis
        input_data = task_span.input
        output_data = task_span.output

        # Metadata and tags
        metadata = task_span.metadata
        tags = task_span.tags

        # Your scoring logic here
        return score_result.ScoreResult(value=1.0, name=self.name)
```

### Performance Metrics

```python
class PerformanceMetric(BaseMetric):
    def _find_model_and_provider_recursively(self, span: SpanModel, model_found: Optional[str] = None, provider_found: Optional[str] = None):
        """Recursively search through span tree to find model and provider information."""
        # Check current span
        if not model_found and span.model:
            model_found = span.model
        if not provider_found and span.provider:
            provider_found = span.provider

        # If both found, return early
        if model_found and provider_found:
            return model_found, provider_found

        # Recursively search nested spans
        for nested_span in span.spans:
            model_found, provider_found = self._find_model_and_provider_recursively(
                nested_span, model_found, provider_found
            )
            # If both found, return early
            if model_found and provider_found:
                return model_found, provider_found

        return model_found, provider_found

    def _calculate_usage_recursively(self, span: SpanModel, usage_summary: Optional[dict] = None):
        """Recursively calculate usage statistics from the entire span tree."""
        if usage_summary is None:
            usage_summary = {
                'total_prompt_tokens': 0,
                'total_completion_tokens': 0,
                'total_tokens': 0,
                'total_spans_count': 0,
                'llm_spans_count': 0,
                'tool_spans_count': 0
            }

        # Count current span
        usage_summary['total_spans_count'] += 1

        # Count span types
        if span.type == 'llm':
            usage_summary['llm_spans_count'] += 1
        elif span.type == 'tool':
            usage_summary['tool_spans_count'] += 1

        # Add usage from current span
        if span.usage and isinstance(span.usage, dict):
            usage_summary['total_prompt_tokens'] += span.usage.get('prompt_tokens', 0)
            usage_summary['total_completion_tokens'] += span.usage.get('completion_tokens', 0)
            usage_summary['total_tokens'] += span.usage.get('total_tokens', 0)

        # Recursively process nested spans
        for nested_span in span.spans:
            self._calculate_usage_recursively(nested_span, usage_summary)

        return usage_summary

    def score(self, task_span: SpanModel, **ignored_kwargs: Any) -> score_result.ScoreResult:
        # Timing analysis
        # Only for illustrative purposes.
        # Please adjust for your specific use case!
        start_time = task_span.start_time
        end_time = task_span.end_time
        duration = (end_time - start_time).total_seconds() if start_time and end_time else None

        # Get model and provider from anywhere in the span tree
        model_used, provider = self._find_model_and_provider_recursively(
            task_span, task_span.model, task_span.provider
        )

        # Calculate comprehensive usage statistics from entire span tree
        usage_info = self._calculate_usage_recursively(task_span)

        # Performance-based scoring with enhanced analysis
        if duration and duration < 2.0:
            score_value = 1.0
            reason = f"Excellent performance: {duration:.2f}s"
            if model_used:
                reason += f" using {model_used}"
            if provider:
                reason += f" ({provider})"
            if usage_info['total_tokens'] > 0:
                reason += f", {usage_info['total_tokens']} total tokens across {usage_info['llm_spans_count']} LLM calls"
        elif duration and duration < 10.0:
            score_value = 0.7
            reason = f"Good performance: {duration:.2f}s"
            if usage_info['total_spans_count'] > 1:
                reason += f" with {usage_info['total_spans_count']} operations"
        else:
            score_value = 0.5
            reason = "Performance could be improved"
            if duration:
                reason += f" (took {duration:.2f}s)"
            if usage_info['llm_spans_count'] > 5:
                reason += f" - consider optimizing {usage_info['llm_spans_count']} LLM calls"

        return score_result.ScoreResult(
            value=score_value,
            name=self.name,
            reason=reason
        )
```

## Error Analysis

Task span metrics can analyze execution failures and errors:

```python
class ErrorAnalysisMetric(BaseMetric):
    def _collect_errors_recursively(self, span: SpanModel, errors: Optional[list] = None):
        """Recursively collect all errors from the span tree."""
        if errors is None:
            errors = []

        # Check current span for errors
        if span.error_info:
            error_entry = {
                'span_id': span.id,
                'span_name': span.name,
                'span_type': span.type,
                'error_info': span.error_info
            }
            errors.append(error_entry)

        # Recursively check nested spans
        for nested_span in span.spans:
            self._collect_errors_recursively(nested_span, errors)

        return errors

    def score(self, task_span: SpanModel, **ignored_kwargs: Any) -> score_result.ScoreResult:
        # Collect all errors from the entire span tree
        all_errors = self._collect_errors_recursively(task_span)

        if not all_errors:
            return score_result.ScoreResult(
                value=1.0,
                name=self.name,
                reason="No errors detected in any span"
            )

        reason = f"Found {len(all_errors)} error(s) in the span tree"
        return score_result.ScoreResult(
            value=0.0,
            name=self.name,
            reason=reason
        )
```

## Using Task Span Metrics in Evaluation

Task span metrics work seamlessly with regular evaluation metrics. The Opik evaluation engine automatically detects task span metrics by checking if the `score` method includes a `task_span` parameter, and handles them appropriately:

```python
from opik import evaluate
from opik.evaluation.metrics import Equals

# Mix regular and task span metrics
equals_metric = Equals()
quality_metric = TaskExecutionQualityMetric()
performance_metric = PerformanceMetric()

evaluation = evaluate(
    dataset=dataset,
    task=evaluation_task,
    scoring_metrics=[
        equals_metric,      # Regular metric (input/output)
        quality_metric,     # Task span metric (execution analysis)
        performance_metric, # Task span metric (performance analysis)
    ],
    experiment_name="Comprehensive Task Analysis"
)
```

### Quickly testing task span metrics locally

You can validate a task span metric without running a full evaluation by recording spans locally. The SDK provides a context manager that captures all spans and traces created inside its block and exposes them in memory.

```python
import opik
from opik import track
from opik.evaluation.metrics import score_result
from opik.message_processing.emulation.models import SpanModel

# Example metric under test
class ExecutionTimeMetric:
    def __init__(self, name: str = "execution_time_metric"):
        self.name = name

    def score(self, task_span: SpanModel, **_):
        if task_span.start_time and task_span.end_time:
            duration = (task_span.end_time - task_span.start_time).total_seconds()
            value = 1.0 if duration < 2.0 else 0.5
            reason = f"Duration: {duration:.2f}s"
        else:
            value = 0.0
            reason = "Missing timing information"
        return score_result.ScoreResult(value=value, name=self.name, reason=reason)

@track
def my_tracked_function(question: str) -> str:
    # Your LLM/tool code here that produces spans
    return f"Answer to: {question}"

with opik.record_traces_locally() as storage:
    # Execute tracked code that creates spans
    _ = my_tracked_function("What is the capital of France?")

    # Access the in-memory span tree (flush is automatic before reading)
    span_trees = storage.span_trees
    assert len(span_trees) > 0, "No spans recorded"
    root_span = span_trees[0]

    # Evaluate your task span metric directly
    metric = ExecutionTimeMetric()
    result = metric.score(task_span=root_span)
    print(result)
```

Note:
- Local recording cannot be nested. If a recording block is already active, entering another will raise an error.
- See the Python SDK reference for more details: [Local Recording Context Manager](https://www.comet.com/docs/opik/python-sdk-reference/message_processing_emulation/local_recording.html)

## Best Practices

### 1. Handle Missing Data Gracefully

Always check for `None` values in optional span attributes:

```python
def score(self, task_span: SpanModel, **ignored_kwargs: Any) -> score_result.ScoreResult:
    # Safe access to optional fields
    duration = None
    if task_span.start_time and task_span.end_time:
        duration = (task_span.end_time - task_span.start_time).total_seconds()

    cost = task_span.total_cost if task_span.total_cost else 0.0
    metadata = task_span.metadata or {}
```

### 2. Focus on Execution Patterns

Use task span metrics to evaluate **how** your application executes, not just the final output:

```python
# Good: Analyzing execution patterns
def _analyze_caching_efficiency_recursively(self, span: SpanModel, cache_stats: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
    """Recursively analyze caching efficiency across the span tree."""
    if cache_stats is None:
        cache_stats = {
            'total_llm_calls': 0,
            'llm_cache_hits': 0,
            'llm_cache_misses': 0,
            'other_cache_hits': 0,
            'cached_llm_spans': [],
            'cached_other_spans': [],
            'llm_spans': []
        }

    # Track LLM calls and their caching status
    if span.type == "llm":
        cache_stats['total_llm_calls'] += 1
        cache_stats['llm_spans'].append(span.name)

        # Check for caching indicators in metadata
        metadata = span.metadata or {}
        tags = span.tags or []

        is_cached = (
            any(cache_key in metadata for cache_key in ["cache_hit", "cached", "from_cache"]) or
            any(cache_tag in tags for cache_tag in ["cache_hit", "cached"]) or
            metadata.get("cache_hit", False) or
            metadata.get("cached", False)
        )

        if is_cached:
            cache_stats['llm_cache_hits'] += 1
            cache_stats['cached_llm_spans'].append(span.name)
        else:
            cache_stats['llm_cache_misses'] += 1

    # Track non-LLM spans for caching indicators (e.g., database queries, API calls)
    else:
        metadata = span.metadata or {}
        tags = span.tags or []

        if (any(cache_key in metadata for cache_key in ["cache_hit", "cached", "from_cache"]) or
            any(cache_tag in tags for cache_tag in ["cache_hit", "cached"])):
            cache_stats['other_cache_hits'] += 1
            cache_stats['cached_other_spans'].append(span.name)

    # Recursively check nested spans
    for nested_span in span.spans:
        self._analyze_caching_efficiency_recursively(nested_span, cache_stats)

    return cache_stats

def score(self, task_span: SpanModel, **ignored_kwargs: Any) -> score_result.ScoreResult:
    # Analyze caching efficiency across an entire span tree.
    # Only for illustrative purposes.
    # Please adjust for your specific use case!
    cache_stats = self._analyze_caching_efficiency_recursively(task_span)

    llm_cache_hits = cache_stats['llm_cache_hits']
    total_llm_calls = cache_stats['total_llm_calls']
    other_cache_hits = cache_stats['other_cache_hits']

    # Calculate a cache hit ratio specifically for LLM calls
    llm_cache_hit_ratio = llm_cache_hits / total_llm_calls if total_llm_calls > 0 else 0.0

    # Score based on LLM caching efficiency and total call volume
    if total_llm_calls == 0:
        # Consider other cache hits for non-LLM operations
        if other_cache_hits > 0:
            return score_result.ScoreResult(
                value=0.7,
                name=self.name,
                reason=f"No LLM calls, but {other_cache_hits} other operations cached"
            )
        else:
            return score_result.ScoreResult(
                value=0.5,
                name=self.name,
                reason="No LLM calls detected"
            )
    elif llm_cache_hit_ratio >= 0.8:
        reason = f"Excellent LLM caching: {llm_cache_hits}/{total_llm_calls} LLM calls cached ({llm_cache_hit_ratio:.1%})"
        if other_cache_hits > 0:
            reason += f" + {other_cache_hits} other cached operations"
        return score_result.ScoreResult(
            value=1.0,
            name=self.name,
            reason=reason
        )
    elif llm_cache_hit_ratio >= 0.5:
        reason = f"Good LLM caching: {llm_cache_hits}/{total_llm_calls} LLM calls cached ({llm_cache_hit_ratio:.1%})"
        if other_cache_hits > 0:
            reason += f" + {other_cache_hits} other cached operations"
        return score_result.ScoreResult(
            value=0.9,
            name=self.name,
            reason=reason
        )
    elif llm_cache_hit_ratio > 0:
        reason = f"Some LLM caching: {llm_cache_hits}/{total_llm_calls} LLM calls cached ({llm_cache_hit_ratio:.1%})"
        if other_cache_hits > 0:
            reason += f" + {other_cache_hits} other cached operations"
        return score_result.ScoreResult(
            value=0.7,
            name=self.name,
            reason=reason
        )
    elif total_llm_calls > 5:
        return score_result.ScoreResult(
            value=0.2,
            name=self.name,
            reason=f"No caching with {total_llm_calls} LLM calls - high cost/latency risk"
        )
    elif total_llm_calls > 3:
        return score_result.ScoreResult(
            value=0.4,
            name=self.name,
            reason=f"No caching with {total_llm_calls} LLM calls - consider adding cache"
        )
    else:
        return score_result.ScoreResult(
            value=0.8,
            name=self.name,
            reason=f"Efficient execution: {total_llm_calls} LLM calls (caching not critical)"
        )
```

### 3. Combine with Regular Metrics

Task span metrics provide the most value when combined with traditional output-based metrics:

```python
# Comprehensive evaluation approach
scoring_metrics = [
    # Output quality metrics
    Equals(),
    Hallucination(),

    # Execution analysis metrics
    TaskExecutionQualityMetric(),
    PerformanceMetric(),

    # Cost optimization metrics
    CostEfficiencyMetric(),
]
```

### 4. Security Considerations

Be mindful of sensitive data in span information:

```python
def score(self, task_span: SpanModel, **ignored_kwargs: Any) -> score_result.ScoreResult:
    # Avoid logging sensitive input data
    input_size = len(str(task_span.input)) if task_span.input else 0

    # Use aggregated information instead of raw data
    return score_result.ScoreResult(
        value=1.0 if input_size < 1000 else 0.5,
        name=self.name,
        reason=f"Input size: {input_size} characters"
    )
```

## Complete Example: Agent Trajectory Analysis Metric

Here's a comprehensive example that analyzes agent decision-making:

```python
class AgentTrajectoryMetric(BaseMetric):
    def __init__(self, max_steps: int = 10, name: str = "agent_trajectory_quality"):
        super().__init__(name=name)
        self.max_steps = max_steps

    def _analyze_trajectory_recursively(self, span: SpanModel, trajectory_stats: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
        """Recursively analyze agent trajectory across the span tree."""
        if trajectory_stats is None:
            trajectory_stats = {
                'total_steps': 0,
                'tool_uses': 0,
                'llm_reasoning': 0,
                'other_steps': 0,
                'tool_spans': [],
                'llm_spans': [],
                'step_names': [],
                'max_depth': 0,
                'current_depth': 0
            }

        # Count current span as a step
        trajectory_stats['total_steps'] += 1
        trajectory_stats['step_names'].append(span.name)
        trajectory_stats['max_depth'] = max(trajectory_stats['max_depth'], trajectory_stats['current_depth'])

        # Categorize span types for agent decision analysis
        if span.type == "tool":
            trajectory_stats['tool_uses'] += 1
            trajectory_stats['tool_spans'].append(span.name)
        elif span.type == "llm":
            trajectory_stats['llm_reasoning'] += 1
            trajectory_stats['llm_spans'].append(span.name)
        else:
            trajectory_stats['other_steps'] += 1

        # Recursively analyze nested spans with depth tracking
        for nested_span in span.spans:
            trajectory_stats['current_depth'] += 1
            self._analyze_trajectory_recursively(nested_span, trajectory_stats)
            trajectory_stats['current_depth'] -= 1

        return trajectory_stats

    def score(self, task_span: SpanModel, **ignored_kwargs: Any) -> score_result.ScoreResult:
        # Analyze agent trajectory across an entire span tree
        trajectory_stats = self._analyze_trajectory_recursively(task_span)

        total_steps = trajectory_stats['total_steps']
        tool_uses = trajectory_stats['tool_uses']
        llm_reasoning = trajectory_stats['llm_reasoning']
        max_depth = trajectory_stats['max_depth']

        # Check for an efficient path
        if total_steps == 0:
            return score_result.ScoreResult(
                value=0.0, name=self.name,
                reason="No decision steps found"
            )

        # Analyze trajectory quality with enhanced metrics.
        # Only for illustrative purposes.
        # Please adjust for your specific use case!
        if tool_uses == 0 and llm_reasoning == 0:
            score = 0.1
            reason = f"Poor trajectory: {total_steps} steps with no tools or reasoning"
        elif tool_uses == 0:
            score = 0.3
            reason = f"Agent used {llm_reasoning} reasoning steps but no tools across {total_steps} operations"
        elif llm_reasoning == 0:
            score = 0.4
            reason = f"Agent used {tool_uses} tools but no reasoning across {total_steps} operations"
        elif total_steps > self.max_steps:
            # Penalize excessive steps but consider tool/reasoning balance
            efficiency_penalty = max(0.1, 1.0 - (total_steps - self.max_steps) * 0.05)
            balance_ratio = min(tool_uses, llm_reasoning) / max(tool_uses, llm_reasoning)
            score = min(0.6, efficiency_penalty * balance_ratio)
            reason = f"Excessive steps: {total_steps} > {self.max_steps} (depth: {max_depth}, tools: {tool_uses}, reasoning: {llm_reasoning})"
        else:
            # Calculate a comprehensive score based on multiple factors.
            # Only for illustrative purposes.
            # Please adjust for your specific use case!
            #
            # 1. Step efficiency (fewer steps = better)
            step_efficiency = min(1.0, self.max_steps / total_steps)

            # 2. Tool-reasoning balance (closer to 1:1 ratio = better)
            balance_ratio = min(tool_uses, llm_reasoning) / max(tool_uses, llm_reasoning) if max(tool_uses, llm_reasoning) > 0 else 0

            # 3. Depth complexity (moderate depth suggests good decomposition)
            depth_score = 1.0 if max_depth <= 3 else max(0.7, 1.0 - (max_depth - 3) * 0.1)

            # 4. Decision density (good ratio of reasoning to total steps)
            decision_density = llm_reasoning / total_steps if total_steps > 0 else 0
            density_score = 1.0 if decision_density >= 0.3 else decision_density / 0.3

            # Combine all factors
            score = (step_efficiency * 0.3 + balance_ratio * 0.3 + depth_score * 0.2 + density_score * 0.2)

            if score >= 0.8:
                reason = f"Excellent trajectory: {total_steps} steps (depth: {max_depth}), {tool_uses} tools, {llm_reasoning} reasoning - well balanced"
            elif score >= 0.6:
                reason = f"Good trajectory: {total_steps} steps (depth: {max_depth}), {tool_uses} tools, {llm_reasoning} reasoning"
            else:
                reason = f"Acceptable trajectory: {total_steps} steps (depth: {max_depth}), {tool_uses} tools, {llm_reasoning} reasoning - could be optimized"

        return score_result.ScoreResult(
            value=score,
            name=self.name,
            reason=reason
        )
```
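To sanity-check the weighted combination in the final branch above, here is the same formula applied to illustrative factor values (the numbers are made up for this example; the weights sum to 1.0, so the score stays in the 0-1 range):

```python
# Illustrative factor values (arbitrary, for demonstration only)
step_efficiency = 0.8   # max_steps / total_steps, capped at 1.0
balance_ratio = 1.0     # min(tools, reasoning) / max(tools, reasoning)
depth_score = 1.0       # 1.0 when max depth <= 3
density_score = 1.0     # 1.0 when reasoning density >= 0.3

# Same weighting as AgentTrajectoryMetric: 0.3 + 0.3 + 0.2 + 0.2 = 1.0
score = step_efficiency * 0.3 + balance_ratio * 0.3 + depth_score * 0.2 + density_score * 0.2
print(round(score, 2))  # 0.94
```

With these inputs the only penalty comes from step efficiency, so the trajectory lands in the "excellent" band (>= 0.8).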

## Integration with LLM Evaluation

For a complete guide on using task span metrics in LLM evaluation workflows, see the [Using task span evaluation metrics](/evaluation/evaluate_your_llm#using-task-span-evaluation-metrics) section in the LLM evaluation guide.

## Related Documentation

- [Custom Metrics](/evaluation/metrics/custom_metric) - Creating traditional input/output evaluation metrics
- [SpanModel API Reference](https://www.comet.com/docs/opik/python-sdk-reference/message_processing_emulation/SpanModel.html) - Complete SpanModel documentation
- [Evaluation Overview](/evaluation/metrics/overview) - Understanding Opik's evaluation system