---
title: Define Evaluation Metrics
---

<div style={{
    position: 'relative',
    paddingBottom: '56.25%', // 16:9 aspect ratio
    height: 0,
    overflow: 'hidden',
    maxWidth: '100%',
    marginBottom: '20px'
}}>
    <iframe
        src="https://www.loom.com/embed/c2de2efd1b4c4d22b3a7c40dbc257572?sid=0afb6895-f476-4010-90eb-1604ced968b6"
        frameBorder="0"
        webkitallowfullscreen
        mozallowfullscreen
        allowfullscreen
        style={{
            position: 'absolute',
            top: 0,
            left: 0,
            width: '100%',
            height: '100%',
        }}
    />
</div>

## From Subjective Assessment to Quantifiable Metrics

This video explores Opik's comprehensive [metrics](https://www.comet.com/docs/opik/evaluation/metrics/overview) system that transforms subjective LLM assessment into quantifiable measurements. You'll discover the different types of automated scoring methods available, see practical examples using [Answer Relevance](https://www.comet.com/docs/opik/evaluation/metrics/answer_relevance) and [Levenshtein](https://www.comet.com/docs/opik/evaluation/metrics/heuristic_metrics) metrics, and learn how to create [custom metrics](https://www.comet.com/docs/opik/evaluation/metrics/custom_metric) when needed. The video also covers cost considerations and best practices for combining multiple [metrics](https://www.comet.com/docs/opik/evaluation/metrics/overview) to capture different dimensions of quality.
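To make the heuristic side concrete, here is a stand-alone sketch of what a Levenshtein-ratio metric computes: one minus the edit distance divided by the longer string's length, giving a similarity score in [0, 1]. This is an illustrative reimplementation, not Opik's own code; in practice you would import the metric from `opik.evaluation.metrics` as shown in the video.

```python
# Illustrative, pure-Python sketch of the Levenshtein-ratio calculation.
# The real metric ships with Opik (opik.evaluation.metrics); this version
# only shows the underlying math.

def levenshtein_distance(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    if len(a) < len(b):
        a, b = b, a
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            insert = current[j - 1] + 1
            delete = previous[j] + 1
            substitute = previous[j - 1] + (ca != cb)
            current.append(min(insert, delete, substitute))
        previous = current
    return previous[-1]

def levenshtein_ratio(output: str, reference: str) -> float:
    """Similarity in [0, 1]; 1.0 means the strings match exactly."""
    if not output and not reference:
        return 1.0
    distance = levenshtein_distance(output, reference)
    return 1.0 - distance / max(len(output), len(reference))

print(levenshtein_ratio("Paris", "Paris"))  # exact match -> 1.0
print(levenshtein_ratio("Paris", "paris"))  # one substitution over 5 chars -> 0.8
```

Because the score is deterministic and cheap to compute, heuristic metrics like this are well suited to fast regression checks, while LLM-as-a-judge metrics cover the qualities string comparison cannot capture.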

## Key Highlights

- **Comprehensive Metric Types**: Choose from heuristic metrics (exact match, contains, regex, JSON validation), hallucination detection, and LLM-as-a-judge approaches like G-Eval
- **Easy Implementation**: Import metrics directly from [`opik.evaluation.metrics`](https://www.comet.com/docs/opik/evaluation/overview#running-an-evaluation) and instantiate the metric classes, as demonstrated with Answer Relevance and Levenshtein ratio
- **Custom Metric Development**: Create your own metrics by extending Opik's base metric class when the built-in options don't meet your needs
- **UI Integration**: View metric scores in the trace overview by scrolling right or opening the feedback scores section, and manually add or remove scores as needed
- **Manual Feedback Definition**: Create custom feedback definitions in the Configuration section for human-applied metrics such as pass/fail classifications
- **Cost-Aware Evaluation**: Consider the trade-offs between evaluation speed, depth, and cost, especially when using expensive thinking models for LLM-as-a-judge approaches
- **Multi-Dimensional Assessment**: Combine multiple metrics (e.g., factual accuracy + helpfulness) to get a complete picture of quality rather than relying on a single metric
- **Filtering Capabilities**: Use feedback scores to filter traces and identify patterns in model performance across different quality dimensions
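The custom-metric pattern from the highlights above can be sketched as follows. Opik's documentation describes subclassing a base metric class and implementing a `score()` method that returns a score result; the `ScoreResult` and `BaseMetric` classes below are minimal stand-ins so the sketch runs on its own, and the keyword metric itself is a hypothetical example, not part of the library.

```python
# Hedged sketch of a custom metric in the style Opik's docs describe:
# subclass the base metric class and implement score(). ScoreResult and
# BaseMetric here are simplified stand-ins for Opik's own classes; check
# your installed opik version for the exact names and signatures.

from dataclasses import dataclass

@dataclass
class ScoreResult:
    """Stand-in for Opik's score result: a value, a name, and a reason."""
    value: float
    name: str
    reason: str = ""

class BaseMetric:
    """Stand-in for Opik's base metric class."""
    def score(self, **kwargs) -> ScoreResult:
        raise NotImplementedError

class ContainsAllKeywords(BaseMetric):
    """Hypothetical pass/fail metric: does the answer mention every required keyword?"""
    def __init__(self, keywords: list[str], name: str = "contains_all_keywords"):
        self.keywords = keywords
        self.name = name

    def score(self, output: str, **ignored_kwargs) -> ScoreResult:
        missing = [k for k in self.keywords if k.lower() not in output.lower()]
        return ScoreResult(
            value=0.0 if missing else 1.0,
            name=self.name,
            reason=f"missing keywords: {missing}" if missing else "all keywords present",
        )

metric = ContainsAllKeywords(["refund", "14 days"])
result = metric.score(output="You can request a refund within 14 days.")
print(result.value)  # 1.0
```

A binary metric like this pairs naturally with the manual pass/fail feedback definitions mentioned above, and its `reason` field gives reviewers context when filtering traces by score.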
