---
title: Evaluations & Metrics
sidebarTitle: Evaluations
description: Monitor performance, optimize costs, and evaluate LLM effectiveness
---

Evaluations help you understand how well your automation performs, which models work best for your use cases, and how to optimize for cost and reliability. This guide covers both monitoring your own workflows and running comprehensive evaluations.

## Why Evaluations Matter

- **Performance Optimization**: Identify which models and settings work best for your specific automation tasks
- **Cost Control**: Track token usage and inference time to optimize spending
- **Reliability**: Measure success rates and identify failure patterns
- **Model Selection**: Compare different LLMs on real-world tasks to make informed decisions

<Card
  title="Live Model Comparisons"
  icon="scale-balanced"
  href="https://www.stagehand.dev/evals"
>
  View real-time performance comparisons across different LLMs on the [Stagehand Evals Dashboard](https://www.stagehand.dev/evals)
</Card>

## Comprehensive Evaluations

Evaluations help you systematically test and improve your automation workflows. Stagehand provides both built-in evaluations and tools to create your own.

<Tip>
To run evals, you'll need to clone the [Stagehand repo](https://github.com/browserbase/stagehand) and run `npm install` to install the dependencies.
</Tip>
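Concretely, the setup in the tip above looks like this (the checkout directory name is assumed to match the repo name):

```bash
# Clone the Stagehand repo and install its dependencies
git clone https://github.com/browserbase/stagehand.git
cd stagehand
npm install
```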

We have two types of evals:
1. **Deterministic Evals** - Tests with fixed expected outcomes that run without any LLM inference.
2. **LLM-based Evals** - Tests that exercise the underlying functionality of Stagehand's AI primitives.


### LLM-based Evals

<Tip>
To run LLM evals, you'll need a [Braintrust account](https://www.braintrust.dev/docs/).
</Tip>

To run LLM-based evals, run `npm run evals` from within the Stagehand repo. This exercises the LLM primitives within Stagehand to make sure they're working as expected.
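For example, assuming your Braintrust API key is exported as `BRAINTRUST_API_KEY` (the environment variable the Braintrust SDK reads by default):

```bash
# Make the Braintrust API key available to the eval runner
export BRAINTRUST_API_KEY="sk-..."

# Run the full LLM-based eval suite from the repo root
npm run evals
```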

Evals are grouped into four categories:
1. **Act Evals** - Test the functionality of the `act` method.
2. **Extract Evals** - Test the functionality of the `extract` method.
3. **Observe Evals** - Test the functionality of the `observe` method.
4. **Combination Evals** - Test `act`, `extract`, and `observe` working together.

#### Configuring and Running Evals
You can view the specific evals in [`evals/tasks`](https://github.com/browserbase/stagehand/tree/main/evals/tasks). Each eval is grouped into eval categories based on [`evals/evals.config.json`](https://github.com/browserbase/stagehand/blob/main/evals/evals.config.json). You can specify models to run and other general task config in [`evals/taskConfig.ts`](https://github.com/browserbase/stagehand/blob/main/evals/taskConfig.ts).

To run a specific eval, you can run `npm run evals <eval>`, or run all evals in a category with `npm run evals category <category>`.
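For example (the task and category names below are illustrative; the real names live in `evals/evals.config.json`):

```bash
# Run a single eval task by name
npm run evals extract_github_stars

# Run every eval in the "extract" category
npm run evals category extract
```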


#### Viewing Eval Results
![Eval results](/images/evals.png)

Eval results are viewable on Braintrust. You can view the results of a specific eval by going to the Braintrust URL specified in the terminal when you run `npm run evals`.

By default, each eval will run five times per model. The "Exact Match" column shows the percentage of times the eval was correct. The "Error Rate" column shows the percentage of times the eval errored out.

You can use the Braintrust UI to filter by model/eval and aggregate results across all evals.

### Deterministic Evals

To run deterministic evals, run `npm run e2e` from within the Stagehand repo. This exercises the Playwright functionality within Stagehand to make sure it's working as expected.

These tests are in [`evals/deterministic`](https://github.com/browserbase/stagehand/tree/main/evals/deterministic) and test on both Browserbase browsers and local headless Chromium browsers.
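A typical local run needs no LLM credentials; the Browserbase-backed tests read the standard Browserbase environment variables (the names below are the ones the Browserbase SDK uses):

```bash
# Needed only for the Browserbase-backed tests; local headless
# Chromium tests run without them
export BROWSERBASE_API_KEY="bb_..."
export BROWSERBASE_PROJECT_ID="..."

# Run the deterministic (no-LLM) Playwright suite
npm run e2e
```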

## Creating Custom Evaluations

### Step-by-Step Guide

<Steps>
<Step title="Create Evaluation File">
Create a new file in `evals/tasks/your-eval.ts`. The `EvalTask` shape below is illustrative; check the existing tasks in [`evals/tasks`](https://github.com/browserbase/stagehand/tree/main/evals/tasks) for the current interface:

```typescript
import { z } from 'zod';
import { EvalTask } from '../types';

export const customEvalTask: EvalTask = {
  name: 'custom_task_name',
  description: 'Test a specific automation workflow',

  // Test setup: navigate to the page under test
  setup: async ({ page }) => {
    await page.goto('https://example.com');
  },

  // The actual test
  task: async ({ stagehand, page }) => {
    // Your automation logic
    await stagehand.act({ action: 'click the login button' });
    const result = await stagehand.extract({
      instruction: 'Get the user name',
      // extract expects a Zod schema, not a plain object
      schema: z.object({ username: z.string() }),
    });
    return result;
  },

  // Validation: compare the extraction against the expected output
  validate: (result, expected) => {
    return result.username === expected.username;
  },

  // Test cases
  testCases: [
    {
      input: { /* test input */ },
      expected: { username: 'john_doe' }
    }
  ],

  // Evaluation criteria
  scoring: {
    exactMatch: true,
    timeout: 30000, // milliseconds
    retries: 2
  }
};
```
</Step>

<Step title="Add to Configuration">
Update `evals/evals.config.json`:

```json
{
  "categories": {
    "custom": ["custom_task_name"],
    "existing_category": ["custom_task_name"]
  }
}
```
</Step>

<Step title="Run Your Evaluation">
```bash
# Test your custom evaluation
npm run evals custom_task_name

# Run the entire custom category
npm run evals category custom
```
</Step>
</Steps>


## Best Practices for Custom Evals

<AccordionGroup>
<Accordion title="Test Design Principles">
- **Atomic**: Each test should validate one specific capability
- **Deterministic**: Tests should produce consistent results
- **Realistic**: Use real-world scenarios and websites
- **Measurable**: Define clear success/failure criteria
</Accordion>

<Accordion title="Performance Optimization">
- **Parallel Execution**: Design tests to run independently
- **Resource Management**: Clean up after each test
- **Timeout Handling**: Set appropriate timeouts for operations
- **Error Recovery**: Handle failures gracefully
</Accordion>

<Accordion title="Data Quality">
- **Ground Truth**: Establish reliable expected outcomes
- **Edge Cases**: Test boundary conditions and error scenarios
- **Statistical Significance**: Run multiple iterations for reliability
- **Version Control**: Track changes to test cases over time
</Accordion>
</AccordionGroup>
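As a concrete illustration of the "Deterministic" and "Measurable" principles above, a validator can normalize extracted text before comparing it to ground truth, so cosmetic differences in whitespace or casing don't flip results between runs (the helper names here are hypothetical):

```typescript
// Hypothetical helper: normalize strings so whitespace/case noise
// doesn't make an otherwise-correct extraction fail.
const normalize = (s: string): string =>
  s.trim().toLowerCase().replace(/\s+/g, ' ');

// A deterministic, measurable validator: exact match after normalization.
const validateUsername = (
  result: { username: string },
  expected: { username: string },
): boolean => normalize(result.username) === normalize(expected.username);
```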

### Troubleshooting Evaluations
<AccordionGroup>
<Accordion title="Evaluation Timeouts">
**Symptoms**: Tests fail with timeout errors

**Solutions**:
- Increase timeout in `taskConfig.ts`
- Use faster models (Gemini 2.5 Flash, GPT-4o Mini)
- Optimize test scenarios to be less complex
- Check network connectivity to LLM providers
</Accordion>

<Accordion title="Inconsistent Results">
**Symptoms**: Same test passes/fails randomly

**Solutions**:
- Set temperature to 0 for deterministic outputs
- Increase repetitions for statistical significance
- Use more capable models for complex tasks
- Check for dynamic website content affecting tests
</Accordion>

<Accordion title="High Evaluation Costs">
**Symptoms**: Token usage exceeding budget

**Solutions**:
- Use cost-effective models (Gemini 2.0 Flash, GPT-4o Mini)
- Reduce repetitions for initial testing
- Focus on specific evaluation categories
- Use local browser environment to reduce Browserbase costs
</Accordion>

<Accordion title="Braintrust Integration Issues">
**Symptoms**: Results not uploading to dashboard

**Solutions**:
- Check Braintrust API key configuration
- Verify internet connectivity
- Update Braintrust SDK to latest version
- Check project permissions in Braintrust dashboard
</Accordion>
</AccordionGroup>