---
title: "Evolutionary Optimizer: Genetic Algorithms"
subtitle: "Discover optimal prompts with genetic algorithms and multi-objective optimization."
description: "Learn how to use the Evolutionary Optimizer to discover optimal prompts through genetic algorithms, with support for multi-objective optimization and LLM-driven genetic operations."
---

The `EvolutionaryOptimizer` uses genetic algorithms to refine and discover effective prompts. It
iteratively evolves a population of prompts, applying selection, crossover, and mutation operations
to find prompts that maximize a given evaluation metric. This optimizer can also perform
multi-objective optimization (e.g., maximizing score while minimizing prompt length) and leverage
LLMs for more sophisticated genetic operations.

<Note>
  `EvolutionaryOptimizer` is a great choice when you want to explore a very diverse range of prompt
  structures or when you have multiple objectives to optimize for (e.g., performance score and
  prompt length). Its strength lies in its ability to escape local optima and discover novel prompt
  solutions through its evolutionary mechanisms, especially when enhanced with LLM-driven genetic
  operators.
</Note>

## How It Works

The `EvolutionaryOptimizer` is built upon the [DEAP](https://deap.readthedocs.io/) library for
evolutionary computation. The core concept behind the optimizer is that we evolve a population of
prompts over multiple generations to find the best one.

We apply three genetic operations to evolve the population of prompts:

- **Selection**: We select the best-performing prompts from the current population to act as parents for the next generation.
- **Crossover**: We combine pairs of parent prompts to produce child prompts that mix traits from both parents.
- **Mutation**: We randomly modify some of the children to introduce variation into the new population.

This process repeats for a fixed number of generations, and the best prompt found across all generations is returned.
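The selection, crossover, and mutation loop above can be sketched in plain Python. This is a minimal, library-free illustration of the general generational pattern (using a toy bit-string fitness), not the optimizer's actual DEAP-based implementation:

```python
import random

def evolve(fitness, init_population, num_generations=30,
           tournament_size=3, crossover_rate=0.8, mutation_rate=0.1):
    """Generic generational GA loop: selection -> crossover -> mutation."""
    population = list(init_population)
    best = max(population, key=fitness)
    for _ in range(num_generations):
        # Selection: tournament -- pick the fittest of a small random subset.
        def select():
            return max(random.sample(population, tournament_size), key=fitness)

        children = []
        while len(children) < len(population):
            p1, p2 = select(), select()
            # Crossover: one-point recombination of the two parents.
            if random.random() < crossover_rate:
                cut = random.randrange(1, len(p1))
                child = p1[:cut] + p2[cut:]
            else:
                child = list(p1)
            # Mutation: flip each gene with a small probability.
            child = [g if random.random() > mutation_rate else 1 - g for g in child]
            children.append(child)
        population = children
        # Track the best candidate seen across all generations.
        best = max(population + [best], key=fitness)
    return best

# Toy example: maximize the number of 1s in a 20-bit string (OneMax).
random.seed(0)
pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
best = evolve(sum, pop)
```

In the real optimizer, the "genes" are prompt messages rather than bits, and crossover/mutation can be performed by an LLM instead of random gene flips.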

<Frame>
  <img src="/img/agent_optimization/evolutionary_optimizer.png" alt="Evolutionary Optimizer" />
</Frame>

<Tip>
  The optimizer is open source; you can check out the code in the
  [Opik repository](https://github.com/comet-ml/opik/tree/main/sdks/opik_optimizer/src/opik_optimizer/algorithms/evolutionary_optimizer).
</Tip>

## Quickstart

You can use the `EvolutionaryOptimizer` to optimize a prompt:

```python maxLines=1000
from opik_optimizer import EvolutionaryOptimizer
from opik.evaluation.metrics import LevenshteinRatio # or any other suitable metric
from opik_optimizer import datasets, ChatPrompt

# 1. Define your evaluation dataset
dataset = datasets.tiny_test() # Replace with your actual dataset

# 2. Configure the evaluation metric
def levenshtein_ratio(dataset_item, llm_output):
    return LevenshteinRatio().score(reference=dataset_item["label"], output=llm_output)

# 3. Define your base prompt and task configuration
initial_prompt = ChatPrompt(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "{text}"}
    ]
)

# 4. Initialize the EvolutionaryOptimizer
optimizer = EvolutionaryOptimizer(
    model="openai/gpt-4o-mini",
    model_parameters={"temperature": 0.4},
    population_size=20,
    num_generations=10,
)

# 5. Run the optimization
optimization_result = optimizer.optimize_prompt(
    prompt=initial_prompt,
    dataset=dataset,
    metric=levenshtein_ratio,
    n_samples=5
)

# 6. View the results
optimization_result.display()
```

## Configuration Options

### Optimizer parameters

The optimizer has the following parameters:

<ParamField path="model" type="str" optional={true} default="gpt-4o" />
<ParamField path="model_parameters" type="dict[str, typing.Any] | None" optional={true} />
<ParamField path="population_size" type="int" optional={true} default="30" />
<ParamField path="num_generations" type="int" optional={true} default="15" />
<ParamField path="mutation_rate" type="float" optional={true} default="0.2" />
<ParamField path="crossover_rate" type="float" optional={true} default="0.8" />
<ParamField path="tournament_size" type="int" optional={true} default="4" />
<ParamField path="elitism_size" type="int" optional={true} default="3" />
<ParamField path="adaptive_mutation" type="bool" optional={true} default="True" />
<ParamField path="enable_moo" type="bool" optional={true} default="True" />
<ParamField path="enable_llm_crossover" type="bool" optional={true} default="True" />
<ParamField path="output_style_guidance" type="str | None" optional={true} />
<ParamField path="infer_output_style" type="bool" optional={true} default="False" />
<ParamField path="n_threads" type="int" optional={true} default="12" />
<ParamField path="verbose" type="int" optional={true} default="1" />
<ParamField path="seed" type="int" optional={true} default="42" />
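When `enable_moo` is set, candidates are compared on multiple objectives at once, e.g. maximizing the metric score while minimizing prompt length. The sketch below illustrates the Pareto-dominance comparison that underlies this kind of multi-objective selection; it is a conceptual illustration, not the optimizer's internal DEAP code:

```python
def dominates(a, b):
    """True if candidate `a` Pareto-dominates `b`: at least as good on every
    objective and strictly better on at least one. Each candidate is a
    (score, prompt_length) pair; higher score and shorter length are better."""
    score_a, len_a = a
    score_b, len_b = b
    no_worse = score_a >= score_b and len_a <= len_b
    strictly_better = score_a > score_b or len_a < len_b
    return no_worse and strictly_better

def pareto_front(candidates):
    """Keep only candidates that no other candidate dominates."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# (score, prompt_length) pairs: the long 0.90 prompt is dominated by the
# equally scoring but shorter one; the rest are incomparable trade-offs.
candidates = [(0.90, 120), (0.85, 60), (0.90, 200), (0.70, 50)]
front = pareto_front(candidates)
```

With multi-objective optimization enabled, the result is a set of such trade-off candidates rather than a single winner.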

### `optimize_prompt` parameters

The `optimize_prompt` method has the following parameters:

<ParamField path="prompt" type="ChatPrompt">The prompt to optimize</ParamField>
<ParamField path="dataset" type="Dataset">The dataset to use for evaluation</ParamField>
<ParamField path="metric" type="Callable">Metric function to optimize with, should have the arguments `dataset_item` and `llm_output`</ParamField>
<ParamField path="experiment_config" type="dict | None" optional={true}>Optional experiment configuration</ParamField>
<ParamField path="n_samples" type="int | None" optional={true}>Optional number of samples to use</ParamField>
<ParamField path="auto_continue" type="bool" optional={true} default="False">Whether to automatically continue optimization</ParamField>
<ParamField path="agent_class" type="type[opik_optimizer.optimizable_agent.OptimizableAgent] | None" optional={true}>Optional agent class to use</ParamField>
<ParamField path="project_name" type="str" optional={true} default="Optimization">Opik project name for logging traces (default: "Optimization")</ParamField>
<ParamField path="max_trials" type="int" optional={true} default="10" />
<ParamField path="mcp_config" type="opik_optimizer.mcp_utils.mcp_workflow.MCPExecutionConfig | None" optional={true}>MCP tool calling configuration (default: None)</ParamField>
<ParamField path="args" type="Any" />
<ParamField path="kwargs" type="Any" />

## Model Support

There are two models to consider when using the `EvolutionaryOptimizer`:

- `EvolutionaryOptimizer.model`: The model used for LLM-driven genetic operations such as mutation and crossover when evolving the population of prompts.
- `ChatPrompt.model`: The model used to execute and evaluate the candidate prompts against the dataset.
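For example, the two models can be set independently (this sketch assumes your `ChatPrompt` accepts a `model` argument, as in the SDK's chat prompt API):

```python
from opik_optimizer import ChatPrompt, EvolutionaryOptimizer

# Model used to execute each candidate prompt during evaluation.
prompt = ChatPrompt(
    model="openai/gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "{text}"},
    ],
)

# Model used for the LLM-driven genetic operations (mutation, crossover).
optimizer = EvolutionaryOptimizer(model="openai/gpt-4o")
```

A common pattern is a cheaper model for prompt evaluation and a stronger model for the genetic operations, since the latter runs far less often.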

The `model` parameter accepts any LiteLLM-supported model string (e.g., `"gpt-4o"`, `"azure/gpt-4"`,
`"anthropic/claude-3-opus"`, `"gemini/gemini-1.5-pro"`). You can also pass in extra model parameters
using the `model_parameters` parameter:

```python
optimizer = EvolutionaryOptimizer(
    model="anthropic/claude-3-opus-20240229",
    model_parameters={
        "temperature": 0.7,
        "max_tokens": 4096
    }
)
```

## Next Steps

1. Explore specific [Optimizers](/agent_optimization/overview#optimization-algorithms) for algorithm details.
2. Refer to the [FAQ](/agent_optimization/faq) for common questions and troubleshooting.
3. Refer to the [API Reference](/agent_optimization/api-reference) for detailed configuration options.
