---
title: "MetaPrompt Optimizer"
subtitle: "Refine and improve LLM prompts with systematic analysis."
description: "Learn how to use the MetaPrompt Optimizer to refine and improve your LLM prompts through systematic analysis and iterative refinement."
---

The `MetaPromptOptimizer` uses a meta-prompting approach to improve the structure and
effectiveness of prompts, systematically analyzing and refining prompt templates, instructions, and
examples.

<Note>
  The `MetaPromptOptimizer` is a strong choice when you have an initial instruction prompt and want to
  iteratively refine its wording, structure, and clarity using LLM-driven suggestions. It excels at
  general-purpose prompt improvement where the core idea of your prompt is sound but could be
  phrased better for the LLM, or when you want to explore variations suggested by a reasoning model.
</Note>

## How it works

The `MetaPromptOptimizer` automates the process of prompt refinement by using a "reasoning" LLM to
critique and improve your initial prompt. Here's a conceptual breakdown:

<Frame>
  <img src="/img/agent_optimization/metaprompt_optimizer.png" alt="MetaPrompt Optimizer" />
</Frame>

<Tip>
  The optimizer is open-source; you can check out the code in the
  [Opik repository](https://github.com/comet-ml/opik/tree/main/sdks/opik_optimizer/src/opik_optimizer/algorithms/meta_prompt_optimizer).
</Tip>
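At a high level, the optimizer runs a generate-evaluate-select loop: score the current best prompt, ask a reasoning model for candidate rewrites, score each candidate, and keep any improvement. The self-contained sketch below illustrates that loop with a stubbed candidate generator and a stubbed metric; it is a simplification for intuition, not the actual implementation.

```python
# Simplified sketch of the meta-prompt optimization loop.
# In the real optimizer, generate_candidates calls a reasoning LLM with the
# current prompt, its score, and task context; evaluate runs the prompt
# against a dataset and scores the outputs with a metric. Both are
# deterministic stubs here.

def generate_candidates(prompt: str, prompts_per_round: int) -> list[str]:
    # Stub: a fixed pool of rewrites standing in for LLM suggestions.
    rewrites = [
        "Answer the question concisely.",
        "Answer the question, citing evidence from the context.",
        "Think step by step, then answer the question.",
        "Provide an answer to the question in one sentence.",
    ]
    return rewrites[:prompts_per_round]

def evaluate(prompt: str) -> float:
    # Stub metric: rewards prompts that encourage reasoning and brevity.
    score = 0.0
    if "step by step" in prompt:
        score += 0.5
    if "concise" in prompt or "one sentence" in prompt:
        score += 0.3
    return score

def optimize(initial_prompt: str, rounds: int = 3,
             prompts_per_round: int = 4) -> tuple[str, float]:
    best_prompt, best_score = initial_prompt, evaluate(initial_prompt)
    for _ in range(rounds):
        for candidate in generate_candidates(best_prompt, prompts_per_round):
            score = evaluate(candidate)
            if score > best_score:  # keep only strict improvements
                best_prompt, best_score = candidate, score
    return best_prompt, best_score

best, score = optimize("Provide an answer to the question.")
print(best, score)
```

In the real optimizer, `prompts_per_round` and the round budget (bounded by `max_trials`) play the roles shown above, and candidate evaluation runs in parallel across `n_threads`.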


## Quickstart

You can use the `MetaPromptOptimizer` to optimize a prompt by following these steps:

```python maxLines=1000
from opik_optimizer import MetaPromptOptimizer
from opik.evaluation.metrics import LevenshteinRatio
from opik_optimizer import datasets, ChatPrompt

# Initialize optimizer
optimizer = MetaPromptOptimizer(
    model="openai/gpt-4",
    model_parameters={
        "temperature": 0.1,
        "max_tokens": 5000
    },
    n_threads=8,
    seed=42
)

# Prepare dataset
dataset = datasets.hotpot(count=300)

# Define metric and task configuration (see docs for more options)
def levenshtein_ratio(dataset_item, llm_output):
    return LevenshteinRatio().score(reference=dataset_item['answer'], output=llm_output)

prompt = ChatPrompt(
    messages=[
        {"role": "system", "content": "Provide an answer to the question."},
        {"role": "user", "content": "{question}"}
    ]
)

# Run optimization
results = optimizer.optimize_prompt(
    prompt=prompt,
    dataset=dataset,
    metric=levenshtein_ratio,
    n_samples=100
)

# Access results
results.display()
```

## Configuration Options

### Optimizer parameters

The optimizer has the following parameters:

<ParamField path="model" type="str" optional={true} default="gpt-4o">LiteLLM model name for optimizer's internal reasoning/generation calls</ParamField>
<ParamField path="model_parameters" type="dict[str, typing.Any] | None" optional={true}>Optional dict of LiteLLM parameters for optimizer's internal LLM calls. Common params: temperature, max_tokens, max_completion_tokens, top_p.</ParamField>
<ParamField path="prompts_per_round" type="int" optional={true} default="4">Number of candidate prompts to generate per optimization round</ParamField>
<ParamField path="enable_context" type="bool" optional={true} default="True">Whether to include task-specific context when reasoning about improvements</ParamField>
<ParamField path="n_threads" type="int" optional={true} default="12">Number of parallel threads for prompt evaluation</ParamField>
<ParamField path="verbose" type="int" optional={true} default="1">Controls internal logging/progress bars (0=off, 1=on)</ParamField>
<ParamField path="seed" type="int" optional={true} default="42">Random seed for reproducibility</ParamField>

### `optimize_prompt` parameters

The `optimize_prompt` method has the following parameters:

<ParamField path="prompt" type="ChatPrompt">The ChatPrompt to optimize. Can include system/user/assistant messages, tools, and model configuration.</ParamField>
<ParamField path="dataset" type="Dataset">Opik Dataset containing evaluation examples. Each item is passed to the prompt during evaluation.</ParamField>
<ParamField path="metric" type="Callable">Evaluation function that takes (dataset_item, llm_output) and returns a score (float). Higher scores indicate better performance.</ParamField>
<ParamField path="experiment_config" type="dict | None" optional={true}>Optional metadata dictionary to log with Opik experiments. Useful for tracking experiment parameters and context.</ParamField>
<ParamField path="n_samples" type="int | None" optional={true}>Number of dataset items to use per evaluation. If None, uses full dataset. Lower values speed up optimization but may be less reliable.</ParamField>
<ParamField path="auto_continue" type="bool" optional={true} default="False">If True, optimizer may continue beyond max_trials if improvements are still being found.</ParamField>
<ParamField path="agent_class" type="type[opik_optimizer.optimizable_agent.OptimizableAgent] | None" optional={true}>Custom agent class for prompt execution. If None, uses default LiteLLM-based agent. Must inherit from OptimizableAgent.</ParamField>
<ParamField path="project_name" type="str" optional={true} default="Optimization">Opik project name for logging traces and experiments.</ParamField>
<ParamField path="max_trials" type="int" optional={true} default="10">Maximum total number of prompts to evaluate across all rounds. Optimizer stops when this limit is reached.</ParamField>
<ParamField path="mcp_config" type="opik_optimizer.mcp_utils.mcp_workflow.MCPExecutionConfig | None" optional={true}>Optional MCP (Model Context Protocol) execution configuration for prompts that use external tools. Enables tool-calling workflows.</ParamField>
<ParamField path="candidate_generator" type="collections.abc.Callable[..., list[opik_optimizer.api_objects.chat_prompt.ChatPrompt]] | None" optional={true}>Optional custom function to generate candidate prompts. Overrides default meta-reasoning generator. Should return list[ChatPrompt].</ParamField>
<ParamField path="candidate_generator_kwargs" type="dict[str, typing.Any] | None" optional={true}>Optional kwargs to pass to candidate_generator.</ParamField>
<ParamField path="args" type="Any" />
<ParamField path="kwargs" type="Any" />


### Model Support

There are two models to consider when using the `MetaPromptOptimizer`:
- `MetaPromptOptimizer.model`: The model used for reasoning and candidate generation.
- `ChatPrompt.model`: The model used to execute the prompt during evaluation.
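Because the two models are configured independently, you can pair a strong reasoning model for candidate generation with a cheaper model for evaluation. A minimal sketch (the model names are illustrative, not recommendations):

```python
from opik_optimizer import MetaPromptOptimizer, ChatPrompt

# Reasoning and candidate generation use a strong model...
optimizer = MetaPromptOptimizer(model="openai/gpt-4o")

# ...while the prompt itself is executed with a cheaper model
# when evaluating candidates against the dataset.
prompt = ChatPrompt(
    model="openai/gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Provide an answer to the question."},
        {"role": "user", "content": "{question}"},
    ],
)
```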

The `model` parameter accepts any LiteLLM-supported model string (e.g., `"gpt-4o"`, `"azure/gpt-4"`,
`"anthropic/claude-3-opus"`, `"gemini/gemini-1.5-pro"`). You can also pass in extra model parameters
using the `model_parameters` parameter:

```python
optimizer = MetaPromptOptimizer(
    model="anthropic/claude-3-opus-20240229",
    model_parameters={
        "temperature": 0.7,
        "max_tokens": 4096
    }
)
```

## MCP Tool Calling Support

The MetaPrompt Optimizer is the only optimizer that currently supports **MCP (Model Context
Protocol) tool calling optimization**. This means you can optimize prompts that include MCP tools
and function calls.

<Note>
  MCP tool calling optimization is a specialized feature that allows the optimizer to understand and optimize prompts
  that use external tools and functions through the Model Context Protocol. This is particularly useful for complex
  agent workflows that require tool usage.
</Note>

For comprehensive information about tool optimization, see the [Tool Optimization Guide](/agent_optimization/algorithms/tool_optimization).

## Research and References

- [Meta-Prompting for Language Models](https://arxiv.org/abs/2401.12954)
