---
title: "Chaining optimizers"
description: "Run multiple optimizers in sequence to balance exploration and fine-tuning."
---

Some projects benefit from running two or more optimizers back-to-back. For example, you might use the MetaPrompt optimizer to improve wording, then the Parameter optimizer to fine-tune sampling settings. This guide explains why you might chain runs, the trade-offs involved, and the APIs used to pass prompts and metadata between stages.

## Strategy patterns

| Pipeline | Why run it | Pros | Cons | Complexity |
| --- | --- | --- | --- | --- |
| Hierarchical Reflective → Parameter | Existing long/complex prompts: reflective analysis finds failure modes; the Parameter optimizer then tightens sampling settings on the improved prompt. | Excellent for legacy prompts with lots of accumulated complexity. Helps produce an explainable changelog. | Requires metrics with rich `reason` strings; two stages increase cost. | Medium |
| Evolutionary → Few-Shot Bayesian | Cold-start scenarios: explore many prompt architectures first, then let Few-Shot Bayesian pick the best example combination for the winning structure. | High diversity followed by precise example selection. | Evolutionary runs are expensive; the Bayesian stage relies on curated datasets. | High |
| MetaPrompt → Parameter | Baseline prompts need polish plus sampling tweaks. | Quick wins with minimal configuration; can run in under an hour. | Less insight into failure modes than reflective pipelines. | Low |
| Evolutionary → Parameter | Hunt for novel prompts, then squeeze out cost/latency by tuning `temperature`/`top_p`. | Balances creativity with production readiness. | Two heavy optimization loops; ensure budget headroom. | High |

## Example pipeline

```python
from opik_optimizer import MetaPromptOptimizer, ParameterOptimizer, ChatPrompt
from opik_optimizer.parameter_optimizer import ParameterSearchSpace

meta = MetaPromptOptimizer(model="openai/gpt-4o")
parameter = ParameterOptimizer(model="openai/gpt-4o")

# Stage 1: improve the prompt wording.
# `prompt`, `dataset`, and `metric` are assumed to be defined already.
meta_result = meta.optimize_prompt(
    prompt=prompt,
    dataset=dataset,
    metric=metric,
    max_trials=4,
)

# Reuse the optimized prompt from the first stage
optimized_prompt = prompt.with_messages(meta_result.prompt)

search_space = ParameterSearchSpace(parameters=[
    {"name": "temperature", "distribution": "float", "low": 0.1, "high": 0.9},
    {"name": "top_p", "distribution": "float", "low": 0.7, "high": 1.0},
])

# Stage 2: tune sampling parameters on the improved prompt
final_result = parameter.optimize_parameter(
    prompt=optimized_prompt,
    dataset=dataset,
    metric=metric,
    parameter_space=search_space,
    max_trials=20,
)
```

## Checklist

- **Freeze datasets and metrics** between stages to keep comparisons fair.
- **Use validation datasets consistently** – if you provide a `validation_dataset` in the first stage, use the same split in subsequent stages to ensure fair comparison and avoid overfitting.
- **Log pipeline metadata** (e.g., `experiment_config={"pipeline": "hierarchical_then_param"}`) so dashboards show lineage.
- **Budget tokens** – chained runs multiply costs; start with smaller `n_samples` and increase once results look promising.
- **Reuse OptimizationResult** – every optimizer returns an `OptimizationResult`, so you can pass `result.prompt` (and `result.details`, `result.history`) directly into the next stage without rebuilding state.
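As a sketch of the metadata bullet above, a small helper can build a consistent `experiment_config` payload for every stage. The `stage_config` helper below is hypothetical (not part of `opik_optimizer`); the pipeline name and field choices are illustrative assumptions:

```python
def stage_config(pipeline: str, stage: int, optimizer: str) -> dict:
    """Build a consistent experiment_config payload for one pipeline stage."""
    return {
        "pipeline": pipeline,   # same value for every stage, so dashboards show lineage
        "stage": stage,         # 1-based position in the chain
        "optimizer": optimizer, # which optimizer ran this stage
    }

# Pass the same pipeline name to every stage, e.g.:
#   meta.optimize_prompt(..., experiment_config=stage_config("meta_then_param", 1, "MetaPromptOptimizer"))
#   parameter.optimize_parameter(..., experiment_config=stage_config("meta_then_param", 2, "ParameterOptimizer"))
```

Keeping the same `pipeline` key across stages is what lets you filter both runs onto one dashboard view later.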

## Automation tips

- Use Makefiles or CI workflows to run stage 1 → stage 2 with clear checkpoints.
- Store intermediate prompts in version control alongside metadata (optimizer, score, dataset).
- Notify stakeholders with summary reports generated from `final_result.history`.
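As a minimal sketch of the version-control tip above, each stage can write its optimized prompt plus lineage metadata to a JSON checkpoint that gets committed alongside the code. The `save_checkpoint` helper, filename, score, and field names here are all illustrative assumptions, not part of `opik_optimizer`:

```python
import json
from pathlib import Path

def save_checkpoint(path: str, messages: list, optimizer: str, score: float) -> None:
    """Persist a stage's optimized prompt plus minimal lineage metadata as JSON."""
    payload = {"optimizer": optimizer, "score": score, "messages": messages}
    Path(path).write_text(json.dumps(payload, indent=2))

# After stage 1, check the result in so stage 2 (or a re-run) can start from it:
save_checkpoint(
    "stage1_meta.json",
    messages=[{"role": "system", "content": "You are a helpful assistant."}],
    optimizer="MetaPromptOptimizer",
    score=0.82,
)
```

In a real pipeline you would pass `meta_result.prompt` and the stage's score instead of the hard-coded values, and point `path` at a checkpoints directory in your repository.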

## Related docs

- [Optimize prompts](/agent_optimization/optimization/optimize_prompts)
- [Few-Shot Bayesian optimizer](/agent_optimization/algorithms/fewshot_bayesian_optimizer)
- [Hierarchical Reflective optimizer](/agent_optimization/algorithms/hierarchical_reflective_optimizer)
- [Parameter optimizer](/agent_optimization/algorithms/parameter_optimizer)
