Opik provides a prompt library that you can use to manage your prompts. Storing
prompts in a library allows you to version them, reuse them across projects, and
manage them in a central location.

Using a prompt library does not mean you can't also store your prompts in code: we
have designed the prompt library to work seamlessly with your existing prompt
files while providing the benefits of central versioning and management.

## Managing prompts stored in code

The recommended way to create and manage prompts is using the
[`Prompt`](https://www.comet.com/docs/opik/python-sdk-reference/library/Prompt.html)
object. This lets you keep versioning your prompts in code while each version is
also recorded in the Opik platform, making it easier to track your progress.

<Tabs>
    <Tab value="Prompts stored in code" title="Prompts stored in code">

        ```python
        import opik

        # Prompt text stored in a variable
        PROMPT_TEXT = "Write a summary of the following text: {{text}}"

        # Create a prompt
        prompt = opik.Prompt(
            name="prompt-summary",
            prompt=PROMPT_TEXT,
            metadata={"environment": "production"}
        )

        # Print the prompt text
        print(prompt.prompt)

        # Build the prompt
        print(prompt.format(text="Hello, world!"))
        ```

    </Tab>
    <Tab value="Prompts stored in a file" title="Prompts stored in a file">

        ```python
        import opik

        # Read the prompt from a file
        with open("prompt.txt", "r") as f:
            prompt_text = f.read()

        prompt = opik.Prompt(name="prompt-summary", prompt=prompt_text)

        # Print the prompt text
        print(prompt.prompt)

        # Build the prompt
        print(prompt.format(text="Hello, world!"))
        ```

    </Tab>

</Tabs>

The prompt will now be stored in the library and versioned:

<Frame>
  <img src="/img/prompt_engineering/prompt_library_versions.png" />
</Frame>

<Tip>
The [`Prompt`](https://www.comet.com/docs/opik/python-sdk-reference/library/Prompt.html)
object will create a new prompt in the library if this prompt doesn't already exist,
otherwise it will return the existing prompt.

This means you can safely run the above code multiple times without creating
duplicate prompts.

</Tip>

## Using the low level SDK

If you would rather keep prompts in the Opik platform and update or download
them manually, you can use the low-level Python SDK to manage your prompts.

### Creating prompts

You can create a new prompt in the library using either the SDK or the UI:

<Tabs>
    <Tab value="Using the Python SDK" title="Using the Python SDK">
        ```python
        import opik

        opik.configure()
        client = opik.Opik()

        # Create a new prompt
        prompt = client.create_prompt(
            name="prompt-summary",
            prompt="Write a summary of the following text: {{text}}",
            metadata={"environment": "development"}
        )
        ```
    </Tab>
    <Tab value="Using the UI" title="Using the UI">
        You can create a prompt in the UI by navigating to the Prompt library and clicking `Create new prompt`. This will open a dialog where you can enter the prompt name, the prompt text, and optionally a description:

        <Frame>

<img src="/img/prompt_engineering/prompt_library.png" />
</Frame>

        You can also edit a prompt by clicking on the prompt name in the library and clicking `Edit prompt`.
    </Tab>

</Tabs>

### Adding prompts to traces and spans

You can associate prompts with your traces and spans using the `opik_context` module. This is useful when you want to track which prompts were used during the execution of your functions:

<Tabs>
    <Tab value="Adding prompts to traces" title="Adding prompts to traces">
        ```python
        import opik
        from opik.opik_context import update_current_trace

        # Create prompts
        system_prompt = opik.Prompt(
            name="system-prompt",
            prompt="You are a helpful assistant that provides accurate and concise answers."
        )

        # Get prompt from the Prompt library
        client = opik.Opik()
        user_prompt = client.get_prompt(name="user-prompt")

        @opik.track
        def process_user_query(question: str) -> str:
            # Add prompts to the current trace
            update_current_trace(
                name="user-query-processing",
                prompts=[system_prompt, user_prompt],
                metadata={"query_type": "general"}
            )
            
            # Your processing logic here
            formatted_prompt = user_prompt.format(question=question)
            # ... rest of your function
            return "Response to: " + question
        ```
    </Tab>
    <Tab value="Adding prompts to spans" title="Adding prompts to spans">
        ```python
        import opik
        from opik.opik_context import update_current_span

        # Create a prompt for a specific operation
        analysis_prompt = opik.Prompt(
            name="text-analysis-prompt",
            prompt="Analyze the sentiment of the following text: {{text}}"
        )

        @opik.track
        def analyze_sentiment(text: str) -> str:
            # Add prompt to the current span
            update_current_span(
                name="sentiment-analysis",
                prompts=[analysis_prompt],
                metadata={"analysis_type": "sentiment"}
            )
            
            # Your analysis logic here
            formatted_prompt = analysis_prompt.format(text=text)
            # ... rest of your function
            return "Positive"  # example result
        ```
    </Tab>
    <Tab value="Combined usage" title="Combined usage">
        ```python
        import opik
        from typing import Any, Dict
        from opik.opik_context import update_current_trace, update_current_span

        # Create different prompts for different purposes
        main_prompt = opik.Prompt(
            name="main-processing-prompt",
            prompt="Process the following data: {{data}}"
        )

        validation_prompt = opik.Prompt(
            name="validation-prompt",
            prompt="Validate this result: {{result}}"
        )

        def process_data(data: str) -> Dict[str, Any]:
            # Placeholder for your own processing logic
            return {"processed": data}

        @opik.track
        def validate_result(result: Dict[str, Any]) -> str:
            # Add validation prompt to span level
            update_current_span(
                name="result-validation",
                prompts=[validation_prompt],
                metadata={"validation_type": "result_check"}
            )

            # ... validation logic

            return "Valid"  # example result

        @opik.track
        def complex_processing(data: str) -> str:
            # Add main prompt to trace level
            update_current_trace(
                name="complex-data-processing",
                prompts=[main_prompt],
                metadata={"processing_type": "complex"}
            )
            
            # Process the data
            result = process_data(data)

            # Validate the result
            validated_result = validate_result(result)

            return validated_result

        complex_processing("My data")
        ```
    </Tab>
</Tabs>

You can view the prompts associated with a trace or span in the Opik UI:

<Frame>
    <img src="/img/prompt_engineering/prompt_opik_context_update.png" />
</Frame>

Further details on using prompts from the Prompt library are provided in the following sections.


### Using prompts in supported integrations

Prompts can be used in all supported third-party integrations by attaching them to traces and spans through the [`opik_context` module](/docs/opik/prompt_engineering/prompt_management#adding-prompts-to-traces-and-spans).

For instance, you can use prompts with the `Google ADK` integration, as shown in the [example here](/docs/opik/integrations/adk#prompts-integration).


### Downloading your prompts

Once a prompt is created in the library, you can download it in code using the [`Opik.get_prompt`](https://www.comet.com/docs/opik/python-sdk-reference/Opik.html#opik.Opik.get_prompt) method:

```python
import opik

opik.configure()
client = opik.Opik()


# Get the prompt
prompt = client.get_prompt(name="prompt-summary")

# Format the prompt
formatted_prompt = prompt.format(text="Hello, world!")
print(formatted_prompt)
```

If you are not using the SDK, you can download a prompt by using the [REST API](/reference/rest-api/overview).

### Searching prompts

To discover prompts by name substring and/or filters, use [`search_prompts`](https://www.comet.com/docs/opik/python-sdk-reference/Opik.html#opik.Opik.search_prompts). Filters are written in Opik Query Language (OQL):

```python
import opik

client = opik.Opik()

# Search by name substring only
latest_versions = client.search_prompts(
    filter_string='name contains "summary"'
)

# Search by name substring and tags filter
filtered = client.search_prompts(
    filter_string='name contains "summary" AND tags contains "alpha" AND tags contains "beta"',
)

for prompt in filtered:
    print(prompt.name, prompt.commit, prompt.prompt)
```

The `filter_string` parameter uses Opik Query Language (OQL) with the format:
`"<COLUMN> <OPERATOR> <VALUE> [AND <COLUMN> <OPERATOR> <VALUE>]*"`

**Supported columns for prompts:**

| Column       | Type   | Operators                                                                   |
| ------------ | ------ | --------------------------------------------------------------------------- |
| `id`         | String | `=`, `!=`, `contains`, `not_contains`, `starts_with`, `ends_with`, `>`, `<` |
| `name`       | String | `=`, `!=`, `contains`, `not_contains`, `starts_with`, `ends_with`, `>`, `<` |
| `created_by` | String | `=`, `!=`, `contains`, `not_contains`, `starts_with`, `ends_with`, `>`, `<` |
| `tags`       | List   | `contains`                                                                  |

**Examples:**

- `tags contains "production"` - Filter by tag
- `name contains "summary"` - Filter by name substring
- `created_by = "user@example.com"` - Filter by creator
- `tags contains "alpha" AND tags contains "beta"` - Multiple tag filtering

`search_prompts` returns the **latest** version for each matching prompt.
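When filter strings are assembled dynamically (for example, from user-selected tags), a small helper can keep the OQL syntax in one place. The helper below is illustrative only and not part of the Opik SDK:

```python
def build_prompt_filter(name_substring=None, tags=()):
    """Assemble an OQL filter_string for search_prompts.

    Illustrative helper, not part of the Opik SDK.
    """
    clauses = []
    if name_substring:
        clauses.append(f'name contains "{name_substring}"')
    for tag in tags:
        clauses.append(f'tags contains "{tag}"')
    return " AND ".join(clauses)

print(build_prompt_filter("summary", ["alpha", "beta"]))
# → name contains "summary" AND tags contains "alpha" AND tags contains "beta"
```

The result can then be passed directly as the `filter_string` argument of `search_prompts`.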

## Working with prompt versions

### Viewing prompt history (all versions)

You can fetch the complete version history for a prompt by its exact name using [`get_prompt_history`](https://www.comet.com/docs/opik/python-sdk-reference/Opik.html#opik.Opik.get_prompt_history):

```python maxLines=1000
import opik

opik.configure()
client = opik.Opik()

# Get the complete version history for a prompt
prompt_history = client.get_prompt_history(name="prompt-summary")

# Iterate through all versions
for version in prompt_history:
    print(f"Commit: {version.commit}")
    print(f"Created at: {version.created_at}")
    print(f"Prompt text: {version.prompt}")
    print(f"Metadata: {version.metadata}")
    print("-" * 50)
```

This returns a list of `Prompt` objects, each representing a specific version of the prompt with the given name.

You can use this information to:

- **Audit changes** to understand how prompts evolved
- **Identify the best performing version** by linking commit IDs to experiment results
- **Document prompt changes** for compliance or review purposes
- **Retrieve specific versions** by commit ID for testing or rollback
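
Because each entry in the history carries the full prompt text, you can also compare versions locally. The sketch below uses the standard library's `difflib` to produce a line diff between two version texts (the example strings are hypothetical):

```python
import difflib

def diff_prompt_versions(old_text: str, new_text: str) -> str:
    """Return a unified line diff between two prompt version texts."""
    return "\n".join(
        difflib.unified_diff(
            old_text.splitlines(),
            new_text.splitlines(),
            fromfile="old",
            tofile="new",
            lineterm="",
        )
    )

# Example with hypothetical version texts:
print(diff_prompt_versions(
    "Summarize the following text: {{text}}",
    "Write a concise summary of the following text: {{text}}",
))
```

In practice you would pass `version.prompt` from two entries of `get_prompt_history` rather than literal strings.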

### Accessing specific prompt versions

When working with prompts, you may want to retrieve a specific version of a prompt rather than the latest version. You can do this by passing the `commit` parameter to the [`get_prompt`](https://www.comet.com/docs/opik/python-sdk-reference/Opik.html#opik.Opik.get_prompt) method:

```python maxLines=1000
import opik

opik.configure()
client = opik.Opik()

# Get a specific version of a prompt by commit ID
prompt = client.get_prompt(name="prompt-summary", commit="abc123def456")

# Use the prompt in your application
formatted_prompt = prompt.format(text="Hello, world!")
print(formatted_prompt)
```

The `commit` parameter accepts the commit ID (also called commit hash) of the specific prompt version you want to retrieve. You can find commit IDs in the prompt history in the Opik UI or by using the `get_prompt_history` method (see above).

This is particularly useful when you want to:
- **Pin to a specific version** in production to ensure consistent behavior
- **Test different versions** side by side in experiments
- **Roll back** to a previous version if issues are discovered
- **Compare results** across different prompt versions
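
One common pattern for pinning is to read the commit ID from configuration, so production deployments stay on a reviewed version while development follows the latest. This is a sketch; the environment variable name and fallback behavior are assumptions, not Opik conventions:

```python
import os

def load_summary_prompt(client):
    """Load the summary prompt, pinned to a commit when one is configured.

    `client` is an opik.Opik instance; the env var name is an assumption.
    """
    pinned_commit = os.environ.get("SUMMARY_PROMPT_COMMIT")  # e.g. a reviewed commit ID
    if pinned_commit:
        return client.get_prompt(name="prompt-summary", commit=pinned_commit)
    # No pin configured: fall back to the latest version.
    return client.get_prompt(name="prompt-summary")
```

In production you would set `SUMMARY_PROMPT_COMMIT` in the deployment environment and call `load_summary_prompt(opik.Opik())`.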

## Using prompts in experiments

### Linking prompts to experiments

[Experiments](/evaluation/evaluate_your_llm) allow you to evaluate the performance
of your LLM application on a set of examples. When evaluating different prompts,
it can be useful to link the evaluation to a specific prompt version. This can
be achieved by passing the `prompts` parameter when creating an Experiment:

```python maxLines=1000
import opik
from opik.evaluation import evaluate
from opik.evaluation.metrics import Hallucination

opik.configure()
client = opik.Opik()

# Get a dataset
dataset = client.get_or_create_dataset("test_dataset")

# Create a prompt
prompt = opik.Prompt(name="My prompt", prompt="...")

# Create an evaluation task
def evaluation_task(dataset_item):
    return {"output": "llm_response"}

# Run the evaluation
evaluation = evaluate(
    experiment_name="My experiment",
    dataset=dataset,
    task=evaluation_task,
    prompts=[prompt],
)
```

The experiment is now linked to the prompt, allowing you to view all experiments that use a specific prompt:

<Frame>
  <img src="/img/evaluation/linked_prompt.png" />
</Frame>

### Comparing prompt versions in experiments

You can run experiments with different prompt versions to determine which performs best:

```python maxLines=1000
import opik
from opik.evaluation import evaluate

opik.configure()
client = opik.Opik()

# Get the dataset
dataset = client.get_or_create_dataset("test_dataset")

# Get different versions of the same prompt
prompt_v1 = client.get_prompt(name="prompt-summary", commit="abc123")
prompt_v2 = client.get_prompt(name="prompt-summary", commit="def456")

# Define evaluation task
def evaluation_task(dataset_item):
    return {"output": "llm_response"}

# Run experiments with different prompt versions
experiment_v1 = evaluate(
    experiment_name="My experiment - v1",
    dataset=dataset,
    task=evaluation_task,
    prompts=[prompt_v1],
)

experiment_v2 = evaluate(
    experiment_name="My experiment - v2",
    dataset=dataset,
    task=evaluation_task,
    prompts=[prompt_v2],
)

# Compare results in the Opik UI
```

This workflow allows you to systematically test and compare different prompt versions to identify the most effective one for your use case.
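
Once metric scores are available for each version, picking a winner is straightforward. The helper below is illustrative (not part of the Opik SDK) and assumes you have collected per-version scores, for example from the experiment results in the UI or API:

```python
def best_prompt_version(scores_by_commit):
    """Return the commit ID whose scores have the highest mean.

    `scores_by_commit` maps a commit ID to a list of metric scores.
    Illustrative helper, not part of the Opik SDK.
    """
    return max(
        scores_by_commit,
        key=lambda commit: sum(scores_by_commit[commit]) / len(scores_by_commit[commit]),
    )

# Hypothetical scores for the two versions evaluated above:
print(best_prompt_version({"abc123": [0.70, 0.80], "def456": [0.90, 0.85]}))
# → def456
```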