---
title: OpenRouter
description: Learn how to use OpenRouter with Agno.
---

OpenRouter is a platform that provides unified access to Large Language Models from many providers through a single, OpenAI-compatible API.

## Authentication

Set your `OPENROUTER_API_KEY` environment variable. Get your key from [here](https://openrouter.ai/settings/keys).

<CodeGroup>

```bash Mac
export OPENROUTER_API_KEY=***
```

```bash Windows
setx OPENROUTER_API_KEY ***
```

</CodeGroup>

## Example

Use `OpenRouter` with your `Agent`:

<CodeGroup>

```python agent.py
from agno.agent import Agent
from agno.models.openrouter import OpenRouter

agent = Agent(
    model=OpenRouter(id="openai/gpt-5-mini"),
    markdown=True,
)

# Print the response in the terminal
agent.print_response("Share a 2 sentence horror story.")

```

</CodeGroup>

## Params

| Parameter    | Type               | Default                        | Description                                                           |
| ------------ | ------------------ | ------------------------------ | --------------------------------------------------------------------- |
| `id`         | `str`              | `"openai/gpt-4o-mini"`         | The id of the OpenRouter model to use                                |
| `name`       | `str`              | `"OpenRouter"`                | The name of the model                                                 |
| `provider`   | `str`              | `"OpenRouter"`                | The provider of the model                                             |
| `api_key`    | `Optional[str]`    | `None`                         | The API key for OpenRouter (defaults to OPENROUTER_API_KEY env var)  |
| `base_url`   | `str`              | `"https://openrouter.ai/api/v1"` | The base URL for the OpenRouter API                                |
| `app_name`   | `Optional[str]`    | `"agno"`                       | Application name for OpenRouter request headers                       |

`OpenRouter` also supports the params of [OpenAI](/reference/models/openai).
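Because OpenRouter exposes an OpenAI-compatible API, the params above map onto a standard chat-completions request. The sketch below shows, using only the standard library, roughly what such a request looks like: `id` becomes the `model` field, `base_url` determines the endpoint, and `api_key` fills the `Authorization` header. The `X-Title` header is OpenRouter's optional attribution header; treating `app_name` as mapping to it is an assumption (check Agno's source for the exact mapping).

```python
import json
import os
import urllib.request

BASE_URL = "https://openrouter.ai/api/v1"  # the default `base_url`

# The body of a chat-completions request: `id` becomes the `model` field.
payload = {
    "model": "openai/gpt-4o-mini",
    "messages": [{"role": "user", "content": "Share a 2 sentence horror story."}],
}

headers = {
    "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
    "Content-Type": "application/json",
    # OpenRouter's optional attribution header; presumably where
    # `app_name` ends up (assumption, not confirmed from Agno's source).
    "X-Title": "agno",
}

request = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers=headers,
)

# Uncomment to actually send the request (requires a valid API key):
# with urllib.request.urlopen(request) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

In practice you would let Agno build this request for you; the sketch is only meant to show which wire-level fields the params control.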

## Prompt caching

Prompt caching happens automatically with the `OpenRouter` model when the underlying provider supports it. For providers that don't cache automatically, you can enable it by adding `cache_control` markers to your message content.
You can read more about prompt caching with OpenRouter in [their docs](https://openrouter.ai/docs/features/prompt-caching).
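For providers such as Anthropic that require explicit cache breakpoints, OpenRouter expects `cache_control` inside a structured content part rather than a plain string. A sketch of what such a request body looks like, built as a raw payload for illustration (how Agno surfaces this on its message objects is an assumption; see OpenRouter's docs for the canonical format):

```python
# A large, reusable system prompt is the typical caching target --
# caching only pays off for prompts reused across many requests.
big_context = "You are a helpful assistant. " + "Reference text... " * 100

payload = {
    "model": "anthropic/claude-3.5-sonnet",
    "messages": [
        {
            "role": "system",
            # Structured content parts, not a plain string, so that a
            # cache breakpoint can be attached to the large text block.
            "content": [
                {
                    "type": "text",
                    "text": big_context,
                    # Mark this block for caching on subsequent requests.
                    "cache_control": {"type": "ephemeral"},
                },
            ],
        },
        {"role": "user", "content": "Summarize the reference text."},
    ],
}
```

Subsequent requests that repeat the same cached block can then be served from the provider's prompt cache at reduced cost.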
