---
title: Azure AI Foundry
description: Learn how to use Azure AI Foundry models in Agno.
---

Azure AI Foundry gives you access to a variety of open-source and proprietary models hosted on Azure's infrastructure. Learn more [here](https://learn.microsoft.com/azure/ai-services/models).

Azure AI Foundry provides access to models like `Phi`, `Llama`, `Mistral`, `Cohere` and more.

## Authentication

Navigate to Azure AI Foundry on the [Azure Portal](https://portal.azure.com/) and create a service. Then set your environment variables:

<CodeGroup>

```bash Mac
export AZURE_API_KEY=***
export AZURE_ENDPOINT=***  # Of the form https://<your-host-name>.<your-azure-region>.models.ai.azure.com/models
# Optional:
# export AZURE_API_VERSION=***
```

```bash Windows
setx AZURE_API_KEY ***
setx AZURE_ENDPOINT ***  # Of the form https://<your-host-name>.<your-azure-region>.models.ai.azure.com/models
# Optional:
# setx AZURE_API_VERSION ***
```

</CodeGroup>

## Example

Use `AzureAIFoundry` with your `Agent`:

<CodeGroup>

```python agent.py
from agno.agent import Agent
from agno.models.azure import AzureAIFoundry

agent = Agent(
    model=AzureAIFoundry(id="Phi-4"),
    markdown=True
)

# Print the response on the terminal
agent.print_response("Share a 2 sentence horror story.")
```

</CodeGroup>

## Advanced Examples

View more examples [here](/examples/models/azure/ai_foundry/basic).

## Parameters

| Parameter                | Type                       | Default                        | Description                                                                                               |
| ------------------------ | -------------------------- | ------------------------------ | --------------------------------------------------------------------------------------------------------- |
| `id`                     | `str`                      | `"gpt-4o"`                     | The id of the model to use                                                                               |
| `name`                   | `str`                      | `"AzureAIFoundry"`             | The name of the model                                                                                     |
| `provider`               | `str`                      | `"Azure"`                      | The provider of the model                                                                                 |
| `temperature`            | `Optional[float]`          | `None`                         | Controls randomness in the model's output (0.0 to 2.0)                                                  |
| `max_tokens`             | `Optional[int]`            | `None`                         | Maximum number of tokens to generate in the response                                                     |
| `frequency_penalty`      | `Optional[float]`          | `None`                         | Penalizes new tokens based on their frequency in the text so far (-2.0 to 2.0)                         |
| `presence_penalty`       | `Optional[float]`          | `None`                         | Penalizes new tokens based on whether they appear in the text so far (-2.0 to 2.0)                     |
| `top_p`                  | `Optional[float]`          | `None`                         | Controls diversity via nucleus sampling (0.0 to 1.0)                                                    |
| `stop`                   | `Optional[Union[str, List[str]]]` | `None`                  | Up to 4 sequences where the API will stop generating further tokens                                     |
| `seed`                   | `Optional[int]`            | `None`                         | Random seed for deterministic sampling                                                                   |
| `model_extras`           | `Optional[Dict[str, Any]]` | `None`                         | Additional model-specific parameters                                                                     |
| `strict_output`          | `bool`                     | `True`                         | Controls schema adherence for structured outputs                                                         |
| `request_params`         | `Optional[Dict[str, Any]]` | `None`                         | Additional parameters to include in the request                                                          |
| `api_key`                | `Optional[str]`            | `None`                         | The API key for Azure AI Foundry (defaults to AZURE_API_KEY env var)                                   |
| `api_version`            | `Optional[str]`            | `None`                         | The API version to use (defaults to AZURE_API_VERSION env var)                                          |
| `azure_endpoint`         | `Optional[str]`            | `None`                         | The Azure endpoint URL (defaults to AZURE_ENDPOINT env var)                                             |
| `timeout`                | `Optional[float]`          | `None`                         | Request timeout in seconds                                                                               |
| `max_retries`            | `Optional[int]`            | `None`                         | Maximum number of retries for failed requests                                                           |
| `http_client`            | `Optional[httpx.Client]`   | `None`                         | HTTP client instance for making requests                                                                |
| `client_params`          | `Optional[Dict[str, Any]]` | `None`                         | Additional parameters for client configuration                                                           |

`AzureAIFoundry` is a subclass of the [Model](/reference/models/model) class and has access to the same params.

## Supported Models

Azure AI Foundry provides access to a wide variety of models including:

- **Microsoft Models**: `Phi-4`, `Phi-3.5-mini-instruct`, `Phi-3.5-vision-instruct`
- **Meta Models**: `Meta-Llama-3.1-405B-Instruct`, `Meta-Llama-3.1-70B-Instruct`, `Meta-Llama-3.1-8B-Instruct`
- **Mistral Models**: `Mistral-large`, `Mistral-small`, `Mistral-Nemo`
- **Cohere Models**: `Cohere-command-r-plus`, `Cohere-command-r`

For the complete list of available models, visit the [Azure AI Foundry documentation](https://learn.microsoft.com/azure/ai-services/models).
