---
title: Getting Started with Azure OpenAI Service & Azure AI Foundry
sidebarTitle: Azure
description: "Learn how to use TensorZero with Azure OpenAI Service and Azure AI Foundry LLMs: open-source gateway, observability, optimization, evaluations, and experimentation."
---

TensorZero's `azure` provider supports both **Azure OpenAI Service** and **Azure AI Foundry**. Both use the same OpenAI-compatible API, so the configuration is nearly identical; only the endpoint URL differs.

This guide shows how to set up a minimal deployment to use the TensorZero Gateway with Azure OpenAI Service and Azure AI Foundry.

## Azure OpenAI Service

### Setup

For this minimal setup, you'll need just two files in your project directory:

```
- config/
  - tensorzero.toml
- docker-compose.yml
```

<Tip>

You can also find the complete code for this example on [GitHub](https://github.com/tensorzero/tensorzero/tree/main/examples/guides/providers/azure).

</Tip>

For production deployments, see our [Deployment Guide](/deployment/tensorzero-gateway/).

### Configuration

Create a minimal configuration file that defines a model and a simple chat function:

```toml title="config/tensorzero.toml"
[models.gpt_4o_mini_2024_07_18]
routing = ["azure"]

[models.gpt_4o_mini_2024_07_18.providers.azure]
type = "azure"
deployment_id = "gpt4o-mini-20240718"
endpoint = "https://your-azure-openai-endpoint.openai.azure.com"

[functions.my_function_name]
type = "chat"

[functions.my_function_name.variants.my_variant_name]
type = "chat_completion"
model = "gpt_4o_mini_2024_07_18"
```

See the [list of models available on Azure OpenAI Service](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models).

If you need to configure the endpoint at runtime, set `endpoint = "env::AZURE_OPENAI_ENDPOINT"` to read the `AZURE_OPENAI_ENDPOINT` environment variable on startup, or `endpoint = "dynamic::azure_openai_endpoint"` to read the dynamic credential `azure_openai_endpoint` on each inference request.
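
For example, to read the endpoint from the environment on startup:

```toml
[models.gpt_4o_mini_2024_07_18.providers.azure]
type = "azure"
deployment_id = "gpt4o-mini-20240718"
# Read the endpoint from the AZURE_OPENAI_ENDPOINT environment variable on startup
endpoint = "env::AZURE_OPENAI_ENDPOINT"
```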

### Credentials

You must set the `AZURE_OPENAI_API_KEY` environment variable before running the gateway.

You can customize the credential location by setting `api_key_location` to `env::YOUR_ENVIRONMENT_VARIABLE` or `dynamic::ARGUMENT_NAME`.
See the [Credential Management](/operations/manage-credentials/) guide and [Configuration Reference](/gateway/configuration-reference/) for more information.
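
For example, here's a minimal sketch that reads the API key from a custom environment variable (the variable name is illustrative):

```toml
[models.gpt_4o_mini_2024_07_18.providers.azure]
type = "azure"
deployment_id = "gpt4o-mini-20240718"
endpoint = "https://your-azure-openai-endpoint.openai.azure.com"
# Illustrative: read the API key from a custom environment variable
api_key_location = "env::MY_AZURE_OPENAI_API_KEY"
```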

### Deployment (Docker Compose)

Create a minimal Docker Compose configuration:

```yaml title="docker-compose.yml"
# This is a simplified example for learning purposes. Do not use this in production.
# For production-ready deployments, see: https://www.tensorzero.com/docs/deployment/tensorzero-gateway

services:
  gateway:
    image: tensorzero/gateway
    volumes:
      - ./config:/app/config:ro
    command: --config-file /app/config/tensorzero.toml
    environment:
      - AZURE_OPENAI_API_KEY=${AZURE_OPENAI_API_KEY:?Environment variable AZURE_OPENAI_API_KEY must be set.}
    ports:
      - "3000:3000"
    extra_hosts:
      - "host.docker.internal:host-gateway"
```

You can start the gateway with `docker compose up`.
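
For example (the `/health` check assumes your gateway version exposes that endpoint):

```bash
# Start the gateway in the background
docker compose up -d

# Optionally verify that the gateway is running
curl http://localhost:3000/health
```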

### Inference

Make an inference request to the gateway:

```bash
curl -X POST http://localhost:3000/inference \
  -H "Content-Type: application/json" \
  -d '{
    "function_name": "my_function_name",
    "input": {
      "messages": [
        {
          "role": "user",
          "content": "What is the capital of Japan?"
        }
      ]
    }
  }'
```

### Other Features

#### Generate embeddings

The Azure OpenAI Service model provider supports generating embeddings.
You can find a [complete code example on GitHub](https://github.com/tensorzero/tensorzero/tree/main/examples/guides/embeddings/providers/azure).
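
For reference, here's a minimal sketch of an embedding model configuration, assuming the `embedding_models` setup mirrors the chat model configuration above (the deployment name and endpoint are placeholders):

```toml
[embedding_models.text_embedding_3_small]
routing = ["azure"]

[embedding_models.text_embedding_3_small.providers.azure]
type = "azure"
deployment_id = "text-embedding-3-small"  # placeholder: your embedding deployment name
endpoint = "https://your-azure-openai-endpoint.openai.azure.com"
```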

## Azure AI Foundry

Azure AI Foundry provides access to models from multiple providers (Meta Llama, Mistral, xAI Grok, Microsoft Phi, Cohere, and more). See the [list of available models](https://ai.azure.com/explore/models).

The same `azure` provider works with Azure AI Foundry; the key difference is the endpoint URL.
All other configuration (credentials, Docker Compose, inference) works the same as for Azure OpenAI Service above.
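
For example, here's a minimal sketch for a Llama model on Azure AI Foundry (the deployment name and endpoint URL are placeholders; use the endpoint shown for your own deployment):

```toml
[models.llama_3_1_8b_instruct]
routing = ["azure"]

[models.llama_3_1_8b_instruct.providers.azure]
type = "azure"
deployment_id = "llama-3-1-8b-instruct"  # placeholder: your Foundry deployment name
# Placeholder: use the endpoint URL for your Azure AI Foundry deployment
endpoint = "https://your-ai-foundry-endpoint.services.ai.azure.com"
```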

## Call the OpenAI Responses API with Azure

You can call the OpenAI Responses API with Azure by using the `openai` model provider with `api_base` set to your Azure deployment's OpenAI-compatible URL.

```toml
[models.azure-gpt-5-mini-responses]
routing = ["azure"]

[models.azure-gpt-5-mini-responses.providers.azure]
type = "openai"  # CAREFUL: not `azure`!
api_base = "https://YOUR-DEPLOYMENT-HERE.openai.azure.com/openai/v1/"  # TODO: Insert your API base URL here
api_key_location = "env::AZURE_OPENAI_API_KEY"
model_name = "gpt-5-mini"
api_type = "responses"
```

<Warning>

The `azure` model provider does not support the Responses API.
You must use the `openai` provider with a custom `api_base` instead.

</Warning>
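
Once configured, you can call the model directly with the gateway's `model_name` field, which lets you skip defining a function (a sketch, assuming the model name above):

```bash
curl -X POST http://localhost:3000/inference \
  -H "Content-Type: application/json" \
  -d '{
    "model_name": "azure-gpt-5-mini-responses",
    "input": {
      "messages": [
        {
          "role": "user",
          "content": "What is the capital of Japan?"
        }
      ]
    }
  }'
```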
