---
title: google-ai
---


The `google-ai` provider supports the `https://generativelanguage.googleapis.com/v1beta/models/{model_id}/generateContent` and `https://generativelanguage.googleapis.com/v1beta/models/{model_id}/streamGenerateContent` endpoints.

<Tip>
The use of `v1beta` rather than `v1` aligns with the endpoint conventions established in [Google's SDKs](https://github.com/google-gemini/generative-ai-python/blob/8a29017e9120f0552ee3ad6092e8545d1aa6f803/google/generativeai/client.py#L60) and offers access to both the existing `v1` models and additional models exclusive to `v1beta`.
</Tip>

<Tip>
BAML will automatically pick `streamGenerateContent` if you call the streaming interface.
</Tip>

Example:
```baml BAML
client<llm> MyClient {
  provider google-ai
  options {
    model "gemini-2.5-flash"
  }
}
```

## BAML-specific request `options`
These provider-specific parameters (aka `options`) modify the API request sent to the provider.

For example, you can use these to modify the `headers` and `base_url` of the request.
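
The overrides above can be combined in a single client definition. A minimal sketch, assuming a hypothetical gateway URL and a hypothetical `MY_GEMINI_KEY` environment variable:

```baml BAML
client<llm> MyProxiedClient {
  provider google-ai
  options {
    model "gemini-2.5-flash"
    // Hypothetical proxy endpoint and env var, shown for illustration only
    base_url "https://my-gateway.example.com/v1beta"
    api_key env.MY_GEMINI_KEY
  }
}
```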


<ParamField
  path="api_key"
  type="string"
>
  Will be passed as the `x-goog-api-key` header. **Default: `env.GOOGLE_API_KEY`**

  `x-goog-api-key: $api_key`
</ParamField>

<ParamField path="base_url" type="string">
  The base URL for the API. **Default: `https://generativelanguage.googleapis.com/v1beta`**
</ParamField>

<ParamField
  path="model"
  type="string"
>
  The model to use. **Default: `gemini-2.5-flash`**

  BAML does not validate this field; you can pass any model name as a string.

| Model | Use Case | Context | Key Features |
|-------|----------|---------|--------------|
| **gemini-2.5-pro** | Complex tasks, coding, STEM | 1M | Adaptive thinking, multimodal |
| **gemini-2.5-flash** | Production apps, balanced performance | 1M | Best price/performance |
| **gemini-2.5-flash-lite** | High-volume, cost-sensitive | 1M | Lowest cost, fastest |

See the [Google Model Docs](https://ai.google.dev/gemini-api/docs/models/gemini) for the latest models.
</ParamField>

<Tip>
  Some parameters for Gemini models, like `temperature`, are specified in the `generationConfig` object. [See Docs](https://ai.google.dev/api/generate-content)
</Tip>
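
A minimal sketch of setting generation parameters this way. `temperature` and `maxOutputTokens` are fields from Google's `generationConfig` schema; check Google's docs for the full list:

```baml BAML
client<llm> MyTunedClient {
  provider google-ai
  options {
    model "gemini-2.5-flash"
    // Passed through to the request's generationConfig object
    generationConfig {
      temperature 0.5
      maxOutputTokens 1024
    }
  }
}
```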

<ParamField path="headers" type="object">
  Additional headers to send with the request.

Example:
```baml BAML
client<llm> MyClient {
  provider google-ai
  options {
    model "gemini-2.5-flash"
    headers {
      "X-My-Header" "my-value"
    }
  }
}
```
</ParamField>

<Markdown src="/snippets/role-selection.mdx" />

<Markdown src="/snippets/allowed-role-metadata-basic.mdx" />

<Markdown src="/snippets/supports-streaming.mdx" />

<Markdown src="/snippets/finish-reason.mdx" />

<Markdown src="/snippets/media-url-handler.mdx" />

<Note>
  Google AI uses `send_base64_unless_google_url` by default for images, which preserves Google Cloud Storage URLs (gs://) while converting other URLs to base64.
</Note>

## Provider request parameters
These are other `options` that are passed through to the provider, without modification by BAML. For example, if the request accepts a `temperature` field, you can define it in the client here so every call uses that value.

Consult the specific provider's documentation for more information.
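
As a sketch of pass-through options, the example below forwards a `safetySettings` entry to the Gemini API. The field names (`category`, `threshold`) and their values come from Google's API schema, not from BAML; verify the exact shape against Google's documentation before relying on it:

```baml BAML
client<llm> MyModeratedClient {
  provider google-ai
  options {
    model "gemini-2.5-flash"
    // Forwarded unchanged to the generateContent request body
    safetySettings {
      category HARM_CATEGORY_HATE_SPEECH
      threshold BLOCK_LOW_AND_ABOVE
    }
  }
}
```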
<ParamField
   path="contents"
   type="DO NOT USE"
>
  BAML automatically constructs this field for you from the prompt.
</ParamField>


For all other options, see the [official Google Gemini API documentation](https://ai.google.dev/api/rest/v1beta/models/generateContent).
