---
title: "GoogleAIGeminiGenerator"
id: googleaigeminigenerator
slug: "/googleaigeminigenerator"
description: "This component enables text generation using the Google Gemini models."
---

# GoogleAIGeminiGenerator

This component enables text generation using the Google Gemini models.

:::warning
Deprecation Notice

This integration uses the deprecated `google-generativeai` SDK, which will lose support after August 2025.

We recommend switching to the new [GoogleGenAIChatGenerator](googlegenaichatgenerator.mdx) integration instead.
:::

|                                        |                                                                                              |
| :------------------------------------- | :------------------------------------------------------------------------------------------ |
| **Most common position in a pipeline** | After a [`PromptBuilder`](../builders/promptbuilder.mdx)                                     |
| **Mandatory init variables**           | `api_key`: A Google AI Studio API key. Can be set with the `GOOGLE_API_KEY` env var.         |
| **Mandatory run variables**            | `parts`: A variadic list containing a mix of images, audio, video, and text to prompt Gemini |
| **Output variables**                   | `replies`: A list of strings or dictionaries with all the replies generated by the model     |
| **API reference**                      | [Google AI](/reference/integrations-google-ai)                                               |
| **GitHub link**                        | https://github.com/deepset-ai/haystack-core-integrations/tree/main/integrations/google_ai    |

`GoogleAIGeminiGenerator` supports `gemini-2.5-pro-exp-03-25`, `gemini-2.0-flash`, `gemini-1.5-pro`, and `gemini-1.5-flash` models.

For available models, see https://ai.google.dev/gemini-api/docs/models/gemini.

### Parameters Overview

`GoogleAIGeminiGenerator` authenticates with a Google AI Studio API key. You can pass the key through the `api_key` init parameter or, preferably, set it as the `GOOGLE_API_KEY` environment variable.

To get an API key, visit the [Google AI Studio](https://ai.google.dev/gemini-api/docs/api-key) website.
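For example, you can export the key in your shell before starting your application:

```shell
export GOOGLE_API_KEY="<MY_API_KEY>"
```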

### Streaming

This Generator supports [streaming](guides-to-generators/choosing-the-right-generator.mdx#streaming-support) the tokens from the LLM directly in output. To do so, pass a function to the `streaming_callback` init parameter.

## Usage

Start by installing the `google-ai-haystack` package to use `GoogleAIGeminiGenerator`:

```shell
pip install google-ai-haystack
```

### On its own

Basic usage:

```python
import os
from haystack_integrations.components.generators.google_ai import GoogleAIGeminiGenerator

os.environ["GOOGLE_API_KEY"] = "<MY_API_KEY>"

gemini = GoogleAIGeminiGenerator(model="gemini-1.5-pro")
res = gemini.run(parts=["What is the most interesting thing you know?"])
for answer in res["replies"]:
    print(answer)
```

Here is a more advanced example that uses both text and images as input:

```python
import requests
import os
from haystack.dataclasses.byte_stream import ByteStream
from haystack_integrations.components.generators.google_ai import GoogleAIGeminiGenerator

URLS = [
    "https://raw.githubusercontent.com/silvanocerza/robots/main/robot1.jpg",
    "https://raw.githubusercontent.com/silvanocerza/robots/main/robot2.jpg",
    "https://raw.githubusercontent.com/silvanocerza/robots/main/robot3.jpg",
    "https://raw.githubusercontent.com/silvanocerza/robots/main/robot4.jpg"
]
images = [
    ByteStream(data=requests.get(url).content, mime_type="image/jpeg")
    for url in URLS
]

os.environ["GOOGLE_API_KEY"] = "<MY_API_KEY>"

gemini = GoogleAIGeminiGenerator(model="gemini-1.5-pro")
result = gemini.run(parts=["What can you tell me about these robots?", *images])
for answer in result["replies"]:
    print(answer)
```

### In a pipeline

In a RAG pipeline:

```python
import os

from haystack import Document, Pipeline
from haystack.components.builders import PromptBuilder
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack_integrations.components.generators.google_ai import GoogleAIGeminiGenerator

os.environ["GOOGLE_API_KEY"] = "<MY_API_KEY>"

docstore = InMemoryDocumentStore()
docstore.write_documents([Document(content="The official language of France is French.")])

template = """
Given the following information, answer the question.

Context:
{% for document in documents %}
    {{ document.content }}
{% endfor %}

Question: What's the official language of {{ country }}?
"""
pipe = Pipeline()

pipe.add_component("retriever", InMemoryBM25Retriever(document_store=docstore))
pipe.add_component("prompt_builder", PromptBuilder(template=template))
pipe.add_component("gemini", GoogleAIGeminiGenerator(model="gemini-1.5-flash"))
pipe.connect("retriever", "prompt_builder.documents")
pipe.connect("prompt_builder", "gemini")

pipe.run({
    "retriever": {"query": "official language of France"},
    "prompt_builder": {"country": "France"},
})
```
