---
title: Observability for Google Gemini (Python) with Opik
description: Start here to integrate Opik into your Google Gemini-based genai application for end-to-end LLM observability, unit testing, and optimization.
---

[Gemini](https://aistudio.google.com/welcome) is a family of multimodal large language models developed by Google DeepMind.

## VertexAI Support

Opik also supports [Google VertexAI](https://cloud.google.com/vertex-ai?hl=en), Google's fully managed AI development platform, which provides access to Gemini models through the `google-genai` package. When using VertexAI, you can use the same `track_genai` wrapper with a `google-genai` client configured for VertexAI, so you can trace and monitor your Gemini model calls whether you go through the direct Google AI API or VertexAI's enterprise platform.

## Account Setup

[Comet](https://www.comet.com/site?from=llm&utm_source=opik&utm_medium=colab&utm_content=gemini&utm_campaign=opik) provides a hosted version of the Opik platform. [Simply create an account](https://www.comet.com/signup?from=llm&utm_source=opik&utm_medium=colab&utm_content=gemini&utm_campaign=opik) and grab your API Key.

> You can also run the Opik platform locally, see the [installation guide](https://www.comet.com/docs/opik/self-host/overview/?from=llm&utm_source=opik&utm_medium=colab&utm_content=gemini&utm_campaign=opik) for more information.

## Getting Started

### Installation

First, ensure you have both `opik` and `google-genai` packages installed:

```bash
pip install opik google-genai
```

### Configuring Opik

Configure the Opik Python SDK for your deployment type. See the [Python SDK Configuration guide](/tracing/sdk_configuration) for detailed instructions on:

- **CLI configuration**: `opik configure`
- **Code configuration**: `opik.configure()`
- **Self-hosted vs Cloud vs Enterprise** setup
- **Configuration files** and environment variables
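
As a minimal sketch, configuration can also be supplied entirely through environment variables before the SDK is imported (the variable names below follow Opik's standard configuration; the values are placeholders you should replace with your own):

```python
import os

# Placeholder values -- replace with your own credentials and workspace.
os.environ["OPIK_API_KEY"] = "your-comet-api-key"
os.environ["OPIK_WORKSPACE"] = "your-workspace-name"
os.environ["OPIK_PROJECT_NAME"] = "gemini-integration-demo"
```

This approach is convenient in notebooks and CI environments where running `opik configure` interactively is not practical.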

### Configuring Gemini

To configure Gemini, you will need your Gemini API Key. See the [following documentation page](https://ai.google.dev/gemini-api/docs/api-key) to learn how to retrieve it.

You can set it as an environment variable:

```bash
export GOOGLE_API_KEY="YOUR_API_KEY"
```

Or set it programmatically:

```python
import os
import getpass

if "GOOGLE_API_KEY" not in os.environ:
    os.environ["GOOGLE_API_KEY"] = getpass.getpass("Enter your Gemini API key: ")
```

## Logging LLM calls

To log LLM calls to Opik, wrap the Gemini client with `track_genai`. All calls made with the wrapped client will then be logged to Opik:

```python
import os

from google import genai
from opik.integrations.genai import track_genai

os.environ["OPIK_PROJECT_NAME"] = "gemini-integration-demo"

client = genai.Client()
gemini_client = track_genai(client)

prompt = """
Write a short two sentence story about Opik.
"""

response = gemini_client.models.generate_content(
    model="gemini-2.0-flash-001", contents=prompt
)
print(response.text)
```

<Frame>
  <img src="/img/cookbook/gemini_trace_cookbook.png" />
</Frame>

## Using with VertexAI

To use Opik with VertexAI, configure the `google-genai` client for VertexAI and wrap it with `track_genai`:

```python
import os

from google import genai
from opik.integrations.genai import track_genai

# Configure for VertexAI
PROJECT_ID = "your-project-id"
LOCATION = "us-central1"

client = genai.Client(vertexai=True, project=PROJECT_ID, location=LOCATION)
vertexai_client = track_genai(client)

# Set project name for organization
os.environ["OPIK_PROJECT_NAME"] = "vertexai-integration-demo"

# Use the wrapped client
response = vertexai_client.models.generate_content(
    model="gemini-2.0-flash-001",
    contents="Write a short story about AI observability."
)
print(response.text)
```

<Frame>
  <img src="/img/cookbook/vertexai_trace_cookbook.png" />
</Frame>

## Advanced Usage

### Using with the `@track` decorator

If you have multiple steps in your LLM pipeline, you can use the `@track` decorator to log a trace for each step. If Gemini is called within one of these steps, the LLM call will be associated with the corresponding step:

```python
from opik import track

@track
def generate_story(prompt):
    response = gemini_client.models.generate_content(
        model="gemini-2.0-flash-001", contents=prompt
    )
    return response.text

@track
def generate_topic():
    prompt = "Generate a topic for a story about Opik."
    response = gemini_client.models.generate_content(
        model="gemini-2.0-flash-001", contents=prompt
    )
    return response.text

@track
def generate_opik_story():
    topic = generate_topic()
    story = generate_story(topic)
    return story

# Execute the multi-step pipeline
generate_opik_story()
```

The trace can now be viewed in the UI with hierarchical spans showing the relationship between different steps:

<Frame>
  <img src="/img/cookbook/gemini_trace_decorator_cookbook.png" />
</Frame>

## Cost Tracking

The `track_genai` wrapper automatically tracks token usage and cost for all supported Google AI models.

Cost information is automatically captured and displayed in the Opik UI, including:

- Token usage details
- Cost per request based on Google AI pricing
- Total trace cost
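
To make the cost calculation concrete, here is a rough sketch of how a per-request cost is derived from token counts. The prices below are illustrative placeholders, not official Google AI rates; Opik performs this lookup automatically using up-to-date pricing for supported models:

```python
# Illustrative per-million-token prices (placeholders, NOT official rates).
PRICE_PER_1M_INPUT = 0.10   # USD per 1M input (prompt) tokens
PRICE_PER_1M_OUTPUT = 0.40  # USD per 1M output (completion) tokens


def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the USD cost of one request from its token counts."""
    return (
        prompt_tokens * PRICE_PER_1M_INPUT / 1_000_000
        + completion_tokens * PRICE_PER_1M_OUTPUT / 1_000_000
    )


# 1,000 prompt tokens and 500 completion tokens at the rates above.
print(estimate_cost(1_000, 500))
```

The total trace cost shown in the UI is the sum of these per-request costs across all LLM spans in the trace.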

<Tip>
  View the complete list of supported models and providers on the [Supported Models](/tracing/supported_models) page.
</Tip>
