{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "HJ20wT_BerPk"
      },
      "source": [
        "##### Copyright 2025 Google LLC."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form",
        "id": "j2MB-H8HewXV"
      },
      "outputs": [],
      "source": [
        "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "# https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-F_6aP8CezSv"
      },
      "source": [
        "# Processing datasets with the Batch API\n",
        "\n",
        "<a target=\"_blank\" href=\"https://colab.research.google.com/github/google-gemini/cookbook/blob/main/examples/Datasets.ipynb\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" height=30/></a>\n",
        "\n",
        "This notebook shows you how to process a downloaded JSONL dataset using the Batch API. To use it, you will need to have logging enabled and to have collected some logs into a dataset. For details on this process, check out the [Logging and Datasets docs](https://ai.google.dev/gemini-api/docs/logs-datasets)."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8TDPaDa8fsGy"
      },
      "source": [
        "## Get set up\n",
        "\n",
        "Install the SDK and get authenticated."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "BrnjZTlf7WAh"
      },
      "outputs": [],
      "source": [
        "%pip install -q \"google-genai>=1.34.0\""
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2j33fdxGfzI1"
      },
      "source": [
        "To run the following cell, your API key must be stored in a Colab Secret named `GEMINI_API_KEY`. If you don't already have an API key, or you're not sure how to create a Colab Secret, see [Authentication ![image](https://storage.googleapis.com/generativeai-downloads/images/colab_icon16.png)](../quickstarts/Authentication.ipynb) for an example."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "qQk-akvB9lP4"
      },
      "outputs": [],
      "source": [
        "from google.colab import userdata\n",
        "\n",
        "GEMINI_API_KEY = userdata.get('GEMINI_API_KEY')"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "WIHmH-5l9-av"
      },
      "outputs": [],
      "source": [
        "from google import genai\n",
        "from google.genai import types\n",
        "\n",
        "client = genai.Client(api_key=GEMINI_API_KEY)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "HLCnp--GAPCx"
      },
      "source": [
        "## Add your dataset\n",
        "\n",
        "To download your dataset, visit the [AI Studio logs dashboard](https://aistudio.google.com/logs), select your project and dataset and choose `Export dataset` in `JSONL` format.\n",
        "\n",
        "In the next step, you will upload the JSONL file to this notebook for processing."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "-E6jCb-pAO0u"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Saved dataset.jsonl to /content/dataset.jsonl\n"
          ]
        }
      ],
      "source": [
        "import pathlib\n",
        "\n",
        "dataset_path = pathlib.Path(\"dataset.jsonl\")\n",
        "\n",
        "upload_json = True\n",
        "if upload_json:\n",
        "  from google.colab import files\n",
        "  files.upload_file(filename=str(dataset_path))\n",
        "else:\n",
        "  # Alternatively, you can try the system out with these sample logs.\n",
        "  !wget https://storage.googleapis.com/generativeai-downloads/data/spam_dataset.jsonl -O {dataset_path}"
      ]
    },
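    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Before moving on, it can help to peek at a record. Each line of the export is a standalone JSON object; the steps below rely on its `request` and `response` fields (other fields in your export may vary)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import json\n",
        "\n",
        "# Read just the first record to see its top-level structure.\n",
        "with dataset_path.open() as f:\n",
        "  first_record = json.loads(next(f))\n",
        "\n",
        "print(sorted(first_record.keys()))"
      ]
    },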
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "61yGeCtGCIhr"
      },
      "source": [
        "## Evaluate against different models\n",
        "\n",
        "Here you will analyze this dataset against two different models. For this exercise, the existing outputs and model names are cleared first, since the Batch API requires a single, uniform model for each batch, while datasets can be assembled from logs across multiple models."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "mPM-QefREQjx"
      },
      "outputs": [],
      "source": [
        "import json\n",
        "from typing import Iterator\n",
        "\n",
        "\n",
        "def with_no_model_no_response(ds_path: pathlib.Path) -> Iterator[str]:\n",
        "  for row in ds_path.open():\n",
        "    data = json.loads(row)\n",
        "    # Clear the model field. Batches require homogeneous models.\n",
        "    data['request'].pop('model', None)\n",
        "    # This example will compare two batches, so existing responses can be discarded.\n",
        "    data.pop('response', None)\n",
        "    yield json.dumps(data)\n",
        "\n",
        "\n",
        "clean_dataset = pathlib.Path('clean_dataset.jsonl')\n",
        "\n",
        "with clean_dataset.open(\"w\") as f:\n",
        "  for line in with_no_model_no_response(dataset_path):\n",
        "    print(line, file=f)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZOQDwkA_V3cy"
      },
      "source": [
        "Now upload the scrubbed dataset through the Files API so it can be used for inference."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "eR4Fe8129816"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Uploaded file: files/ej2yk0lw3298\n"
          ]
        }
      ],
      "source": [
        "clean_dataset_ref = client.files.upload(\n",
        "    file=clean_dataset,\n",
        "    config=types.UploadFileConfig(\n",
        "        display_name='eval dataset',\n",
        "        mime_type='application/json'\n",
        "    )\n",
        ")\n",
        "\n",
        "print(f\"Uploaded file: {clean_dataset_ref.name}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "A_GxyM5sWWn1"
      },
      "source": [
        "Now define and start the batches. This example will run a batch against Gemini 2.5 Flash and Pro."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "svDuOw55-RCC"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Created batch job from file: batches/6u50zzu2d9xf639ic6pmdmb7s8opo9mtskie\n",
            "Created batch job from file: batches/1hjav84vzv50ld9oeev416my3bvc44aesr8c\n"
          ]
        }
      ],
      "source": [
        "models_to_eval = ['gemini-2.5-flash', 'gemini-2.5-pro']\n",
        "\n",
        "batches = []\n",
        "for model in models_to_eval:\n",
        "  batch_ref = client.batches.create(\n",
        "      model=model,\n",
        "      # Reuse the same dataset across batches.\n",
        "      src=clean_dataset_ref.name,\n",
        "      config=types.CreateBatchJobConfig(\n",
        "          display_name=f'Resubmit of dataset with {model}'\n",
        "      )\n",
        "  )\n",
        "\n",
        "  print(f\"Created batch job from file: {batch_ref.name}\")\n",
        "  batches.append(batch_ref)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "mbzdlum6XMfV"
      },
      "source": [
        "Wait for the batches to finish. Batch jobs can take up to 24 hours to complete."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "l_KbRKGUXRY6"
      },
      "outputs": [],
      "source": [
        "import time\n",
        "\n",
        "remaining = {b.name for b in batches}\n",
        "while remaining:\n",
        "  # sorted() copies the set, so entries can be removed while iterating.\n",
        "  for batch_name in sorted(remaining):\n",
        "    # Poll for the batch status.\n",
        "    batch_job = client.batches.get(name=batch_name)\n",
        "\n",
        "    # Success?\n",
        "    if batch_job.state.name == 'JOB_STATE_SUCCEEDED':\n",
        "      print(f'COMPLETE: {batch_name}')\n",
        "      remaining.remove(batch_name)\n",
        "\n",
        "    # Failure?\n",
        "    elif batch_job.state.name in {'JOB_STATE_FAILED', 'JOB_STATE_CANCELLED'}:\n",
        "      print(f'ERROR: {batch_name}')\n",
        "      remaining.remove(batch_name)\n",
        "\n",
        "  # Otherwise still pending; wait before polling again.\n",
        "  if remaining:\n",
        "    time.sleep(30)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "OBP1KZFRZuC4"
      },
      "source": [
        "When the results are complete, download the resulting data."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "k7mD2-iIZxnI"
      },
      "outputs": [],
      "source": [
        "result_files = []\n",
        "for i, batch in enumerate(batches, start=1):\n",
        "  batch_job = client.batches.get(name=batch.name)\n",
        "\n",
        "  result_file_name = batch_job.dest.file_name\n",
        "\n",
        "  file_content_bytes = client.files.download(file=result_file_name)\n",
        "  file_content = file_content_bytes.decode('utf-8')\n",
        "\n",
        "  result_file = pathlib.Path(f'results_{i}.jsonl')\n",
        "  result_file.write_text(file_content)\n",
        "\n",
        "  result_files.append(result_file)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "D2YZcwQUZife"
      },
      "source": [
        "To compare the results you will need an evaluation function. In practice you might evaluate in any number of ways, for example by comparing against reference outputs, or by using an LLM as a judge. This example treats a response as correct if the model's final output part is a function call.\n",
        "\n",
        "Update `is_correct` to match your evaluation criteria, returning `True` when the result is good, and `False` otherwise."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "WR4iC1T6bRk0"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "gemini-2.5-flash: 53.33%\n",
            "gemini-2.5-pro: 93.33%\n"
          ]
        }
      ],
      "source": [
        "def is_correct(row) -> bool:\n",
        "  try:\n",
        "    return 'functionCall' in row['response']['candidates'][0]['content']['parts'][-1]\n",
        "  except (KeyError, IndexError, TypeError):\n",
        "    return False\n",
        "\n",
        "\n",
        "for i, file in enumerate(result_files):\n",
        "  with file.open() as f:\n",
        "    file_score = 0\n",
        "    total_count = 0\n",
        "    for row in f:\n",
        "      data = json.loads(row)\n",
        "\n",
        "      file_score += int(is_correct(data))\n",
        "      total_count += 1\n",
        "\n",
        "    model = models_to_eval[i]\n",
        "    if total_count:\n",
        "      print(f'{model}: {file_score / total_count:.2%}')\n",
        "    else:\n",
        "      print(f'No results found for {model}')"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "m6xWkuXmgK2l"
      },
      "source": [
        "## Next steps\n",
        "\n",
        "For more on Logging and Datasets, check out:\n",
        "* The [Logging and Datasets docs](https://ai.google.dev/gemini-api/docs/logs-datasets)\n",
        "* The [Batch API cookbook](https://github.com/google-gemini/cookbook/blob/main/quickstarts/Batch_mode.ipynb)"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "name": "Datasets.ipynb",
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
