{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "IxEDQsSwYLyX"
      },
      "outputs": [],
      "source": [
        "# Copyright 2025 Google LLC\n",
        "#\n",
        "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "#     https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "EN9HLxPkbbKh"
      },
      "source": [
        "# Gemini Enterprise answer eval using BLEU, ROUGE, BERTScore, and Semantic Similarity"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "eec94beefdbb"
      },
      "source": [
        "<table align=\"left\">\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/search/gemini-enterprise/gemini_enterprise_eval.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fsearch%2Fgemini-enterprise%2Fgemini_enterprise_eval.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/search/gemini-enterprise/gemini_enterprise_eval.ipynb\">\n",
        "      <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/vertexai/v1/32px.svg\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/search/gemini-enterprise/gemini_enterprise_eval.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://raw.githubusercontent.com/primer/octicons/refs/heads/main/icons/mark-github-24.svg\" alt=\"GitHub logo\"><br> View on GitHub\n",
        "    </a>\n",
        "  </td>\n",
        "</table>\n",
        "\n",
        "<div style=\"clear: both;\"></div>\n",
        "\n",
        "<b>Share to:</b>\n",
        "\n",
        "<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/search/gemini-enterprise/gemini_enterprise_eval.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/search/gemini-enterprise/gemini_enterprise_eval.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/search/gemini-enterprise/gemini_enterprise_eval.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/search/gemini-enterprise/gemini_enterprise_eval.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/search/gemini-enterprise/gemini_enterprise_eval.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n",
        "</a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "31447366377f"
      },
      "source": [
        "| Authors |\n",
        "| --- |\n",
        "| [Nikhil Kulkarni](https://github.com/nikhilkul) |\n",
        "| [Koushik Ghosh](https://github.com/Koushik25feb) |\n",
        "| [Koyel Guha](https://github.com/koyelguha) |"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8BnpNxbCMHy7"
      },
      "source": [
        "## Overview: Gemini Enterprise Answer Evaluation\n",
        "\n",
        "This notebook provides a comprehensive framework for evaluating the quality of answers generated by a Gemini Enterprise application. It compares the Gemini Enterprise output against a \"golden dataset\" of expected answers using several widely recognized natural language processing (NLP) metrics.\n",
        "\n",
        "**Target Audience:** Developers and researchers working with Gemini Enterprise applications who need to quantitatively assess the performance of their answer generation models.\n",
        "\n",
        "**Key Features:**\n",
        "\n",
        "*   **Metric-Based Evaluation:** Utilizes established NLP metrics such as BLEU, ROUGE, BERTScore, and Semantic Similarity to provide a robust evaluation.\n",
        "*   **Google Sheets Integration:** Seamlessly loads the golden dataset from a specified Google Sheet and can output the evaluation results back to a sheet.\n",
        "*   **BigQuery Integration:** Provides an option to save the evaluation results to a BigQuery table for further analysis and historical tracking.\n",
        "*   **Gemini Enterprise API Interaction:** Includes helper functions to interact with Gemini Enterprise Search and Assist APIs to fetch answers for evaluation.\n",
        "*   **Qualitative Rating:** Maps the numerical scores from the metrics to a qualitative rating (e.g., Excellent, Good) for easier interpretation.\n",
        "*   **Configurable:** Allows users to configure project details, engine ID, application type, and input/output sheet URLs.\n",
        "\n",
        "**Evaluation Framework:**\n",
        "\n",
        "The notebook employs a multi-faceted evaluation approach using the following NLP metrics:\n",
        "\n",
        "*   **BLEU (Bilingual Evaluation Understudy):** Measures the n-gram overlap between the generated answer and the expected answer. It is a precision-focused metric, indicating how much of the generated text is present in the reference.\n",
        "*   **ROUGE (Recall-Oriented Understudy for Gisting Evaluation):** A set of metrics that measure the overlap of n-grams, word sequences, and word pairs between the generated answer and the expected answer. It is a recall-focused metric, indicating how much of the reference text is covered by the generated text. The notebook specifically uses ROUGE-L, which measures the longest common subsequence.\n",
        "*   **BERTScore:** Leverages pre-trained BERT embeddings to compute a similarity score between the generated answer and the expected answer. It considers semantic similarity beyond simple word overlap, making it more robust to paraphrasing.\n",
        "*   **Semantic Similarity:** Calculates the semantic similarity between the generated answer and the expected answer using a Sentence Transformer model (`all-MiniLM-L6-v2`). This provides a measure of how similar the meaning of the two texts is, regardless of the exact wording.\n",
        "\n",
        "For each question in the golden dataset, the notebook calculates these four scores. A qualitative rating (Excellent, Good, Moderate, Low, Poor) is then assigned based on the average of the BLEU, ROUGE, and BERT scores, and also individually for the Semantic Similarity score. This provides both numerical and easily interpretable qualitative feedback on the performance of the Gemini Enterprise application.\n",
        "\n",
        "**Input and Output Options:**\n",
        "\n",
        "*   **Input (Golden Dataset):** The golden dataset, containing the test queries and their corresponding expected answers, is loaded from a Google Sheet. You need to provide the Google Drive URL of your sheet and the name of the worksheet containing the data. The notebook expects at least two columns: one for the query/question and one for the expected answer. The column names MUST be `search_query` and `expected_answers`.\n",
        "\n",
        "Sample:\n",
        "\n",
        "| `search_query` | `expected_answers` |\n",
        "|----------------|--------------------|\n",
        "|                |                    |\n",
        "|                |                    |\n",
        "\n",
        "*   **Output (Evaluation Results):** The evaluation results, including the calculated scores (BLEU, ROUGE, BERTScore, Semantic Similarity) and their corresponding qualitative ratings for each question, can be saved in the following formats:\n",
        "    *   **CSV File:** The results are saved to a local CSV file within the Colab environment.\n",
        "    *   **Google Sheet:** The results can be written to a specified worksheet within your Google Sheet. The notebook handles the case where the worksheet already exists.\n",
        "    *   **BigQuery Table:** The results can be appended to a BigQuery table, including a timestamp for each run, allowing for historical tracking and further analysis using BigQuery's capabilities. You need to provide the dataset ID and table name.\n",
        "\n",
        "**How to Use:**\n",
        "\n",
        "1.  **Setup:** Provide your Google Cloud project details, Gemini Enterprise engine ID, and the URLs for your golden dataset Google Sheet, as well as the desired BigQuery dataset and table names if using that option.\n",
        "2.  **Authentication:** Authenticate your Google Cloud account and enable the necessary APIs (Discovery Engine, Sheets, Drive, BigQuery).\n",
        "3.  **Data Loading:** The notebook will fetch your golden dataset from the specified Google Sheet.\n",
        "4.  **Answer Retrieval:** The notebook will query your Gemini Enterprise application with the questions from the golden dataset to get the generated answers.\n",
        "5.  **Evaluation:** The notebook will compute the specified NLP metrics by comparing the generated answers to the expected answers in your golden dataset.\n",
        "6.  **Results:** The results, including the scores and qualitative ratings, will be saved to the configured output options (CSV, Google Sheet, and/or BigQuery).\n",
        "\n",
        "**Prerequisites:**\n",
        "\n",
        "*   Access to a Google Cloud project.\n",
        "*   A Gemini Enterprise application (Search or Assist).\n",
        "*   A Google Sheet containing your golden dataset with at least two columns: one for the query/question and one for the expected answer.\n",
        "*   Necessary APIs enabled in your Google Cloud project (Discovery Engine, Sheets, Drive, BigQuery).\n",
        "\n",
        "This notebook can be easily adapted to evaluate other answer generation systems by modifying the helper functions to interact with the relevant APIs."
      ]
    },
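    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# A minimal sketch of the qualitative-rating idea described above: a 0-1\n",
        "# metric score (or an average of several scores) is bucketed into a label.\n",
        "# The thresholds below are illustrative assumptions, not the exact values\n",
        "# this notebook uses later.\n",
        "\n",
        "\n",
        "def score_to_rating(score: float) -> str:\n",
        "    \"\"\"Map a 0-1 score to a qualitative rating (illustrative thresholds).\"\"\"\n",
        "    if score >= 0.8:\n",
        "        return \"Excellent\"\n",
        "    if score >= 0.6:\n",
        "        return \"Good\"\n",
        "    if score >= 0.4:\n",
        "        return \"Moderate\"\n",
        "    if score >= 0.2:\n",
        "        return \"Low\"\n",
        "    return \"Poor\"\n",
        "\n",
        "\n",
        "# Example: average the BLEU, ROUGE-L and BERTScore values for one question.\n",
        "avg_score = (0.35 + 0.55 + 0.80) / 3\n",
        "print(score_to_rating(avg_score))  # Moderate\n",
        "print(score_to_rating(0.95))  # Excellent"
      ]
    },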
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "65-2go3u2WKL"
      },
      "source": [
        "## Step 1: Initialization"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "uQrsAEV90W3R"
      },
      "outputs": [],
      "source": [
        "# @title Step 1.1 Install necessary libraries\n",
        "\n",
        "%pip install --upgrade --quiet pandas==2.2.2 openpyxl nltk rouge-score bert-score sentence-transformers transformers colabtools google-cloud-discoveryengine"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "xtiEtYsi2TeX"
      },
      "outputs": [],
      "source": [
        "# @title Step 1.2 Import necessary libraries\n",
        "import datetime\n",
        "import json\n",
        "import logging\n",
        "import os\n",
        "import time\n",
        "\n",
        "import google.auth.transport.requests\n",
        "import pandas as pd\n",
        "import requests\n",
        "import vertexai\n",
        "from google.auth import default\n",
        "from google.colab import auth\n",
        "\n",
        "creds, _ = google.auth.default()\n",
        "auth_req = google.auth.transport.requests.Request()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "3af2185658f8"
      },
      "outputs": [],
      "source": [
        "# @title Add logger\n",
        "\n",
        "logger = logging.getLogger(\"gemini_enterprise_eval\")\n",
        "logger.setLevel(logging.DEBUG)\n",
        "log_file_path = \"./gemini_enterprise_eval_notebook_logs.log\"\n",
        "\n",
        "# Ensuring that handlers are not added multiple times if the cell is run multiple times\n",
        "# This prevents duplicate log entries in the file and console\n",
        "if logger.hasHandlers():\n",
        "    logger.handlers.clear()  # Clear existing handlers\n",
        "\n",
        "\n",
        "file_handler = logging.FileHandler(log_file_path, mode=\"a\")\n",
        "file_handler.setLevel(logging.DEBUG)\n",
        "formatter = logging.Formatter(\"%(asctime)s - %(levelname)s - %(name)s - %(message)s\")\n",
        "file_handler.setFormatter(formatter)\n",
        "logger.addHandler(file_handler)\n",
        "\n",
        "# Optionally, adding a StreamHandler to also print logs to the Colab console output\n",
        "console_handler = logging.StreamHandler()\n",
        "console_handler.setLevel(logging.INFO)\n",
        "console_handler.setFormatter(formatter)\n",
        "logger.addHandler(console_handler)\n",
        "\n",
        "\n",
        "logger.info(f\"Logging initialized. Logs will be saved to: {log_file_path}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ozdeRSNN2bX6"
      },
      "source": [
        "## Step 2: Setup"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "k03CjFR72aup"
      },
      "outputs": [],
      "source": [
        "# @title Step 2.1 Set the project-related configuration\n",
        "\n",
        "# Use project number and not Project ID\n",
        "project_num = \"00000000\"  # @param{ type : 'string' }\n",
        "\n",
        "# Engine ID is the ID of the Gemini Enterprise APP\n",
        "engine_id = \"gemini_enterprise_app_engine_id\"  # @param{ type : 'string' }\n",
        "\n",
        "# Assist or Search. Note: \"assist\" means \"Search + Assist\",\n",
        "# whereas \"search\" means \"Search + Answer\".\n",
        "# Valid options are [\"search\", \"assist\"].\n",
        "app_type = \"search\"  # @param [\"search\", \"assist\"]\n",
        "\n",
        "# Location: \"global\", \"us\", or \"eu\"\n",
        "location = \"global\"  # @param [\"us\", \"eu\", \"global\"]\n",
        "\n",
        "# Use Project ID\n",
        "auth_project_id = \"[your-project-id]\"  # @param{ type : 'string' }\n",
        "\n",
        "# Input Queries\n",
        "# Note: Every user needs their own copy of this sheet. Make a copy of the golden dataset Google Sheet and paste its link here.\n",
        "eval_data_google_drive_url = \"[spreadsheet-url]\"  # @param{ type : 'string' }\n",
        "\n",
        "# Worksheet containing the input queries\n",
        "worksheet_name = \"input_queries\"  # @param{ type : 'string' }\n",
        "\n",
        "# (Optional) Output file name used for debugging. This file will be saved in the Colab environment.\n",
        "output_file_name = \"test_output\"  # @param{ type : 'string' }\n",
        "\n",
        "# Eval data worksheet\n",
        "sheet_name_suffix = datetime.datetime.fromtimestamp(time.time()).strftime(\n",
        "    \"%Y-%m-%d %H:%M:%S\"\n",
        ")\n",
        "eval_data_worksheet_name = \"sample_outputs\"  # @param{ type : 'string' }\n",
        "\n",
        "# Eval data with metrics worksheet\n",
        "eval_data_results_worksheet_name = \"eval_data_results\"  # @param{ type : 'string' }\n",
        "\n",
        "# Number of top search results to retrieve (top 'K')\n",
        "K = 10  # @param{ type : 'integer' }"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "U7ao9DKG2v9F"
      },
      "outputs": [],
      "source": [
        "# @title Step 2.2 Enable Google Sheets Integration\n",
        "\n",
        "# Enable the Google Sheets and Drive APIs by visiting the links below (only needed if you load the golden dataset from a spreadsheet).\n",
        "\n",
        "print(\n",
        "    f\"https://console.developers.google.com/apis/api/sheets.googleapis.com/overview?project={auth_project_id}\"\n",
        ")\n",
        "print(\n",
        "    f\"https://console.developers.google.com/apis/api/drive.googleapis.com/overview?project={auth_project_id}\"\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "nNF1D9zwqtI5"
      },
      "source": [
        "## Step 3: Authenticate your Google Cloud Account and enable APIs\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Am9pMNg5_rJq"
      },
      "outputs": [],
      "source": [
        "# @title Step 3.1 Authenticate your Google Cloud Account and enable APIs\n",
        "\n",
        "# Authenticate gcloud.\n",
        "auth.authenticate_user(project_id=auth_project_id)\n",
        "\n",
        "region = \"us-central1\"  # region = global is not supported yet\n",
        "\n",
        "# Configure gcloud.\n",
        "!gcloud config set project {auth_project_id}\n",
        "!gcloud config get-value project\n",
        "\n",
        "# Initialize Vertex AI\n",
        "vertexai.init(project=auth_project_id, location=region)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "9pSfnRBC-ZWm"
      },
      "outputs": [],
      "source": [
        "# @title Step 3.2 Refresh authentication credentials\n",
        "\n",
        "creds.refresh(auth_req)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pGEKqBzb3TJc"
      },
      "source": [
        "## Step 4: Helper Functions\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "d2d2ae9ebc19"
      },
      "outputs": [],
      "source": [
        "# @title Step 4.0: Build the Discovery Engine API endpoint based on the location\n",
        "\n",
        "if location == \"us\":\n",
        "    base_discovery_engine_domain = \"us-discoveryengine.googleapis.com\"\n",
        "elif location == \"eu\":\n",
        "    base_discovery_engine_domain = \"eu-discoveryengine.googleapis.com\"\n",
        "else:  # Default to global if the location is not explicitly 'us' or 'eu'\n",
        "    base_discovery_engine_domain = \"discoveryengine.googleapis.com\""
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "BfXosgvSRYnz"
      },
      "outputs": [],
      "source": [
        "# @title Step 4.1: Search\n",
        "\n",
        "from functools import cache\n",
        "\n",
        "serving_config_id = \"default_search\"  # @param{ type : 'string' }\n",
        "discovery_engine_url = f\"https://{base_discovery_engine_domain}/v1alpha/projects/{auth_project_id}/locations/{location}/collections/default_collection/engines/{engine_id}/servingConfigs/{serving_config_id}\"\n",
        "\n",
        "# Create serving config with Control\n",
        "create_serving_config_response = requests.post(\n",
        "    discovery_engine_url,\n",
        "    headers={\n",
        "        \"Content-Type\": \"application/json\",\n",
        "        \"Authorization\": \"Bearer \" + creds.token,\n",
        "        \"X-Goog-User-Project\": auth_project_id,\n",
        "    },\n",
        "    json={\n",
        "        \"displayName\": f\"{serving_config_id}\",\n",
        "        \"solutionType\": \"SOLUTION_TYPE_SEARCH\",\n",
        "    },\n",
        ")\n",
        "\n",
        "\n",
        "@cache\n",
        "def get_search_results(query, num=K):\n",
        "    search_response = requests.post(\n",
        "        f\"{discovery_engine_url}:search\",\n",
        "        headers={\n",
        "            \"Content-Type\": \"application/json\",\n",
        "            \"Authorization\": \"Bearer \" + creds.token,\n",
        "            \"X-Goog-User-Project\": auth_project_id,\n",
        "        },\n",
        "        json={\n",
        "            \"query\": query,\n",
        "            \"pageSize\": num,\n",
        "        },\n",
        "    )\n",
        "    results = search_response.json()\n",
        "    logger.info(f\"query : {query}\")\n",
        "    logger.info(f\"Response code:  {search_response.status_code}\")\n",
        "    return results"
      ]
    },
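    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# A hedged sketch of pulling document titles out of the JSON returned by\n",
        "# get_search_results above. Search responses typically carry a `results`\n",
        "# list of `document` objects; the exact fields (e.g. `derivedStructData`)\n",
        "# depend on the data store, so treat the field names here as assumptions.\n",
        "\n",
        "\n",
        "def extract_titles(search_json: dict) -> list:\n",
        "    \"\"\"Collect a best-effort title for each search result.\"\"\"\n",
        "    titles = []\n",
        "    for item in search_json.get(\"results\", []):\n",
        "        doc = item.get(\"document\", {})\n",
        "        struct = doc.get(\"derivedStructData\") or doc.get(\"structData\") or {}\n",
        "        titles.append(struct.get(\"title\") or doc.get(\"id\", \"unknown\"))\n",
        "    return titles\n",
        "\n",
        "\n",
        "# Example with a mocked response shape:\n",
        "sample = {\"results\": [{\"document\": {\"id\": \"d1\", \"derivedStructData\": {\"title\": \"Doc one\"}}}]}\n",
        "print(extract_titles(sample))  # ['Doc one']"
      ]
    },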
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "75fJyI1Li7W9"
      },
      "outputs": [],
      "source": [
        "# @title Step 4.2: Assistant\n",
        "\n",
        "\n",
        "ENDPOINT = f\"https://{base_discovery_engine_domain}\"\n",
        "ASSISTANT_NAME = f\"projects/{auth_project_id}/locations/{location}/collections/default_collection/engines/{engine_id}/assistants/default_assistant\"\n",
        "\n",
        "\n",
        "@cache\n",
        "def get_assist_results(query: str):\n",
        "    response = requests.post(\n",
        "        f\"{ENDPOINT}/v1alpha/{ASSISTANT_NAME}:assist\",\n",
        "        headers={\n",
        "            \"Content-Type\": \"application/json; charset=utf-8\",\n",
        "            \"Authorization\": f\"Bearer {creds.token}\",\n",
        "            \"X-Goog-User-Project\": f\"{auth_project_id}\",\n",
        "        },\n",
        "        data=json.dumps({\"query\": {\"text\": query}}),\n",
        "    )\n",
        "    if response.status_code != 200:\n",
        "        logger.error(f\"Assistant failed for query: {query} | {response.content}\")\n",
        "        answer = \"FAILED\"\n",
        "        assist_token = \"None\"\n",
        "    else:\n",
        "        assist_response = response.json()\n",
        "        assist_token = assist_response.get(\"assistToken\")\n",
        "        answer_data = assist_response.get(\"answer\")\n",
        "        state = answer_data.get(\"state\")\n",
        "        if state == \"SKIPPED\":\n",
        "            answer = answer_data[\"assistSkippedReasons\"][0]\n",
        "        else:\n",
        "            answer = (\n",
        "                answer_data.get(\"replies\")[0]\n",
        "                .get(\"groundedContent\")\n",
        "                .get(\"content\")\n",
        "                .get(\"text\")\n",
        "            )\n",
        "    return answer, assist_token"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "HaBPXNospMFZ"
      },
      "outputs": [],
      "source": [
        "# @title Step 4.3: Answer\n",
        "\n",
        "\n",
        "session = requests.post(\n",
        "    f\"https://{base_discovery_engine_domain}/v1alpha/projects/{auth_project_id}/locations/{location}/collections/default_collection/engines/{engine_id}/sessions\",  # auto session model\n",
        "    headers={\n",
        "        \"Content-Type\": \"application/json\",\n",
        "        \"Authorization\": \"Bearer \" + creds.token,\n",
        "    },\n",
        "    json={\n",
        "        \"userPseudoId\": \"12345\",  # customer id\n",
        "    },\n",
        ")\n",
        "\n",
        "\n",
        "@cache\n",
        "def get_answer_results(query):\n",
        "    response = requests.post(\n",
        "        f\"https://{base_discovery_engine_domain}/v1alpha/projects/{auth_project_id}/locations/{location}/collections/default_collection/engines/{engine_id}/servingConfigs/default_search:answer\",\n",
        "        headers={\n",
        "            \"Content-Type\": \"application/json\",\n",
        "            \"Authorization\": \"Bearer \" + creds.token,\n",
        "        },\n",
        "        json={\n",
        "            \"query\": {\"text\": query},\n",
        "            \"searchSpec\": {\n",
        "                \"searchParams\": {\n",
        "                    \"maxReturnResults\": K,\n",
        "                },\n",
        "            },\n",
        "            \"session\": session.json()[\"name\"],\n",
        "        },\n",
        "    )\n",
        "\n",
        "    if response.status_code != 200:\n",
        "        logger.error(f\"Answer API failed for query: {query} | {response.content}\")\n",
        "        answer_data = \"FAILED\"\n",
        "        answer_token = \"None\"\n",
        "    else:\n",
        "        answer_response = response.json()\n",
        "        answer_token = answer_response.get(\"answerQueryToken\")\n",
        "        answer_data = answer_response.get(\"answer\").get(\"answerText\")\n",
        "    return answer_data, answer_token"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "zt2WtcbFTDkw"
      },
      "source": [
        "## Step 5: Get Golden Dataset"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "c60b3b38e62f"
      },
      "outputs": [],
      "source": [
        "# Define the path for the golden dataset file\n",
        "GOLDEN_DATA_FILE_PATH = \"/content/golden_dataset.csv\"\n",
        "\n",
        "# Check if the file already exists.\n",
        "# This prevents overwriting a user's uploaded file if they run the cell multiple times.\n",
        "if not os.path.exists(GOLDEN_DATA_FILE_PATH):\n",
        "    # Create an empty DataFrame with the required columns\n",
        "    sample_df = pd.DataFrame(columns=[\"search_query\", \"expected_answers\"])\n",
        "\n",
        "    # Save the DataFrame to a CSV file without writing the index\n",
        "    sample_df.to_csv(GOLDEN_DATA_FILE_PATH, index=False)\n",
        "    logger.info(f\"✅ A sample '{GOLDEN_DATA_FILE_PATH}' has been created!\")\n",
        "    logger.info(\"It contains the columns: 'search_query' and 'expected_answers'.\")\n",
        "    logger.info(\"\\n**Instructions for your Golden Dataset:**\")\n",
        "    logger.info(\"1.  **Option A: Download, Populate, and Re-upload:**\")\n",
        "    logger.info(\"    - Click the folder icon on the left sidebar (Files).\")\n",
        "    logger.info(\n",
        "        \"    - Hover over 'golden_dataset.csv', click the three dots (`⋮`), and select 'Download'.\"\n",
        "    )\n",
        "    logger.info(\n",
        "        \"    - Open the downloaded file in a spreadsheet editor (like Excel or Google Sheets).\"\n",
        "    )\n",
        "    logger.info(\n",
        "        \"    - Populate the `search_query` and `expected_answers` columns with your data.\"\n",
        "    )\n",
        "    logger.info(\"    - Save the file (ensure it's still named `golden_dataset.csv`).\")\n",
        "    logger.info(\n",
        "        \"    - Re-upload the populated file to the `/content/` directory in Colab.\"\n",
        "    )\n",
        "    logger.info(\n",
        "        \"      (Click the folder icon -> Click the 'Upload' icon (paper with up arrow) -> Select your file).\"\n",
        "    )\n",
        "    logger.info(\"2.  **Option B: Upload Your Own CSV:**\")\n",
        "    logger.info(\n",
        "        \"    - If you already have a CSV file with `search_query` and `expected_answers` columns,\"\n",
        "    )\n",
        "    logger.info(\"      you can directly upload it to the `/content/` directory.\")\n",
        "    logger.info(\n",
        "        \"      (Click the folder icon -> Click the 'Upload' icon -> Select your file).\"\n",
        "    )\n",
        "    logger.info(\"    - Ensure your file is named `golden_dataset.csv`.\")\n",
        "\n",
        "else:\n",
        "    logger.info(\n",
        "        f\"✨ '{GOLDEN_DATA_FILE_PATH}' already exists. Skipping sample file creation.\"\n",
        "    )\n",
        "    logger.info(\"You can proceed to the next cell to load your dataset.\")\n",
        "    logger.info(\"\\n**Remember:** Your loaded file should be named 'golden_dataset.csv'\")\n",
        "    logger.info(\n",
        "        \"and located in the `content` directory with 'search_query' and 'expected_answers' columns.\"\n",
        "    )"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "NSmQbXaf5fW_"
      },
      "outputs": [],
      "source": [
        "# @title Step 5.1: Get Golden Dataset from a local CSV file (in the notebook)\n",
        "import pandas as pd\n",
        "\n",
        "# Define the path to your golden dataset file\n",
        "GOLDEN_DATA_FILE_PATH = \"/content/golden_dataset.csv\"\n",
        "\n",
        "try:\n",
        "    df = pd.read_csv(GOLDEN_DATA_FILE_PATH)\n",
        "    print(f\"Dataset loaded successfully from '{GOLDEN_DATA_FILE_PATH}'!\")\n",
        "    print(\"\\nFirst 5 rows of your dataset:\")\n",
        "    print(df.head())\n",
        "    print(f\"\\nDataset has {len(df)} rows and {len(df.columns)} columns.\")\n",
        "\n",
        "    # Optional: Basic validation of expected columns\n",
        "    required_columns = [\"search_query\", \"expected_answers\"]\n",
        "    if not all(col in df.columns for col in required_columns):\n",
        "        print(\"\\n⚠️ Warning: The loaded CSV does not contain all expected columns.\")\n",
        "        print(f\"Expected columns: {required_columns}\")\n",
        "        print(f\"Found columns: {df.columns.tolist()}\")\n",
        "\n",
        "except FileNotFoundError:\n",
        "    print(f\"❌ Error: '{GOLDEN_DATA_FILE_PATH}' not found.\")\n",
        "    print(\n",
        "        \"Please ensure you have created or uploaded the file correctly as per the instructions in the previous cell.\"\n",
        "    )\n",
        "except pd.errors.EmptyDataError:\n",
        "    print(f\"⚠️ Warning: '{GOLDEN_DATA_FILE_PATH}' is empty or only contains headers.\")\n",
        "    print(\"Please populate the file with your data.\")\n",
        "except Exception as e:\n",
        "    print(f\"An unexpected error occurred while reading the CSV: {e}\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "tqcxHBclTocn"
      },
      "outputs": [],
      "source": [
        "# @title Step 5.2: Get Golden Dataset from Google Drive/Sheet\n",
        "import gspread\n",
        "import pandas as pd\n",
        "from google.auth import default\n",
        "from google.colab import auth, drive\n",
        "\n",
        "drive.mount(\"/content/drive\")\n",
        "auth.authenticate_user()\n",
        "creds, _ = default()\n",
        "gc = gspread.authorize(creds)\n",
        "\n",
        "# Replace with your actual spreadsheet details\n",
        "spreadsheet = gc.open_by_url(eval_data_google_drive_url)\n",
        "worksheet = spreadsheet.worksheet(worksheet_name)\n",
        "\n",
        "# Read all rows once; the first row holds the column headers\n",
        "all_values = worksheet.get_all_values()\n",
        "df = pd.DataFrame(all_values[1:], columns=all_values[0])"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "z7T3p6NGTpDF"
      },
      "outputs": [],
      "source": [
        "# @title Step 5.3: Populate Gemini Enterprise answers for the Golden dataset queries\n",
        "\n",
        "df[[\"answer_result\", \"answer_token\"]] = df[\"search_query\"].apply(\n",
        "    lambda q: pd.Series(get_answer_results(q))\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "nogEpwig7-KB"
      },
      "outputs": [],
      "source": [
        "# @title Step 5.4: Visualise the Golden dataset to verify\n",
        "df"
      ]
    },
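    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# @title Step 5.5 (optional): Drop rows the evaluation cannot score\n",
        "# A minimal clean-up sketch (an assumption, not part of the original flow):\n",
        "# rows with a missing query, expected answer, or retrieved answer would\n",
        "# break the metric computations in Step 6, so we filter them out here.\n",
        "required = [\"search_query\", \"expected_answers\", \"answer_result\"]\n",
        "rows_before = len(df)\n",
        "df = df.dropna(subset=[c for c in required if c in df.columns])\n",
        "print(f\"Kept {len(df)} of {rows_before} rows with complete data.\")"
      ]
    },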
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "W25FZRfQ2EUG"
      },
      "source": [
        "## Step 6: Evaluation Functionality\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "EyGQKZ4j0WKd"
      },
      "outputs": [],
      "source": [
        "# @title Step 6.1: Evaluation imports\n",
        "\n",
        "import nltk\n",
        "import pandas as pd\n",
        "from bert_score import score as bert_score\n",
        "from nltk.translate.bleu_score import sentence_bleu\n",
        "from rouge_score import rouge_scorer\n",
        "from sentence_transformers import SentenceTransformer, util\n",
        "\n",
        "model = SentenceTransformer(\n",
        "    \"all-MiniLM-L6-v2\"  # similarity score model (open source - Hugging Face based)\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "YuzV1_pY8hlF"
      },
      "outputs": [],
      "source": [
        "# @title Step 6.2: Download the NLTK punkt tokenizer data (used to tokenize answers for BLEU)\n",
        "\n",
        "nltk.download(\"punkt\")\n",
        "nltk.download(\"punkt_tab\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "i1TjiHn40MV1"
      },
      "outputs": [],
      "source": [
        "# @title Step 6.3: Evaluation utility functions\n",
        "\n",
        "\n",
        "def get_semantic_score(expected, actual):\n",
        "    emb1 = model.encode(expected, convert_to_tensor=True)\n",
        "    emb2 = model.encode(actual, convert_to_tensor=True)\n",
        "    similarity = util.cos_sim(emb1, emb2).item()\n",
        "    return similarity  # Score between 0 and 1\n",
        "\n",
        "\n",
        "def compute_scores(expected, actual):\n",
        "    \"\"\"Compute BLEU, ROUGE, and BERTScore\"\"\"\n",
        "    # BLEU, smoothed so short answers with little n-gram overlap do not score 0\n",
        "    from nltk.translate.bleu_score import SmoothingFunction\n",
        "\n",
        "    reference = [nltk.word_tokenize(expected.lower())]\n",
        "    candidate = nltk.word_tokenize(actual.lower())\n",
        "    bleu = sentence_bleu(\n",
        "        reference, candidate, smoothing_function=SmoothingFunction().method1\n",
        "    )\n",
        "\n",
        "    # ROUGE\n",
        "    scorer = rouge_scorer.RougeScorer([\"rougeL\"], use_stemmer=True)\n",
        "    rouge = scorer.score(expected, actual)[\"rougeL\"].fmeasure\n",
        "\n",
        "    # BERT Score\n",
        "    P, R, F1 = bert_score([actual], [expected], lang=\"en\", verbose=False)\n",
        "    bert = F1[0].item()\n",
        "\n",
        "    return bleu, rouge, bert\n",
        "\n",
        "\n",
        "def score_to_rating(score):\n",
        "    \"\"\"Map metric values to a qualitative rating\"\"\"\n",
        "    if score > 0.8:\n",
        "        return 5, \"Excellent match – high semantic and lexical similarity.\"\n",
        "    if score > 0.6:\n",
        "        return 4, \"Good match – minor differences, but meaning mostly intact.\"\n",
        "    if score > 0.4:\n",
        "        return 3, \"Moderate match – some loss in meaning.\"\n",
        "    if score > 0.2:\n",
        "        return 2, \"Low match – significant differences from expected.\"\n",
        "    return 1, \"Poor match – largely incorrect or off-topic.\""
      ]
    },
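    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# @title Step 6.3a (optional): Sanity-check the rating thresholds\n",
        "# A lightweight check of score_to_rating at the threshold boundaries; the\n",
        "# scores below are illustrative values, not real metric outputs.\n",
        "for example_score in [0.85, 0.65, 0.45, 0.25, 0.05]:\n",
        "    rating, reasoning = score_to_rating(example_score)\n",
        "    print(f\"score={example_score:.2f} -> rating={rating} ({reasoning})\")"
      ]
    },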
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "n6MaJX4rBRCC"
      },
      "outputs": [],
      "source": [
        "# @title Step 6.4: Evaluate the Gemini Enterprise answers against the Golden dataset using BLEU, ROUGE, BERT, and similarity scores\n",
        "results = []\n",
        "\n",
        "for _, row in df.iterrows():\n",
        "    question = row[\"search_query\"]\n",
        "    expected = row[\"expected_answers\"]\n",
        "    actual = row[\"answer_result\"]\n",
        "\n",
        "    bleu, rouge, bert = compute_scores(expected, actual)\n",
        "\n",
        "    # Overall rating based on the average of the three reference-based metrics\n",
        "    avg = (bleu + rouge + bert) / 3\n",
        "    overall_rating, overall_reasoning = score_to_rating(avg)\n",
        "    similarity = get_semantic_score(expected, actual)\n",
        "\n",
        "    results.append(\n",
        "        {\n",
        "            \"question\": question,\n",
        "            \"expected_answer\": expected,\n",
        "            \"actual_answer\": actual,\n",
        "            \"BLEU_score\": bleu,\n",
        "            \"BLEU_rating\": score_to_rating(bleu),\n",
        "            \"ROUGE_score\": rouge,\n",
        "            \"ROUGE_rating\": score_to_rating(rouge),\n",
        "            \"BERTScore\": bert,\n",
        "            \"BERT_rating\": score_to_rating(bert),\n",
        "            \"similarity\": similarity,\n",
        "            \"similarity_rating\": score_to_rating(similarity),\n",
        "            \"overall_rating\": overall_rating,\n",
        "            \"overall_reasoning\": overall_reasoning,\n",
        "        }\n",
        "    )"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "TstyJSJ1oMzQ"
      },
      "source": [
        "## Step 7: Saving the Eval result"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "69Et8wmKKEoj"
      },
      "outputs": [],
      "source": [
        "# @title Step 7.1: Converting the result in the dataframe and adding timestamp\n",
        "import datetime\n",
        "\n",
        "output_df = pd.DataFrame(results)\n",
        "# Add a timestamp column to the DataFrame\n",
        "output_df[\"timestamp\"] = datetime.datetime.now()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Ox5dYhEYBSx2"
      },
      "outputs": [],
      "source": [
        "# @title Step 7.2: Save the output in the CSV\n",
        "\n",
        "output_csv_file_name = \"eval_output.csv\"  # @param{ type : 'string' }\n",
        "output_df.to_csv(output_csv_file_name, index=False)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "veY1b9m_BeCc"
      },
      "outputs": [],
      "source": [
        "# @title Step 7.3: Save the output in Google Sheet\n",
        "\n",
        "spreadsheet = gc.open_by_url(eval_data_google_drive_url)\n",
        "\n",
        "try:\n",
        "    worksheet_eval_data = spreadsheet.add_worksheet(\n",
        "        eval_data_worksheet_name, rows=\"100\", cols=\"20\"\n",
        "    )\n",
        "except gspread.exceptions.APIError as e:\n",
        "    logger.info(\n",
        "        f\"Worksheet '{eval_data_worksheet_name}' already exists or another API error occurred: {e}\"\n",
        "    )\n",
        "    # If the worksheet already exists, reuse it\n",
        "    worksheet_eval_data = spreadsheet.worksheet(eval_data_worksheet_name)\n",
        "\n",
        "# Convert tuple columns to strings\n",
        "for col in [\"BLEU_rating\", \"ROUGE_rating\", \"BERT_rating\", \"similarity_rating\"]:\n",
        "    if col in output_df.columns:\n",
        "        output_df[col] = output_df[col].apply(\n",
        "            lambda x: f\"{x[0]}: {x[1]}\" if isinstance(x, tuple) else x\n",
        "        )\n",
        "\n",
        "# Convert timestamp column to string\n",
        "if \"timestamp\" in output_df.columns:\n",
        "    output_df[\"timestamp\"] = output_df[\"timestamp\"].astype(str)\n",
        "\n",
        "worksheet_eval_data.clear()\n",
        "worksheet_eval_data.update(\n",
        "    [output_df.columns.values.tolist()] + output_df.values.tolist()\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "f8931369"
      },
      "outputs": [],
      "source": [
        "# @title Step 7.4: Save the output in BigQuery table\n",
        "\n",
        "# provide the BigQuery details here\n",
        "dataset_id = \"gemini_enterprise_eval_dataset\"  # @param{type: 'string'}\n",
        "table_name = \"gemini_enterprise_eval_result\"  # @param{type: 'string'}\n",
        "project_id = auth_project_id  # Use the authenticated project ID\n",
        "\n",
        "import datetime\n",
        "\n",
        "from google.cloud import bigquery\n",
        "from google.cloud.exceptions import NotFound\n",
        "\n",
        "client = bigquery.Client(project=project_id)\n",
        "\n",
        "# Add a timestamp column to the DataFrame (if not already added in a previous step)\n",
        "if \"timestamp\" not in output_df.columns:\n",
        "    output_df[\"timestamp\"] = datetime.datetime.now()\n",
        "\n",
        "# Convert tuple columns to strings before saving to BigQuery\n",
        "for col in [\"BLEU_rating\", \"ROUGE_rating\", \"BERT_rating\", \"similarity_rating\"]:\n",
        "    if col in output_df.columns:\n",
        "        output_df[col] = output_df[col].apply(\n",
        "            lambda x: f\"{x[0]}: {x[1]}\" if isinstance(x, tuple) else x\n",
        "        )\n",
        "\n",
        "# Convert timestamp column to string before saving to BigQuery\n",
        "if \"timestamp\" in output_df.columns:\n",
        "    output_df[\"timestamp\"] = output_df[\"timestamp\"].astype(str)\n",
        "\n",
        "# Construct a full table ID\n",
        "table_id = f\"{project_id}.{dataset_id}.{table_name}\"\n",
        "\n",
        "# Check if the dataset exists, create if not\n",
        "try:\n",
        "    client.get_dataset(dataset_id)\n",
        "    logger.info(f\"Dataset {dataset_id} already exists.\")\n",
        "except NotFound:\n",
        "    logger.info(f\"Dataset {dataset_id} not found. Creating dataset.\")\n",
        "    dataset = bigquery.Dataset(f\"{project_id}.{dataset_id}\")\n",
        "    dataset.location = location  # Use the location defined earlier\n",
        "    client.create_dataset(dataset, timeout=30)\n",
        "    logger.info(f\"Dataset {dataset_id} created.\")\n",
        "\n",
        "\n",
        "# Check if the table exists, create if not\n",
        "try:\n",
        "    client.get_table(table_id)\n",
        "    logger.info(f\"Table {table_name} already exists.\")\n",
        "except NotFound:\n",
        "    logger.info(f\"Table {table_name} not found. Creating table.\")\n",
        "    # Create table with inferred schema (schema will be inferred during the load job)\n",
        "    table = bigquery.Table(table_id)\n",
        "    client.create_table(table, exists_ok=True)\n",
        "    logger.info(f\"Table {table_name} created.\")\n",
        "\n",
        "\n",
        "# Load data from DataFrame to BigQuery\n",
        "job_config = bigquery.LoadJobConfig(\n",
        "    write_disposition=\"WRITE_APPEND\",  # Append data if table exists\n",
        ")\n",
        "\n",
        "job = client.load_table_from_dataframe(output_df, table_id, job_config=job_config)\n",
        "\n",
        "job.result()  # Wait for the job to complete.\n",
        "\n",
        "logger.info(f\"Loaded {job.output_rows} rows into {table_id}.\")"
      ]
    },
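    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# @title Step 7.5 (optional): Quick aggregate view of the eval results\n",
        "# A minimal summary sketch over the metric columns computed in Step 6.4;\n",
        "# useful for spotting overall answer quality at a glance before the deeper\n",
        "# analysis suggested in the conclusion below.\n",
        "metric_columns = [\"BLEU_score\", \"ROUGE_score\", \"BERTScore\", \"similarity\"]\n",
        "print(output_df[metric_columns].describe())"
      ]
    },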
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "186a3b18e798"
      },
      "source": [
        "## Learnings and Conclusion\n",
        "\n",
        "### What We Learned\n",
        "\n",
        "This notebook provided a comprehensive framework for evaluating the quality of answers generated by a Gemini Enterprise application. Our process involved several key stages:\n",
        "\n",
        "1.  **Environment Setup and Configuration:**\n",
        "    *   We successfully installed and imported all necessary libraries for data manipulation (pandas), NLP evaluation (NLTK, rouge-score, bert-score, sentence-transformers), and Google Cloud integration (google-cloud-discoveryengine, gspread, google-auth).\n",
        "    *   We configured essential parameters, including Google Cloud project details, Gemini Enterprise engine ID, application type (search/assist), and locations for input (Google Sheets) and output (CSV, Google Sheets, BigQuery).\n",
        "\n",
        "2.  **Data Ingestion and Answer Retrieval:**\n",
        "    *   The notebook demonstrated how to load a \"golden dataset\" containing queries and their expected answers from a Google Sheet.\n",
        "    *   We utilized helper functions to interact with the Gemini Enterprise Discovery Engine APIs (specifically `get_answer_results` in the main flow) to retrieve answers generated by the Gemini Enterprise application for each query in our golden dataset.\n",
        "\n",
        "3.  **Metric-Based Evaluation:**\n",
        "    *   We implemented a robust evaluation methodology using four distinct NLP metrics:\n",
        "        *   **BLEU Score:** To measure n-gram precision against reference answers.\n",
        "        *   **ROUGE-L Score:** To measure overlap based on the longest common subsequence (reported as the F-measure).\n",
        "        *   **BERTScore:** To assess semantic similarity using contextual embeddings, going beyond lexical overlap.\n",
        "        *   **Semantic Similarity:** Calculated using the `all-MiniLM-L6-v2` Sentence Transformer model to capture the similarity in meaning between generated and expected answers.\n",
        "    *   For each query, we computed these four scores by comparing the Gemini Enterprise-generated answer with the corresponding expected answer.\n",
        "    *   A qualitative rating system (Excellent, Good, Moderate, Low, Poor) was applied to the numerical scores, providing an easily interpretable assessment of answer quality for each metric.\n",
        "\n",
        "4.  **Results Storage and Reporting:**\n",
        "    *   The evaluation results, including the raw scores, qualitative ratings, original queries, expected answers, and actual answers, were compiled into a structured Pandas DataFrame.\n",
        "    *   A timestamp was added to each evaluation run for tracking purposes.\n",
        "    *   The notebook demonstrated multiple ways to persist these results:\n",
        "        *   Saving to a local CSV file.\n",
        "        *   Writing to a new or existing worksheet in a Google Sheet.\n",
        "        *   Appending to a BigQuery table, enabling historical analysis and more complex querying. The notebook also handled the creation of the BigQuery dataset and table if they didn't already exist.\n",
        "\n",
        "### Conclusion\n",
        "\n",
        "This notebook successfully establishes a repeatable and quantitative process for evaluating the performance of a Gemini Enterprise application's answer generation capabilities. By leveraging a golden dataset and a suite of established NLP metrics, we can:\n",
        "\n",
        "*   **Objectively measure answer quality:** Moving beyond subjective assessments to data-driven insights.\n",
        "*   **Identify areas for improvement:** Pinpoint queries or types of queries where the Gemini Enterprise application may be underperforming.\n",
        "*   **Track performance over time:** By storing results (especially in BigQuery), we can monitor how changes to the Gemini Enterprise configuration, underlying data, or models impact answer quality.\n",
        "*   **Benchmark different configurations:** The framework can be used to compare the performance of different Gemini Enterprise engine settings or versions.\n",
        "\n",
        "The integration with Google Sheets and BigQuery makes the golden dataset management and results analysis highly accessible and scalable. The qualitative ratings alongside numerical scores offer a balanced view, catering to both technical and non-technical stakeholders.\n",
        "\n",
        "**Potential Next Steps:**\n",
        "\n",
        "*   **Automate the evaluation pipeline:** Integrate this notebook into a CI/CD pipeline for regular, automated performance checks.\n",
        "*   **Expand the golden dataset:** Continuously add more diverse and challenging queries to the golden dataset for more comprehensive testing.\n",
        "*   **Error Analysis:** Perform a deeper dive into queries with low scores to understand the root causes of poor performance (e.g., issues with grounding, summarization, or relevance).\n",
        "*   **Experiment with different models/settings:** Use this evaluation framework to systematically test the impact of different Gemini Enterprise configurations or underlying LLMs.\n",
        "*   **Visualize results:** Create dashboards (e.g., in Looker Studio using BigQuery data) to visualize evaluation trends and key metrics.\n",
        "\n",
        "This evaluation framework is a valuable asset for anyone looking to rigorously assess and improve their Gemini Enterprise applications."
      ]
    }
  ],
  "metadata": {
    "colab": {
      "name": "gemini_enterprise_eval.ipynb",
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
