{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "ur8xi4C7S06n"
   },
   "outputs": [],
   "source": [
    "# Copyright 2025 Google LLC\n",
    "#\n",
    "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
    "# you may not use this file except in compliance with the License.\n",
    "# You may obtain a copy of the License at\n",
    "#\n",
    "#     https://www.apache.org/licenses/LICENSE-2.0\n",
    "#\n",
    "# Unless required by applicable law or agreed to in writing, software\n",
    "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
    "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
    "# See the License for the specific language governing permissions and\n",
    "# limitations under the License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "0HwZ0xnBlLzH"
   },
   "source": [
    "# Intro to Batch Evaluations with the Gemini API\n",
    "\n",
    "<table align=\"left\">\n",
    "  <td style=\"text-align: center\">\n",
    "    <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/gemini/evaluation/evaltask_approach/intro_batch_evaluation.ipynb\">\n",
    "      <img width=\"32px\" src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
    "    </a>\n",
    "  </td>\n",
    "  <td style=\"text-align: center\">\n",
    "    <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fgemini%2Fevaluation%2Fevaltask_approach%2Fintro_batch_evaluation.ipynb\">\n",
    "      <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
    "    </a>\n",
    "  </td>\n",
    "  <td style=\"text-align: center\">\n",
    "    <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/gemini/evaluation/evaltask_approach/intro_batch_evaluation.ipynb\">\n",
    "      <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/vertexai/v1/32px.svg\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n",
    "    </a>\n",
    "  </td>\n",
    "  <td style=\"text-align: center\">\n",
    "    <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/evaluation/evaltask_approach/intro_batch_evaluation.ipynb\">\n",
    "      <img width=\"32px\" src=\"https://raw.githubusercontent.com/primer/octicons/refs/heads/main/icons/mark-github-24.svg\" alt=\"GitHub logo\"><br> View on GitHub\n",
    "    </a>\n",
    "  </td>\n",
    "</table>\n",
    "\n",
    "<div style=\"clear: both;\"></div>\n",
    "\n",
    "<b>Share to:</b>\n",
    "\n",
    "<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/evaluation/evaltask_approach/intro_batch_evaluation.ipynb\" target=\"_blank\">\n",
    "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n",
    "</a>\n",
    "\n",
    "<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/evaluation/evaltask_approach/intro_batch_evaluation.ipynb\" target=\"_blank\">\n",
    "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n",
    "</a>\n",
    "\n",
    "<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/evaluation/evaltask_approach/intro_batch_evaluation.ipynb\" target=\"_blank\">\n",
    "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n",
    "</a>\n",
    "\n",
    "<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/evaluation/evaltask_approach/intro_batch_evaluation.ipynb\" target=\"_blank\">\n",
    "  <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n",
    "</a>\n",
    "\n",
    "<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/evaluation/evaltask_approach/intro_batch_evaluation.ipynb\" target=\"_blank\">\n",
    "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n",
    "</a>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "84f0f73a0f76"
   },
   "source": [
    "| Author(s) |\n",
    "| --- |\n",
    "| Jessica Wang, [Ivan Nardini](https://github.com/inardini) |"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "tvgnzT1CKxrO"
   },
   "source": [
    "## Overview\n",
    "\n",
     "Unlike online (synchronous) requests, which are limited to one input at a time, batch evaluations in Vertex AI let you send a large number of evaluation requests to a Gemini model in a single batch request. The results are then written asynchronously to your output location in [Cloud Storage](https://cloud.google.com/storage/docs/introduction).\n",
    "\n",
    "### Objectives\n",
    "\n",
     "In this tutorial, you learn how to run batch evaluations with the Gemini API in Vertex AI. This tutorial uses **Cloud Storage** as the input source; the Vertex AI Gen AI evaluation service also supports **BigQuery**. Refer to the [documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/models/run-evaluation#batch-eval) to learn more.\n",
    "\n",
    "You will complete the following tasks:\n",
    "\n",
    "- Preparing batch inputs and an output location\n",
     "- Submitting a batch evaluation as a long-running operation\n",
    "- Retrieving batch evaluation results\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "61RBz8LLbxCR"
   },
   "source": [
    "## Get started"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "BMHhBIAaCSLv"
   },
   "source": [
    "### Install Google Vertex AI SDK and other required packages"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "H86tTuJB2G-t"
   },
   "outputs": [],
   "source": [
    "%pip install google-cloud-aiplatform[evaluation] gcsfs --force-reinstall --quiet"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "dmWOrTJ3gx13"
   },
   "source": [
    "### Authenticate your notebook environment\n",
    "\n",
    "If you're running this notebook on Google Colab, run the cell below to authenticate your environment."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "NyKGtVQjgx13"
   },
   "outputs": [],
   "source": [
    "# from google.colab import auth\n",
    "# auth.authenticate_user()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "G4uuqlA8XxdM"
   },
   "outputs": [],
   "source": [
    "# ! gcloud auth login"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "DF4l8DTdWgPY"
   },
   "source": [
    "### Set Google Cloud project information\n",
    "\n",
    "To get started using Vertex AI, you must have an existing Google Cloud project and [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).\n",
    "\n",
    "Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "Nqwi-5ufWp_B"
   },
   "outputs": [],
   "source": [
    "# Use the environment variable if the user doesn't provide Project ID.\n",
    "import os\n",
    "\n",
    "import vertexai\n",
    "\n",
    "PROJECT_ID = \"[your-project-id]\"  # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
    "if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n",
    "    PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n",
    "\n",
    "LOCATION = os.environ.get(\"GOOGLE_CLOUD_REGION\", \"us-central1\")\n",
    "\n",
    "BUCKET_NAME = \"[your-bucket-name]\"  # @param {type: \"string\", placeholder: \"[your-bucket-name]\", isTemplate: true}\n",
    "BUCKET_URI = f\"gs://{BUCKET_NAME}\"\n",
    "\n",
    "!gsutil mb -l {LOCATION} {BUCKET_URI}\n",
    "\n",
    "vertexai.init(project=PROJECT_ID, location=LOCATION)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "BfKq2vOWpFoR"
   },
   "source": [
    "### Import libraries"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "JxCmmFj5pHUw"
   },
   "outputs": [],
   "source": [
    "import json\n",
    "import subprocess\n",
    "import time\n",
    "from pprint import pprint\n",
    "\n",
    "import pandas as pd\n",
    "from IPython.display import display"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "eECVD7xAuvIV"
   },
   "source": [
    "### Helper functions\n",
    "\n",
     "Here you can find some helper functions:\n",
     "\n",
     "- `send_request` and `get_operation`: These handle the mechanics of making authenticated API calls using `curl` and `gcloud`. While the Vertex AI Python SDK covers many tasks, here we use `curl` to call the batch evaluation REST endpoint directly.\n",
     "\n",
     "- `expand_json_columns_in_df_simplified`, `extract_metric_score`, `style_df_for_slide_corrected`: These are our data wrangling and presentation helpers. The API returns results in a nested JSON format. These functions parse that JSON, extract scores and explanations, and format the final DataFrame into an easy-to-read table."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "5Mpm7mcfuwUO"
   },
   "outputs": [],
   "source": [
    "def send_request(request_file):\n",
    "    \"\"\"\n",
    "    Makes an authenticated POST request to the given API endpoint using gcloud authentication.\n",
    "    \"\"\"\n",
     "    address = f\"https://{LOCATION}-aiplatform.googleapis.com/v1beta1/projects/{PROJECT_ID}/locations/{LOCATION}:evaluateDataset\"\n",
    "    try:\n",
    "        # Get the access token\n",
    "        token_result = subprocess.run(\n",
    "            [\"gcloud\", \"auth\", \"print-access-token\"],\n",
    "            capture_output=True,\n",
    "            text=True,\n",
    "            check=True,\n",
    "        )\n",
    "        access_token = token_result.stdout.strip()\n",
    "\n",
    "        # Construct the curl command\n",
    "        curl_command = [\n",
    "            \"curl\",\n",
    "            \"-i\",\n",
    "            \"-X\",\n",
    "            \"POST\",\n",
    "            \"-H\",\n",
    "            \"Content-Type: application/json\",\n",
    "            \"-H\",\n",
    "            f\"Authorization: Bearer {access_token}\",\n",
    "            address,\n",
    "            \"-d\",\n",
    "            f\"@{request_file}\",\n",
    "        ]\n",
    "\n",
    "        # Execute the curl command\n",
    "        response = subprocess.run(\n",
    "            curl_command, capture_output=True, text=True, check=True\n",
    "        )\n",
    "\n",
    "        # Extract JSON from the response (ignoring HTTP headers)\n",
    "        json_part = response.stdout.split(\"\\n\\n\")[-1]  # Extract last part after headers\n",
    "\n",
    "        # Try parsing the response as JSON\n",
    "        try:\n",
    "            response_json = json.loads(json_part)\n",
    "            return response_json.get(\"name\", \"No 'name' field found in response\")\n",
    "        except json.JSONDecodeError:\n",
    "            return \"Failed to parse response as JSON:\\n\" + response.stdout\n",
    "\n",
    "    except subprocess.CalledProcessError as e:\n",
    "        return f\"Error executing request: {e}\"\n",
    "\n",
    "\n",
    "def get_operation(operation):\n",
    "    \"\"\"\n",
    "    Makes an authenticated request to the given API endpoint using gcloud authentication.\n",
    "    \"\"\"\n",
    "    address = f\"https://{LOCATION}-aiplatform.googleapis.com/v1beta1/\" + operation\n",
    "    try:\n",
    "        # Get the access token\n",
    "        token_result = subprocess.run(\n",
    "            [\"gcloud\", \"auth\", \"print-access-token\"],\n",
    "            capture_output=True,\n",
    "            text=True,\n",
    "            check=True,\n",
    "        )\n",
    "        access_token = token_result.stdout.strip()\n",
    "\n",
    "        # Construct the curl command\n",
    "        curl_command = [\n",
    "            \"curl\",\n",
     "            \"-X\",\n",
     "            \"GET\",\n",
    "            \"-H\",\n",
    "            \"Content-Type: application/json\",\n",
    "            \"-H\",\n",
    "            f\"Authorization: Bearer {access_token}\",\n",
    "            address,\n",
    "        ]\n",
    "\n",
    "        # Execute the curl command\n",
    "        response = subprocess.run(\n",
    "            curl_command, capture_output=True, text=True, check=True\n",
    "        )\n",
    "\n",
    "        # Try parsing the response as JSON\n",
    "        try:\n",
    "            return json.loads(response.stdout)\n",
    "        except json.JSONDecodeError:\n",
    "            print(\"raw response\")\n",
    "            return response.stdout  # Return raw response if not JSON\n",
    "\n",
    "    except subprocess.CalledProcessError as e:\n",
    "        return f\"Error: {e}\"\n",
    "\n",
    "\n",
    "def expand_json_columns_in_df_simplified(\n",
    "    df: pd.DataFrame,\n",
    "    json_instance_col: str = \"jsonInstance\",\n",
    "    eval_results_col: str = \"evaluationResults\",\n",
    ") -> pd.DataFrame:\n",
    "    \"\"\"\n",
    "    Expands JSON data stored in specified columns of a Pandas DataFrame (Simplified).\n",
    "    \"\"\"\n",
    "\n",
    "    # Input validation\n",
    "    if json_instance_col not in df.columns:\n",
    "        raise ValueError(f\"Column '{json_instance_col}' not found in DataFrame.\")\n",
    "    if eval_results_col not in df.columns:\n",
    "        raise ValueError(f\"Column '{eval_results_col}' not found in DataFrame.\")\n",
    "\n",
    "    # Helper function to process each row\n",
    "    def _process_row_simplified(row):\n",
    "        prompt, reference, response = None, None, None\n",
    "        score, explanation = None, None\n",
    "\n",
    "        # Process jsonInstance column\n",
    "        json_instance_str = row[json_instance_col]\n",
    "        if isinstance(json_instance_str, str) and json_instance_str:\n",
    "            try:\n",
    "                inner_data = json.loads(json_instance_str)\n",
    "                if isinstance(inner_data, dict):\n",
    "                    prompt = inner_data.get(\"prompt\")\n",
    "                    reference = inner_data.get(\"reference\")\n",
    "                    response = inner_data.get(\"response\")\n",
     "            except Exception:  # includes json.JSONDecodeError, TypeError, etc.\n",
    "                pass\n",
    "\n",
    "        # Process evaluationResults column\n",
    "        evaluation_results = row[eval_results_col]\n",
    "        if isinstance(evaluation_results, list) and len(evaluation_results) > 0:\n",
    "            first_result = evaluation_results[0]\n",
    "            if isinstance(first_result, dict):\n",
    "                pointwise_result = first_result.get(\"pointwiseMetricResult\")\n",
    "                if isinstance(pointwise_result, dict):\n",
    "                    score = pointwise_result.get(\"score\")\n",
    "                    explanation = pointwise_result.get(\"explanation\")\n",
    "\n",
    "        return pd.Series(\n",
    "            [prompt, reference, response, score, explanation],\n",
    "            index=[\"prompt\", \"reference\", \"response\", \"score\", \"explanation\"],\n",
    "        )\n",
    "\n",
    "    # Apply the helper function row-wise\n",
    "    extracted_data_df = df.apply(_process_row_simplified, axis=1)\n",
    "    return extracted_data_df\n",
    "\n",
    "\n",
    "def extract_metric_score(\n",
    "    df: pd.DataFrame,\n",
    "    metric_col: str = \"pointwiseMetricResult\",\n",
    "    score_key: str = \"score\",\n",
    ") -> pd.DataFrame:\n",
    "    \"\"\"\n",
    "    Extracts a numeric score from a dictionary stored in a DataFrame column.\n",
    "    \"\"\"\n",
    "\n",
    "    # Input validation\n",
    "    if metric_col not in df.columns:\n",
    "        raise ValueError(f\"Column '{metric_col}' not found in DataFrame.\")\n",
    "\n",
    "    # Extract function\n",
    "    def _get_score(metric_dict):\n",
    "        \"\"\"Helper function to safely extract the score.\"\"\"\n",
    "        if isinstance(metric_dict, dict):\n",
    "            return metric_dict.get(score_key)\n",
    "        return None\n",
    "\n",
    "    # Apply the helper function to the metric column\n",
    "    extracted_scores = df[metric_col].apply(_get_score)\n",
    "\n",
    "    # Assign the new Series as a column to the DataFrame\n",
    "    df[metric_col] = extracted_scores\n",
    "\n",
    "    # Convert the new column to numeric, coercing errors to NaN\n",
    "    df[metric_col] = pd.to_numeric(df[metric_col], errors=\"coerce\")\n",
    "\n",
    "    return df\n",
    "\n",
    "\n",
    "def style_df_for_slide_corrected(\n",
    "    df: pd.DataFrame,\n",
    "    n_rows: int = 10,\n",
    "    text_col_width: int = 200,\n",
     "    cols_to_show: list | None = None,\n",
    "    score_precision: int = 2,\n",
    "    font_size: str = \"10pt\",\n",
    "    caption: str = \"Model Evaluation Results\",\n",
    ") -> \"pd.io.formats.style.Styler\":\n",
    "    \"\"\"\n",
    "    Styles a DataFrame for better presentation, suitable for slide screenshots.\n",
    "    \"\"\"\n",
    "    if not isinstance(df, pd.DataFrame):\n",
    "        raise TypeError(\"Input must be a Pandas DataFrame.\")\n",
    "\n",
    "    # Select cols and rows ---\n",
    "    if cols_to_show is None:\n",
    "        default_cols = [\"prompt\", \"reference\", \"response\", \"score\", \"explanation\"]\n",
    "        cols_to_show = [col for col in default_cols if col in df.columns]\n",
    "        if not cols_to_show:\n",
    "            cols_to_show = list(df.columns)\n",
    "\n",
    "    missing_cols = [col for col in cols_to_show if col not in df.columns]\n",
    "    if missing_cols:\n",
    "        raise ValueError(f\"Columns not found in DataFrame: {missing_cols}\")\n",
    "\n",
    "    df_view = df[cols_to_show].head(n_rows).copy()\n",
    "\n",
    "    text_cols = [\"prompt\", \"reference\", \"response\", \"explanation\"]\n",
    "    text_cols_in_view = [col for col in text_cols if col in df_view.columns]\n",
    "\n",
    "    # Format text\n",
    "    for col in text_cols_in_view:\n",
    "        df_view[col] = df_view[col].fillna(\"\").astype(str)\n",
    "        df_view[col] = df_view[col].str.slice(0, text_col_width) + df_view[col].apply(\n",
    "            lambda x: \"...\" if len(x) > text_col_width else \"\"\n",
    "        )\n",
    "\n",
    "    # Apply style\n",
    "    styler = df_view.style\n",
    "\n",
    "    # Format nums\n",
    "    if \"score\" in df_view.columns:\n",
    "        styler = styler.format({\"score\": f\"{{:.{score_precision}f}}\"}, na_rep=\"-\")\n",
    "\n",
    "    # General table styles\n",
    "    styles = [\n",
    "        {\n",
    "            \"selector\": \"th\",\n",
    "            \"props\": [\n",
    "                (\"font-size\", font_size),\n",
    "                (\"text-align\", \"center\"),\n",
    "                (\"font-weight\", \"bold\"),\n",
    "                (\"background-color\", \"#f2f2f2\"),\n",
    "            ],\n",
    "        },\n",
    "        {\n",
    "            \"selector\": \"td\",\n",
    "            \"props\": [\n",
    "                (\"font-size\", font_size),\n",
    "                (\"text-align\", \"left\"),\n",
    "                (\"padding\", \"5px\"),\n",
    "            ],\n",
    "        },\n",
    "        {\"selector\": \"tr:nth-child(even)\", \"props\": [(\"background-color\", \"#f9f9f9\")]},\n",
    "        {\n",
    "            \"selector\": \"table\",\n",
    "            \"props\": [\n",
    "                (\"border-collapse\", \"collapse\"),\n",
    "                (\"border\", \"1px solid #ccc\"),\n",
    "                (\"width\", \"100%\"),\n",
    "            ],\n",
    "        },\n",
    "        {\"selector\": \"th, td\", \"props\": [(\"border\", \"1px solid #ddd\")]},\n",
    "        {\n",
    "            \"selector\": \"caption\",\n",
    "            \"props\": [\n",
    "                (\"caption-side\", \"top\"),\n",
    "                (\"font-size\", \"1.2em\"),\n",
    "                (\"font-weight\", \"bold\"),\n",
    "                (\"margin\", \"10px\"),\n",
    "            ],\n",
    "        },\n",
    "    ]\n",
    "    styler = styler.set_table_styles(styles)\n",
    "\n",
    "    # Hide index\n",
    "    styler = styler.hide(axis=\"index\")\n",
    "\n",
    "    # Add caption\n",
    "    if caption:\n",
    "        styler = styler.set_caption(caption)\n",
    "\n",
    "    # Specific column alignment\n",
    "    if \"score\" in df_view.columns:\n",
    "        styler = styler.set_properties(subset=[\"score\"], **{\"text-align\": \"center\"})\n",
    "\n",
    "    return styler"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "VQgfCUd7rF2S"
   },
   "source": [
    "## Prepare evaluation metrics\n",
    "\n",
    "This is where you define how you want to evaluate your model's responses. The batch evaluation service is powerful and flexible. You can use:\n",
    "\n",
     "- [Model-based metrics](https://cloud.google.com/vertex-ai/generative-ai/docs/models/determine-eval#pointwise-pairwise): Use another powerful model (the \"autorater\") to judge the quality of your target model's output. You can provide a custom prompt template, as we do here, to guide the autorater. This is especially useful for assessing subjective qualities like \"fluency,\" \"style,\" or \"safety.\" The autorater model defaults to `gemini-2.0-flash` if not specified in the request, and both pointwise and pairwise evaluation are supported.\n",
    "\n",
     "- [Computation-based metrics](https://cloud.google.com/vertex-ai/generative-ai/docs/models/determine-eval#computation-based-metrics): These are traditional, objective metrics, such as Exact Match, ROUGE (for summarization), and BLEU (for translation), that compare the generated text to a reference text.\n",
    "\n",
     "Batch evaluation also supports aggregation over successfully evaluated instances.\n",
     "By specifying one or more of the following aggregation metrics, you get a high-level summary of the scores across the entire dataset:\n",
    "\n",
    "  - AVERAGE\n",
    "  - MODE\n",
    "  - STANDARD_DEVIATION\n",
    "  - VARIANCE\n",
    "  - MINIMUM\n",
    "  - MAXIMUM\n",
    "  - MEDIAN\n",
    "  - PERCENTILE_P90\n",
    "  - PERCENTILE_P95\n",
    "  - PERCENTILE_P99"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "gRZsEnNScMys"
   },
   "outputs": [],
   "source": [
    "metrics = [\n",
    "    {\n",
    "        \"pointwise_metric_spec\": {\n",
    "            \"metric_prompt_template\": (\n",
    "                \"Evaluate the fluency of this sentence: {response}. \"\n",
    "                \"Give score from 0 to 1. 0 - not fluent at all. \"\n",
    "                \"1 - very fluent.\"\n",
    "            )\n",
    "        },\n",
    "        \"aggregation_metrics\": [\"AVERAGE\", \"MEDIAN\"],\n",
    "    }\n",
    "]"
   ]
  },
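  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sketch, computation-based metrics can be requested in the same way. The field names below (`exact_match_spec`, `rouge_spec`) are assumptions based on the Gen AI evaluation service metric specs; verify them against the [documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/models/determine-eval#computation-based-metrics) before use."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical example: computation-based metrics with aggregation.\n",
    "# Field names are assumptions; check the evaluateDataset request schema.\n",
    "computation_metrics = [\n",
    "    {\"exact_match_spec\": {}, \"aggregation_metrics\": [\"AVERAGE\"]},\n",
    "    {\n",
    "        \"rouge_spec\": {\"rouge_type\": \"rougeL\"},\n",
    "        \"aggregation_metrics\": [\"AVERAGE\", \"STANDARD_DEVIATION\"],\n",
    "    },\n",
    "]"
   ]
  },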
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "1_xZADsak23H"
   },
   "source": [
    "## Prepare evaluation dataset\n",
    "\n",
    "Now, let's create our evaluation dataset. The batch service expects each item in your dataset to be a JSON object. For a pointwise, model-based evaluation like ours, each JSON object needs a prompt, a response (the model output you want to evaluate), and optionally a reference (a ground-truth answer).\n",
    "\n",
    "Here, we're creating a pandas DataFrame first because it's a familiar and easy way to structure data. We have three columns:\n",
    "\n",
    "- `prompt`: The input given to the model (in this case, a text to be summarized).\n",
    "\n",
     "- `reference`: A \"golden\" summary. Our specific \"fluency\" metric won't use it, but other metrics could, and it's good practice to include it.\n",
    "\n",
     "- `response`: The actual summary generated by the model we're testing.\n",
     "\n",
     "The input for a batch request specifies the items to send to the autorater model for evaluation. Batch evaluation supports both Cloud Storage JSONL files and BigQuery tables. In this tutorial, we use a Cloud Storage JSONL file."
   ]
  },
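  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For illustration, once written out as JSONL, each DataFrame row becomes one self-contained JSON object per line (values shortened here):\n",
    "\n",
    "    {\"prompt\": \"Researchers at the Institute...\", \"reference\": \"A new solar panel...\", \"response\": \"Researchers developed a new solar panel...\"}"
   ]
  },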
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "SaQ3xqNxhss7"
   },
   "outputs": [],
   "source": [
    "eval_dict = {\n",
    "    \"prompt\": [\n",
    "        \"Researchers at the Institute for Advanced Studies have developed a new type of solar panel that boasts a 5% increase in efficiency compared to current market leaders. The innovation lies in a novel perovskite crystal structure that is both more stable and better at capturing a wider spectrum of light. Commercial production is expected within three years.\",\n",
    "        \"Introducing the 'SilentStep' treadmill. Engineered with advanced noise-reduction technology, it allows for near-silent operation, perfect for apartment living or early morning workouts. It features 12 pre-set programs, a heart rate monitor, and folds easily for storage. Maximum user weight is 250 lbs.\",\n",
    "        \"This study investigated the effects of intermittent fasting (IF) versus daily caloric restriction (DCR) on metabolic markers in overweight adults over 12 weeks. Both groups achieved similar weight loss. However, the IF group showed significantly better improvements in insulin sensitivity and reduction in visceral fat compared to the DCR group, suggesting potential unique metabolic benefits beyond weight loss alone.\",\n",
    "        \"The old lighthouse stood sentinel on the cliff, its beam cutting through the thick fog rolling in from the sea. For generations, its light had guided ships safely to the harbor below. Elias, the keeper, felt the weight of that tradition as he climbed the winding stairs for his nightly duty, the rhythmic groan of the turning lens a familiar comfort.\",\n",
    "        \"The project planning meeting concluded with action items assigned. Marketing (Jane) to finalize competitor analysis by Friday. Engineering (Tom) to provide a prototype schematic by next Wednesday. Budget approval pending confirmation from Finance (Mr. Davies). Next sync meeting scheduled for Thursday, 10 AM.\",\n",
    "        \"To prepare the marinade, combine 1/4 cup soy sauce, 2 tablespoons honey, 1 tablespoon sesame oil, 2 minced garlic cloves, and 1 teaspoon grated ginger in a bowl. Whisk well. Add your protein (chicken, beef, or tofu) and ensure it's fully coated. Marinate for at least 30 minutes, or preferably 2 hours in the refrigerator.\",\n",
    "        \"The Library of Alexandria, in Egypt, was one of the largest and most significant libraries of the ancient world. Flourishing under the Ptolemaic dynasty, it was dedicated to the Muses, the nine goddesses of the arts. It functioned more as a research institution, attracting scholars from across the Hellenistic world, but its eventual destruction remains a subject of debate among historians.\",\n",
    "        \"A blockchain is a distributed, immutable ledger. Transactions are grouped into blocks, each cryptographically linked to the previous one using a hash. This chain structure, combined with decentralization across many computers, makes it extremely difficult to tamper with recorded data.\",\n",
    "        \"Deforestation in the Amazon rainforest continues to be a major environmental concern, primarily driven by cattle ranching and agriculture. This loss of forest cover contributes significantly to global carbon emissions and biodiversity loss. Recent satellite data indicates a slight decrease in the rate of deforestation compared to the previous year, but levels remain alarmingly high.\",\n",
    "        \"While the novel's premise was intriguing - a world where memories can be traded - the execution felt uneven. Character development was shallow, particularly for the protagonist, and the pacing dragged significantly in the middle third. However, the world-building details were imaginative and offered glimpses of a truly fascinating concept.\",\n",
    "    ],\n",
    "    \"reference\": [\n",
    "        \"A new solar panel developed by institute researchers shows a 5% efficiency gain over current leaders due to a novel, stable perovskite structure capturing more light. Commercialization is expected in three years.\",\n",
    "        \"The 'SilentStep' treadmill offers near-silent operation suitable for shared spaces. It includes 12 programs, a heart rate monitor, easy folding for storage, and supports up to 250 lbs.\",\n",
    "        \"A 12-week study comparing intermittent fasting (IF) and daily caloric restriction (DCR) in overweight adults found similar weight loss, but IF led to significantly better insulin sensitivity and visceral fat reduction, indicating unique metabolic advantages.\",\n",
    "        \"An old lighthouse keeper, Elias, feels the weight of tradition as he tends the light that has guided ships through fog for generations, finding comfort in the familiar sounds of the lighthouse.\",\n",
    "        \"Meeting takeaways: Jane (Marketing) to complete competitor analysis by Friday; Tom (Engineering) to deliver prototype schematic by next Wednesday. Budget approval awaits Finance confirmation. Next meeting: Thursday, 10 AM.\",\n",
    "        \"Whisk together soy sauce, honey, sesame oil, minced garlic, and grated ginger for the marinade. Coat protein (chicken, beef, tofu) and marinate for at least 30 minutes (2 hours refrigerated recommended).\",\n",
    "        \"The ancient Library of Alexandria in Egypt, significant under the Ptolemaic dynasty and dedicated to the Muses, served as a major research hub attracting scholars. Its destruction is historically debated.\",\n",
    "        \"A blockchain is a distributed, unchangeable ledger where transactions are grouped into blocks cryptographically linked in a chain, making tampering difficult due to decentralization.\",\n",
    "        \"Amazon deforestation, mainly from cattle ranching and agriculture, remains a critical environmental issue contributing to carbon emissions and biodiversity loss. While recent data shows a slight decrease, the rate is still alarmingly high.\",\n",
    "        \"The novel's intriguing memory-trading premise suffered from uneven execution, with shallow character development and slow pacing, though its imaginative world-building was a highlight.\",\n",
    "    ],\n",
    "    \"response\": [\n",
    "        \"Researchers developed a new solar panel. It is 5% more efficient. It uses a new perovskite crystal structure. This structure is stable and captures more light. It should be commercially produced in three years.\",\n",
    "        \"This is the 'SilentStep' treadmill. It is quiet because of noise-reduction tech. Good for apartments. Has 12 programs. Has heart monitor. Folds. Max weight 250 lbs.\",\n",
    "        \"Overweight adults were studied for 12 weeks comparing intermittent fasting (IF) and daily caloric restriction (DCR). Weight loss was similar. IF group had better insulin sensitivity and less visceral fat than DCR group. IF might have unique benefits.\",\n",
    "        \"A lighthouse on a cliff guides ships in fog. Elias is the keeper. He climbs the stairs for his duty. He feels the tradition. The lens groans.\",\n",
    "        \"The meeting ended. Jane will do competitor analysis by Friday. Tom will provide a schematic next Wednesday. Finance needs to approve the budget. The next meeting is Thursday at 10 AM.\",\n",
    "        \"Make the marinade: mix 1/4c soy sauce, 2tbsp honey, 1tbsp sesame oil, 2 cloves garlic (minced), 1tsp ginger (grated). Whisk it. Put protein in. Cover it. Marinate 30+ minutes, better for 2 hours in fridge.\",\n",
    "        \"The Library of Alexandria was a big ancient library in Egypt. It was important during the Ptolemaic rule and dedicated to the Muses. Scholars came there to research. How it was destroyed is debated by historians.\",\n",
    "        \"Blockchain is like a shared digital book that cannot be changed easily. Information (transactions) goes into blocks. Blocks are linked using crypto hashes. Because it's spread out on many computers, changing data is very hard.\",\n",
    "        \"Deforestation in the Amazon is a big worry. Cattle and farming are main causes. It increases carbon emissions and hurts biodiversity. Satellites show the rate decreased slightly last year, but it's still very high.\",\n",
    "        \"The book had a cool idea about trading memories. But it wasn't done perfectly. Characters weren't deep, especially the main one. The middle part was slow. The world details were creative and showed a good concept.\",\n",
    "    ],\n",
    "}\n",
    "\n",
    "eval_dict[\"prompt\"] = [\n",
    "    f\"Summarize the following text:\\n{p}\" for p in eval_dict[\"prompt\"]\n",
    "]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "IkAdyVmw-Xj3"
   },
   "outputs": [],
   "source": [
    "eval_df = pd.DataFrame(eval_dict)\n",
    "eval_df.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "YWEd9CpwpM41"
   },
   "source": [
    "## Load the eval dataset into Cloud Storage\n",
    "\n",
    "The batch evaluation service reads its input from Google Cloud Storage (or BigQuery). Here, we take our pandas DataFrame and save it to GCS in the required JSONL (JSON Lines) format. Each line in the file is a separate, complete JSON object."
   ]
  },
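  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, each line of the uploaded file is one complete JSON object with a `prompt` and a `response` field, along these lines (values shortened for illustration):\n",
    "\n",
    "```json\n",
    "{\"prompt\": \"Summarize the following text: ...\", \"response\": \"Researchers developed a new solar panel. ...\"}\n",
    "```"
   ]
  },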
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "3Q0gq_rc_owb"
   },
   "outputs": [],
   "source": [
    "evaluation_file_uri = BUCKET_URI + \"/pairwise_data.jsonl\"\n",
    "eval_df.to_json(evaluation_file_uri, orient=\"records\", lines=True)"
   ]
  },
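  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check, you can read the file straight back from GCS (pandas resolves `gs://` URIs when the `gcsfs` package is available, which we assume here):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Read the uploaded JSONL back to verify its structure\n",
    "pd.read_json(evaluation_file_uri, lines=True).head(2)"
   ]
  },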
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "pM0o19tH3mQK"
   },
   "source": [
    "## Send a batch evaluation request\n",
    "\n",
    "It's time to assemble the final request. This is the heart of our tutorial. We're creating a JSON object that specifies:\n",
    "\n",
    "- `dataset`: Points to the `pairwise_data.jsonl` file we just uploaded to GCS.\n",
    "\n",
    "- `metrics`: Includes the metrics configuration we defined earlier (the model-based fluency check).\n",
    "\n",
    "- `output_config`: Tells the service where to save the results (the root of our GCS bucket).\n",
    "\n",
    "We save this request to a local file and then use our `send_request` helper function to kick off the job.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "DJgifx88_0Sz"
   },
   "outputs": [],
   "source": [
    "request = {\n",
    "    # `gcs_source.uris` is a repeated field, so pass a list of URIs\n",
    "    \"dataset\": {\"gcs_source\": {\"uris\": [evaluation_file_uri]}},\n",
    "    \"metrics\": metrics,\n",
    "    \"output_config\": {\"gcs_destination\": {\"output_uri_prefix\": BUCKET_URI}},\n",
    "}\n",
    "\n",
    "# Write the JSON to a file\n",
    "with open(\"pairwise_fluency_request.json\", \"w\") as json_file:\n",
    "    json.dump(request, json_file, indent=2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "89VJboTU3sN6"
   },
   "outputs": [],
   "source": [
    "operation = send_request(\"pairwise_fluency_request.json\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "E5X-3VDX3xnz"
   },
   "source": [
    "## Wait for the batch evaluation job to complete\n",
    "\n",
    "Batch jobs are asynchronous, meaning they run in the background. The `while` loop below is a simple poller: it checks the status of the job every 30 seconds using our `get_operation` helper.\n",
    "\n",
    "Once the `done` field appears in the response, the job has finished and we can move on to the fun part: seeing the results.\n",
    "\n",
    "For a real-world application, you might use a more robust notification mechanism such as Pub/Sub or Cloud Functions instead of a polling loop."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "QheQlZhN35d6"
   },
   "outputs": [],
   "source": [
    "# Refresh the job until complete\n",
    "while \"done\" not in get_operation(operation):\n",
    "    print(\"Batch evaluation job is running...\")\n",
    "    time.sleep(30)\n",
    "\n",
    "# Fetch the final operation response\n",
    "response_json = get_operation(operation)\n",
    "print(\"Operation complete. See the results path under outputInfo.\")"
   ]
  },
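  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a middle ground between the simple loop above and a full Pub/Sub setup, a poller with a timeout keeps a stuck job from blocking the notebook indefinitely. The sketch below is generic: `wait_for_operation` and the stub `fake_get` are hypothetical names, with `fake_get` standing in for a zero-argument wrapper around our `get_operation` helper so the cell is self-contained."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import time\n",
    "\n",
    "\n",
    "def wait_for_operation(get_fn, poll_interval=30, timeout=1800):\n",
    "    \"\"\"Poll get_fn() until the returned dict has a truthy 'done' key.\"\"\"\n",
    "    waited = 0\n",
    "    while True:\n",
    "        op = get_fn()\n",
    "        if op.get(\"done\"):\n",
    "            return op\n",
    "        if waited >= timeout:\n",
    "            raise TimeoutError(\"Batch evaluation job did not complete in time\")\n",
    "        time.sleep(poll_interval)\n",
    "        waited += poll_interval\n",
    "\n",
    "\n",
    "# Hypothetical stub: reports 'done' on the third poll\n",
    "_calls = {\"n\": 0}\n",
    "\n",
    "\n",
    "def fake_get():\n",
    "    _calls[\"n\"] += 1\n",
    "    return {\"done\": True} if _calls[\"n\"] >= 3 else {}\n",
    "\n",
    "\n",
    "wait_for_operation(fake_get, poll_interval=0, timeout=10)"
   ]
  },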
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "HrNdlNq0e873"
   },
   "outputs": [],
   "source": [
    "pprint(response_json)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "iaZqGEtexdXT"
   },
   "source": [
    "## Get the evaluation results\n",
    "\n",
    "Success! The job is done. The API response contains the GCS path where the results are stored. We'll grab that path and use pandas to read the output JSONL files directly into DataFrames.\n",
    "\n",
    "- `evaluation_results.jsonl` contains the detailed, row-by-row evaluation for each item in our dataset.\n",
    "- `aggregation_results.jsonl` contains the overall AVERAGE and MEDIAN scores we requested.\n",
    "\n",
    "Finally, we use our helper functions to parse the nested JSON and display the results in a styled table. You should see the fluency score and the autorater's explanation for each response, which makes it easy to diagnose your model's performance."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "W-s8Mqo-qPPB"
   },
   "outputs": [],
   "source": [
    "output_uri = response_json[\"response\"][\"outputInfo\"][\"gcsOutputDirectory\"]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "g1iClcKQCxk4"
   },
   "outputs": [],
   "source": [
    "evaluation_results = expand_json_columns_in_df_simplified(\n",
    "    pd.read_json(\n",
    "        output_uri + \"/evaluation_results.jsonl\",\n",
    "        lines=True,\n",
    "    )\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "dfPk9L47JM2k"
   },
   "outputs": [],
   "source": [
    "styled_table = style_df_for_slide_corrected(\n",
    "    evaluation_results,\n",
    "    n_rows=5,\n",
    "    text_col_width=300,\n",
    "    cols_to_show=[\"prompt\", \"response\", \"score\", \"explanation\"],\n",
    "    caption=\"Evaluation Summary\",\n",
    ")\n",
    "\n",
    "display(styled_table)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "Lfeck78oDbMR"
   },
   "outputs": [],
   "source": [
    "evaluation_results_agg = extract_metric_score(\n",
    "    pd.read_json(\n",
    "        output_uri + \"/aggregation_results.jsonl\",\n",
    "        lines=True,\n",
    "    )\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "SVqThP7gFnXx"
   },
   "outputs": [],
   "source": [
    "evaluation_results_agg"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "name": "intro_batch_evaluation.ipynb",
   "toc_visible": true
  },
  "environment": {
   "kernel": "python3",
   "name": "common-cpu.m129",
   "type": "gcloud",
   "uri": "us-docker.pkg.dev/deeplearning-platform-release/gcr.io/base-cpu:m129"
  },
  "kernelspec": {
   "display_name": "Python 3 (Local)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.17"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
