{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "i3oNB_qC4X2Y"
      },
      "outputs": [],
      "source": [
        "# Copyright 2025 Google LLC\n",
        "#\n",
        "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "#     https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "c4-kxwz23nzr"
      },
      "source": [
        "# Supervised Fine-Tuning with integrated Gen AI Evaluation\n",
        "\n",
        "\n",
        "<table align=\"left\">\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/sft_gemini_automatic_evaluation.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fgemini%2Ftuning%2Fsft_gemini_automatic_evaluation.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
        "    </a>\n",
        "  </td>    \n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/gemini/tuning/sft_gemini_automatic_evaluation.ipynb\">\n",
        "      <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/vertexai/v1/32px.svg\" alt=\"Vertex AI logo\"><br> Open in Workbench\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/sft_gemini_automatic_evaluation.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://raw.githubusercontent.com/primer/octicons/refs/heads/main/icons/mark-github-24.svg\" alt=\"GitHub logo\"><br> View on GitHub\n",
        "    </a>\n",
        "  </td>\n",
        "</table>\n",
        "\n",
        "<div style=\"clear: both;\"></div>\n",
        "\n",
        "<b>Share to:</b>\n",
        "\n",
        "<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/sft_gemini_automatic_evaluation.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/sft_gemini_automatic_evaluation.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/sft_gemini_automatic_evaluation.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/sft_gemini_automatic_evaluation.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/sft_gemini_automatic_evaluation.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n",
        "</a>            "
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pO98gUu-4eTJ"
      },
      "source": [
        "| Authors |\n",
        "| --- |\n",
        "| Kelsi Lakey |\n",
        "| [Ivan Nardini](https://github.com/inardini) |"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "RaUnT2fXLRe5"
      },
      "source": [
        "This notebook demonstrates a powerful new feature in Vertex AI: [integrated evaluation for supervised fine-tuning (SFT)](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini-use-supervised-tuning#create_a_text_model_supervised_tuning_job).\n",
        "\n",
        "Traditionally, tuning a model and evaluating it are two separate, time-consuming steps. This creates a slow feedback loop, making it difficult to iterate quickly.\n",
        "\n",
        "This notebook shows the new, integrated workflow: by adding a simple `EvaluationConfig` to your tuning job, the Gen AI Evaluation Service reports performance metrics on model checkpoints *as they are being trained*.\n",
        "\n",
        "This allows you to:\n",
        "\n",
        "*   **Iterate faster**: see whether a tuning run is working in minutes, not hours.\n",
        "*   **Save costs**: identify and stop bad runs early.\n",
        "*   **Build better models**: get a clear view of how your model improves over time, helping you pinpoint the best checkpoint and avoid overfitting.\n",
        "\n",
        "\n",
        "You will learn how to:\n",
        "\n",
        "* Configure an `EvaluationConfig` to define custom, model-based metrics.\n",
        "* Launch a supervised fine-tuning job for `gemini-2.5-flash` with automatic evaluation enabled.\n",
        "* Monitor the job and retrieve performance results for each checkpoint programmatically.  \n",
        "* Find and analyze your tuning and evaluation results in the Vertex AI Experiments UI."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "llEFILYz2aye"
      },
      "source": [
        "## Get started"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "oo2rh4cC2e1r"
      },
      "source": [
        "### Install Google Gen AI SDK and other required packages\n",
        "\n",
        "The new Google Gen AI SDK provides a unified interface to Gemini through both the Gemini Developer API and the Gemini API on Vertex AI. With a few exceptions, code that runs on one platform will run on both. This means that you can prototype an application using the Developer API and then migrate the application to Vertex AI without rewriting your code."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "l_ok3vdw2cyf"
      },
      "outputs": [],
      "source": [
        "%pip install --upgrade --user --quiet google-genai google-cloud-aiplatform gradio plotly"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "21gF8JP8RPso"
      },
      "source": [
        "### Authenticate your notebook environment (Colab only)\n",
        "\n",
        "You'll need a Google Cloud project with the Vertex AI API enabled. Authenticate your notebook environment to continue."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "86VNaqlgD9rK"
      },
      "outputs": [],
      "source": [
        "import sys\n",
        "\n",
        "if \"google.colab\" in sys.modules:\n",
        "    from google.colab import auth\n",
        "\n",
        "    auth.authenticate_user()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "zEWOpk9Qd-g3"
      },
      "source": [
        "### Set Google Cloud project information\n",
        "\n",
        "To get started using Vertex AI, you must have an existing Google Cloud project and [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).\n",
        "\n",
        "Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "8OmiMYjpeHv8"
      },
      "outputs": [],
      "source": [
        "# Use the environment variable if the user doesn't provide Project ID.\n",
        "import os\n",
        "\n",
        "from google import genai\n",
        "from google.cloud import aiplatform\n",
        "\n",
        "# Please fill in these values for your project.\n",
        "PROJECT_ID = \"[your-project-id]\"  # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
        "if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n",
        "    PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n",
        "\n",
        "LOCATION = os.environ.get(\"GOOGLE_CLOUD_REGION\", \"us-central1\")\n",
        "\n",
        "# A GCS bucket is required for tuning and evaluation artifacts.\n",
        "BUCKET_NAME = \"[your-gcs-bucket-name]\"  # @param {type:\"string\", placeholder: \"[your-gcs-bucket-name]\", isTemplate: true}\n",
        "BUCKET_URI = f\"gs://{BUCKET_NAME}\"\n",
        "\n",
        "# Create the GCS bucket (gsutil prints an error you can ignore if the bucket already exists)\n",
        "!gsutil mb -l {LOCATION} -p {PROJECT_ID} {BUCKET_URI}\n",
        "\n",
        "# Initialize the Vertex AI and Gen AI SDKs\n",
        "aiplatform.init(project=PROJECT_ID, location=LOCATION, staging_bucket=BUCKET_URI)\n",
        "client = genai.Client(vertexai=True, project=PROJECT_ID, location=LOCATION)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "k8CI-TcqD06L"
      },
      "source": [
        "### Import libraries"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "rerpHL_eEG8D"
      },
      "outputs": [],
      "source": [
        "import time\n",
        "import uuid\n",
        "\n",
        "import pandas as pd\n",
        "from google.genai import types"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DhjmRffOOPAS"
      },
      "source": [
        "## Initialize model\n",
        "\n",
        "Define the model to be tuned. `gemini-2.5-flash` is a Gemini text model that supports supervised tuning."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "jL-zRl5_OVZW"
      },
      "outputs": [],
      "source": [
        "base_model = \"gemini-2.5-flash\""
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4xIqi0Vqeqwn"
      },
      "source": [
        "## Prepare your datasets\n",
        "\n",
        "For integrated evaluation, you need both a training dataset and a validation dataset. The model trains on the former and is tested against the latter at each checkpoint. The data should be in JSONL format, where each line is a JSON object with a `contents` list of alternating `user` and `model` turns.\n",
        "\n",
        "Example format:\n",
        "\n",
        "```json\n",
        "{\n",
        "  \"contents\": [\n",
        "    {\n",
        "      \"role\": \"user\",\n",
        "      \"parts\": [\n",
        "        {\n",
        "          \"text\": \"Honesty is usually the best policy. It is disrespectful to lie to someone. If you don't want to date someone, you should say so.  Sometimes it is easy to be honest. ... It is not necessary for everyone around you to know that you are turning down a date.\\n\\nProvide a summary of the article in two or three sentences:\\n\\n\"\n",
        "        }\n",
        "      ]\n",
        "    },\n",
        "    {\n",
        "      \"role\": \"model\",\n",
        "      \"parts\": [\n",
        "        {\n",
        "          \"text\": \"Tell the truth. Use a \\\"compliment sandwich\\\". Be direct. Treat the person with respect. Communicate effectively.\"\n",
        "        }\n",
        "      ]\n",
        "    }\n",
        "  ]\n",
        "}\n",
        "```\n",
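        "\n",
        "A quick, hedged sanity check you can run on any record before launching a job (the line below is an abbreviated version of the example above):\n",
        "\n",
        "```python\n",
        "import json\n",
        "\n",
        "line = '{\"contents\": [{\"role\": \"user\", \"parts\": [{\"text\": \"...\"}]}, {\"role\": \"model\", \"parts\": [{\"text\": \"...\"}]}]}'\n",
        "record = json.loads(line)\n",
        "\n",
        "# Each record should hold an alternating user/model conversation.\n",
        "roles = [turn[\"role\"] for turn in record[\"contents\"]]\n",
        "assert roles == [\"user\", \"model\"], f\"Unexpected roles: {roles}\"\n",
        "print(\"Record OK:\", roles)\n",
        "```\n",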
        "\n",
        "For this tutorial, we will use public sample datasets."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "VpzmI1K61Tn2"
      },
      "outputs": [],
      "source": [
        "training_dataset_uri = \"gs://cloud-samples-data/ai-platform/generative_ai/gemini-2_0/text/sft_train_data.jsonl\"\n",
        "validation_dataset_uri = \"gs://cloud-samples-data/ai-platform/generative_ai/gemini-2_0/text/sft_validation_data.jsonl\""
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7YGurtXHJy_y"
      },
      "source": [
        "## Define the Evaluation Config\n",
        "\n",
        "This is the core of the new feature.\n",
        "\n",
        "The `EvaluationConfig` object tells the tuning job how to measure performance: another model (an \"autorater\") judges the quality of your tuned model's responses. Here, we'll define a metric that evaluates the \"fluency\" of the output.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "_SdsjiWGVze8"
      },
      "outputs": [],
      "source": [
        "evaluation_config = types.EvaluationConfig(\n",
        "    # Required. Define a list of metrics. A minimum of 1 is required.\n",
        "    metrics=[\n",
        "        types.Metric(\n",
        "            name=\"fluency\",\n",
        "            prompt_template=\"Evaluate the sentence fluency of the response. Provide a score from 1-5.\\n RESPONSE: {response}\",\n",
        "            # Optional. Guide the autorater's persona for better results.\n",
        "            judge_model_system_instruction=\"You are a professional editor specializing in linguistics.\",\n",
        "        ),\n",
        "    ],\n",
        "    # Required. Define where to store the detailed, row-by-row evaluation results.\n",
        "    output_config=types.OutputConfig(\n",
        "        gcs_destination=types.GcsDestination(\n",
        "            output_uri_prefix=f\"{BUCKET_URI}/evaluation\"\n",
        "        )\n",
        "    ),\n",
        "    # Optional. Configure the autorater itself.\n",
        "    autorater_config=types.AutoraterConfig(\n",
        "        # The number of validation samples to evaluate for each checkpoint.\n",
        "        # This is a trade-off: more samples = better metrics but higher cost/time.\n",
        "        sampling_count=6,\n",
        "    ),\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "IgMb3E0YEqL2"
      },
      "source": [
        "## Fine-tune the Model with Integrated Evaluation\n",
        "\n",
        "Now we launch the tuning job. We provide the base model, the datasets, and our `evaluation_config`. The service handles the rest, automatically running evaluations on checkpoints.\n",
        "\n",
        "*Note: The default hyperparameter settings are optimized for most use cases. You can customize parameters like `adapter_size` to address specific performance needs.*"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "vQM2vDBZ27b_"
      },
      "outputs": [],
      "source": [
        "tuned_model_display_name = (\n",
        "    \"gemini-flash-integrated-eval-demo\" + f\"_{str(uuid.uuid4())[:8]}\"\n",
        ")\n",
        "\n",
        "training_dataset = {\n",
        "    \"gcs_uri\": training_dataset_uri,\n",
        "}\n",
        "\n",
        "validation_dataset = types.TuningValidationDataset(\n",
        "    gcs_uri=validation_dataset_uri,\n",
        ")\n",
        "\n",
        "# Start the tuning job. This is an asynchronous call.\n",
        "# The SDK's tuning implementation is experimental and may change in future versions.\n",
        "sft_tuning_job = client.tunings.tune(\n",
        "    base_model=base_model,\n",
        "    training_dataset=training_dataset,\n",
        "    config=types.CreateTuningJobConfig(\n",
        "        tuned_model_display_name=tuned_model_display_name,\n",
        "        validation_dataset=validation_dataset,\n",
        "        evaluation_config=evaluation_config,\n",
        "    ),\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "yLlAgVjCNqXg"
      },
      "outputs": [],
      "source": [
        "# Get the tuning job for status checks.\n",
        "tuning_job = client.tunings.get(name=sft_tuning_job.name)\n",
        "print(tuning_job)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "gpM4q8PkhUHE"
      },
      "source": [
        "## Monitor the tuning job\n",
        "\n",
        "Tuning time depends on several factors, such as training data size, number of epochs, learning rate multiplier, etc.\n",
        "\n",
        "\n",
        "**⚠️ This tuning job and its evaluations will take roughly 45 minutes to complete.**"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "p8o3CTerhi_r"
      },
      "outputs": [],
      "source": [
        "print(\n",
        "    \"Monitoring job... This will take time. You can safely close this notebook and come back later.\"\n",
        ")\n",
        "\n",
        "while sft_tuning_job.state not in [\n",
        "    types.JobState.JOB_STATE_CANCELLED,\n",
        "    types.JobState.JOB_STATE_FAILED,\n",
        "    types.JobState.JOB_STATE_SUCCEEDED,\n",
        "]:\n",
        "    time.sleep(600)  # Check status every 10 minutes\n",
        "    # Re-fetch the job: the TuningJob object is a snapshot, so get the latest state.\n",
        "    sft_tuning_job = client.tunings.get(name=sft_tuning_job.name)\n",
        "    print(f\"Current job state: {sft_tuning_job.state.name}\")\n",
        "\n",
        "print(f\"Job finished with state: {sft_tuning_job.state.name}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4jHBD7nah7fZ"
      },
      "source": [
        "## Get the final tuned model and experiment details\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "e1O1xCBS6spi"
      },
      "outputs": [],
      "source": [
        "tuned_model = tuning_job.tuned_model.endpoint\n",
        "experiment_name = tuning_job.experiment\n",
        "\n",
        "print(\"Tuned model experiment:\", experiment_name)\n",
        "print(\"Tuned model endpoint resource name:\", tuned_model)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8DzlWWKpbGcu"
      },
      "source": [
        "## View Gen AI Evaluation results\n",
        "\n",
        "Evaluation results for each checkpoint are logged to the associated Vertex Experiment. You can view them in the UI or access them programmatically.\n",
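        "\n",
        "The detailed results are also written as JSONL files under the `output_uri_prefix` you configured. As a minimal sketch (two inline sample lines stand in for a downloaded `aggregation_results.jsonl`; the scores are made up), the aggregated entries can be parsed like this:\n",
        "\n",
        "```python\n",
        "import json\n",
        "\n",
        "# Sample lines in the shape the service writes (illustrative scores only).\n",
        "sample_lines = [\n",
        "    '{\"aggregationMetric\": \"AVERAGE\", \"pointwiseMetricResult\": {\"score\": 4.2}}',\n",
        "    '{\"aggregationMetric\": \"STANDARD_DEVIATION\", \"pointwiseMetricResult\": {\"score\": 0.5}}',\n",
        "]\n",
        "\n",
        "scores = {}\n",
        "for line in sample_lines:\n",
        "    entry = json.loads(line)\n",
        "    scores[entry[\"aggregationMetric\"]] = entry[\"pointwiseMetricResult\"][\"score\"]\n",
        "\n",
        "print(scores)\n",
        "```\n",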
        "\n",
        "Each checkpoint is described by a `tuning-evaluation-checkpoint-#` Experiment Run, which includes the following Gen AI Evaluation metrics:\n",
        "  - Error message (if the evaluation failed)\n",
        "  - `gcsDestination` (location of the row-based evaluation results)\n",
        "  - `gcsSource` (location of the evaluation dataset)\n",
        "  - Aggregated evaluation metric results\n",
        "    - For the SDK: `AVERAGE` and `STANDARD_DEVIATION` values for each metric\n",
        "    - For the API: user-defined aggregation metrics\n",
        "    - Aggregated results are indexed in the order the metrics were defined in the Evaluation Config"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "psRbCfzwWz_g"
      },
      "source": [
        "### View Gen AI Evaluation Metrics using Vertex AI Experiments SDK"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "5J1LP3nCbNlg"
      },
      "outputs": [],
      "source": [
        "# Locate the Vertex AI Experiment and list the evaluation runs for each checkpoint\n",
        "experiment_runs = aiplatform.ExperimentRun.list(experiment=experiment_name)\n",
        "for run in experiment_runs:\n",
        "    if \"-evaluation-\" in run.name:\n",
        "        print(f\"--- Results for Run: {run.name} ---\")\n",
        "        print(pd.DataFrame.from_dict(run.get_metrics(), orient=\"index\"))\n",
        "        print(\"\\n\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "QNyncQ4nh_Ur"
      },
      "source": [
        "### View Results in the Google Cloud Console\n",
        "\n",
        "You can also get a rich, visual view of the results:\n",
        "\n",
        "1. Navigate to the **Vertex AI \\> Generative AI \\> Tuning** page in the Google Cloud Console.  \n",
        "2. Click on your tuning job (its display name starts with `gemini-flash-integrated-eval-demo`).  \n",
        "3. On the Details page, click the link under the **Experiment** field. This will take you directly to the Vertex AI Experiments page where you can compare metrics across all checkpoints.\n",
        "\n",
        "Here you can see an example:\n",
        "\n",
        "![7pdquo8m2vaizrs-1.png](https://storage.googleapis.com/github-repo/generative-ai/sft_gemini_automatic_evaluation/7pdquo8m2vaizrs-1.png)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "HMpW3AmBwB6c"
      },
      "source": [
        "## (Optional) Evaluation Results Viewer\n",
        "\n",
        "Launch a Gradio app to visualize evaluation results from a GCS bucket."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "1MFd7AA-wwQu"
      },
      "outputs": [],
      "source": [
        "import json\n",
        "\n",
        "import gradio as gr\n",
        "import numpy as np\n",
        "import pandas as pd\n",
        "import plotly.graph_objects as go\n",
        "from google.cloud import storage\n",
        "\n",
        "\n",
        "class EvaluationViewer:\n",
        "    def __init__(self):\n",
        "        self.client = None\n",
        "        self.bucket_name = None\n",
        "        self.evaluation_data = {}\n",
        "\n",
        "    def connect_to_gcs(self, bucket_name: str) -> str:\n",
        "        \"\"\"Connect to GCS bucket and load evaluation data.\"\"\"\n",
        "        try:\n",
        "            self.client = storage.Client()\n",
        "            self.bucket_name = bucket_name\n",
        "            self.bucket = self.client.bucket(bucket_name)\n",
        "\n",
        "            # Load evaluation data\n",
        "            self.load_evaluation_data()\n",
        "\n",
        "            return f\"✓ Successfully connected to bucket: {bucket_name}\\nFound {len(self.evaluation_data)} evaluation runs\"\n",
        "        except Exception as e:\n",
        "            return f\"✗ Error connecting to bucket: {e!s}\"\n",
        "\n",
        "    def load_evaluation_data(self):\n",
        "        \"\"\"Load all evaluation data from the bucket.\"\"\"\n",
        "        self.evaluation_data = {}\n",
        "\n",
        "        # List all evaluation folders\n",
        "        blobs = self.bucket.list_blobs(prefix=\"evaluation/evaluation_\")\n",
        "\n",
        "        eval_folders = set()\n",
        "        for blob in blobs:\n",
        "            parts = blob.name.split(\"/\")\n",
        "            # Only add folders that start with 'evaluation_' and don't contain 'dataset_checkpoint'\n",
        "            if (\n",
        "                len(parts) >= 2\n",
        "                and parts[1].startswith(\"evaluation_\")\n",
        "                and \"dataset_checkpoint\" not in parts[1]\n",
        "            ):\n",
        "                eval_folders.add(parts[1])\n",
        "\n",
        "        # Sort folders by timestamp to assign checkpoint numbers\n",
        "        sorted_folders = sorted(eval_folders)\n",
        "\n",
        "        # Load data from each evaluation folder with checkpoint naming\n",
        "        for idx, folder in enumerate(sorted_folders, 1):\n",
        "            timestamp = folder.replace(\"evaluation_\", \"\")\n",
        "            checkpoint_name = f\"checkpoint_{idx}\"\n",
        "            self.evaluation_data[timestamp] = {\n",
        "                \"aggregation\": [],\n",
        "                \"results\": [],\n",
        "                \"checkpoint_name\": checkpoint_name,\n",
        "            }\n",
        "\n",
        "            # Load aggregation results\n",
        "            agg_blob = self.bucket.blob(\n",
        "                f\"evaluation/{folder}/aggregation_results.jsonl\"\n",
        "            )\n",
        "            if agg_blob.exists():\n",
        "                content = agg_blob.download_as_text()\n",
        "                for line in content.strip().split(\"\\n\"):\n",
        "                    if line:\n",
        "                        self.evaluation_data[timestamp][\"aggregation\"].append(\n",
        "                            json.loads(line)\n",
        "                        )\n",
        "\n",
        "            # Load evaluation results (sample first 100 for performance)\n",
        "            res_blob = self.bucket.blob(f\"evaluation/{folder}/evaluation_results.jsonl\")\n",
        "            if res_blob.exists():\n",
        "                content = res_blob.download_as_text()\n",
        "                lines = content.strip().split(\"\\n\")\n",
        "                for line in lines[:100]:  # Limit to first 100 for performance\n",
        "                    if line:\n",
        "                        self.evaluation_data[timestamp][\"results\"].append(\n",
        "                            json.loads(line)\n",
        "                        )\n",
        "\n",
        "    def get_overview_stats(self) -> dict:\n",
        "        \"\"\"Calculate overview statistics for all evaluation runs.\"\"\"\n",
        "        if not self.evaluation_data:\n",
        "            return {}\n",
        "\n",
        "        stats = {\n",
        "            \"total_runs\": len(self.evaluation_data),\n",
        "            \"total_samples\": 0,\n",
        "            \"overall_avg\": [],\n",
        "            \"overall_std\": [],\n",
        "            \"latest_run\": None,\n",
        "            \"latest_checkpoint\": None,\n",
        "            \"latest_avg\": None,\n",
        "            \"latest_std\": None,\n",
        "            \"best_run\": None,\n",
        "            \"best_checkpoint\": None,\n",
        "            \"best_avg\": float(\"-inf\"),\n",
        "            \"best_std\": None,\n",
        "            \"worst_run\": None,\n",
        "            \"worst_checkpoint\": None,\n",
        "            \"worst_avg\": float(\"inf\"),\n",
        "            \"worst_std\": None,\n",
        "        }\n",
        "\n",
        "        # Process each evaluation run\n",
        "        for timestamp, data in sorted(self.evaluation_data.items(), reverse=True):\n",
        "            avg_score = None\n",
        "            std_score = None\n",
        "\n",
        "            for agg in data[\"aggregation\"]:\n",
        "                if agg[\"aggregationMetric\"] == \"AVERAGE\":\n",
        "                    avg_score = agg[\"pointwiseMetricResult\"][\"score\"]\n",
        "                elif agg[\"aggregationMetric\"] == \"STANDARD_DEVIATION\":\n",
        "                    std_score = agg[\"pointwiseMetricResult\"][\"score\"]\n",
        "\n",
        "            if avg_score is not None:\n",
        "                stats[\"overall_avg\"].append(avg_score)\n",
        "                if std_score is not None:\n",
        "                    stats[\"overall_std\"].append(std_score)\n",
        "\n",
        "                # Track latest run\n",
        "                if stats[\"latest_run\"] is None:\n",
        "                    stats[\"latest_run\"] = timestamp\n",
        "                    stats[\"latest_checkpoint\"] = data.get(\"checkpoint_name\", \"N/A\")\n",
        "                    stats[\"latest_avg\"] = avg_score\n",
        "                    stats[\"latest_std\"] = std_score\n",
        "\n",
        "                # Track best and worst\n",
        "                if avg_score > stats[\"best_avg\"]:\n",
        "                    stats[\"best_avg\"] = avg_score\n",
        "                    stats[\"best_std\"] = std_score\n",
        "                    stats[\"best_run\"] = timestamp\n",
        "                    stats[\"best_checkpoint\"] = data.get(\"checkpoint_name\", \"N/A\")\n",
        "\n",
        "                if avg_score < stats[\"worst_avg\"]:\n",
        "                    stats[\"worst_avg\"] = avg_score\n",
        "                    stats[\"worst_std\"] = std_score\n",
        "                    stats[\"worst_run\"] = timestamp\n",
        "                    stats[\"worst_checkpoint\"] = data.get(\"checkpoint_name\", \"N/A\")\n",
        "\n",
        "            stats[\"total_samples\"] += len(data[\"results\"])\n",
        "\n",
        "        # Calculate overall statistics\n",
        "        if stats[\"overall_avg\"]:\n",
        "            stats[\"global_avg\"] = np.mean(stats[\"overall_avg\"])\n",
        "            stats[\"global_std\"] = (\n",
        "                np.mean(stats[\"overall_std\"]) if stats[\"overall_std\"] else 0\n",
        "            )\n",
        "            stats[\"avg_range\"] = (min(stats[\"overall_avg\"]), max(stats[\"overall_avg\"]))\n",
        "\n",
        "        return stats\n",
        "\n",
        "    def get_aggregated_metrics_plot(self) -> go.Figure:\n",
        "        \"\"\"Create aggregated metrics visualization.\"\"\"\n",
        "        if not self.evaluation_data:\n",
        "            return go.Figure().add_annotation(\n",
        "                text=\"No evaluation data available\", showarrow=False\n",
        "            )\n",
        "\n",
        "        # Prepare data for plotting\n",
        "        timestamps = []\n",
        "        checkpoint_labels = []\n",
        "        averages = []\n",
        "        std_devs = []\n",
        "\n",
        "        for timestamp, data in sorted(self.evaluation_data.items()):\n",
        "            timestamps.append(timestamp[:19])  # Truncate microseconds for readability\n",
        "            checkpoint_labels.append(data.get(\"checkpoint_name\", \"N/A\"))\n",
        "\n",
        "            avg_score = None\n",
        "            std_score = None\n",
        "\n",
        "            for agg in data[\"aggregation\"]:\n",
        "                if agg[\"aggregationMetric\"] == \"AVERAGE\":\n",
        "                    avg_score = agg[\"pointwiseMetricResult\"][\"score\"]\n",
        "                elif agg[\"aggregationMetric\"] == \"STANDARD_DEVIATION\":\n",
        "                    std_score = agg[\"pointwiseMetricResult\"][\"score\"]\n",
        "\n",
        "            averages.append(avg_score if avg_score is not None else 0)\n",
        "            std_devs.append(std_score if std_score is not None else 0)\n",
        "\n",
        "        # Create subplots\n",
        "        fig = go.Figure()\n",
        "\n",
        "        # Add average scores with error bars\n",
        "        fig.add_trace(\n",
        "            go.Scatter(\n",
        "                x=timestamps,\n",
        "                y=averages,\n",
        "                error_y=dict(type=\"data\", array=std_devs, visible=True),\n",
        "                mode=\"lines+markers\",\n",
        "                name=\"Average Score\",\n",
        "                line=dict(color=\"blue\", width=2),\n",
        "                marker=dict(size=8),\n",
        "                text=checkpoint_labels,\n",
        "                hovertemplate=\"<b>%{text}</b><br>Time: %{x}<br>Score: %{y:.3f} ± %{error_y.array:.3f}<extra></extra>\",\n",
        "            )\n",
        "        )\n",
        "\n",
        "        fig.update_layout(\n",
        "            title=\"Evaluation Metrics Over Time\",\n",
        "            xaxis_title=\"Evaluation Run\",\n",
        "            yaxis_title=\"Score\",\n",
        "            hovermode=\"x unified\",\n",
        "            showlegend=True,\n",
        "            height=400,\n",
        "        )\n",
        "\n",
        "        return fig\n",
        "\n",
        "    def get_evaluation_results_table(\n",
        "        self, eval_run: str, start_idx: int = 0, page_size: int = 10\n",
        "    ) -> pd.DataFrame:\n",
        "        \"\"\"Get evaluation results as a paginated table.\"\"\"\n",
        "        if not eval_run or eval_run not in self.evaluation_data:\n",
        "            return pd.DataFrame(\n",
        "                {\"Message\": [\"Select an evaluation run to view results\"]}\n",
        "            )\n",
        "\n",
        "        results = self.evaluation_data[eval_run][\"results\"]\n",
        "\n",
        "        if not results:\n",
        "            return pd.DataFrame({\"Message\": [\"No results available for this run\"]})\n",
        "\n",
        "        # Prepare data for display\n",
        "        table_data = []\n",
        "        end_idx = min(start_idx + page_size, len(results))\n",
        "\n",
        "        for i in range(start_idx, end_idx):\n",
        "            result = results[i]\n",
        "            instance = json.loads(result[\"jsonInstance\"])\n",
        "\n",
        "            # Extract evaluation score and explanation\n",
        "            score = None\n",
        "            explanation = None\n",
        "            if result.get(\"evaluationResults\"):\n",
        "                eval_result = result[\"evaluationResults\"][0]\n",
        "                if \"pointwiseMetricResult\" in eval_result:\n",
        "                    score = eval_result[\"pointwiseMetricResult\"].get(\"score\", \"N/A\")\n",
        "                    explanation = eval_result[\"pointwiseMetricResult\"].get(\n",
        "                        \"explanation\", \"N/A\"\n",
        "                    )\n",
        "\n",
        "            table_data.append(\n",
        "                {\n",
        "                    \"Index\": i + 1,\n",
        "                    \"Request\": instance.get(\"request\", \"N/A\")[:100] + \"...\"\n",
        "                    if len(instance.get(\"request\", \"\")) > 100\n",
        "                    else instance.get(\"request\", \"N/A\"),\n",
        "                    \"Response\": instance.get(\"response\", \"N/A\")[:100] + \"...\"\n",
        "                    if len(instance.get(\"response\", \"\")) > 100\n",
        "                    else instance.get(\"response\", \"N/A\"),\n",
        "                    \"Reference\": instance.get(\"reference\", \"N/A\")[:100] + \"...\"\n",
        "                    if len(instance.get(\"reference\", \"\")) > 100\n",
        "                    else instance.get(\"reference\", \"N/A\"),\n",
        "                    \"Score\": score,\n",
        "                    \"Explanation\": explanation[:100] + \"...\"\n",
        "                    if explanation and len(explanation) > 100\n",
        "                    else explanation,\n",
        "                }\n",
        "            )\n",
        "\n",
        "        return pd.DataFrame(table_data)\n",
        "\n",
        "    def get_single_result_detail(self, eval_run: str, index: int) -> dict:\n",
        "        \"\"\"Get detailed view of a single evaluation result.\"\"\"\n",
        "        if not eval_run or eval_run not in self.evaluation_data:\n",
        "            return {\"error\": \"Invalid evaluation run\"}\n",
        "\n",
        "        results = self.evaluation_data[eval_run][\"results\"]\n",
        "\n",
        "        if index < 0 or index >= len(results):\n",
        "            return {\"error\": \"Invalid index\"}\n",
        "\n",
        "        result = results[index]\n",
        "        instance = json.loads(result[\"jsonInstance\"])\n",
        "\n",
        "        # Extract evaluation details\n",
        "        eval_details = {}\n",
        "        if result.get(\"evaluationResults\"):\n",
        "            eval_result = result[\"evaluationResults\"][0]\n",
        "            if \"pointwiseMetricResult\" in eval_result:\n",
        "                eval_details = eval_result[\"pointwiseMetricResult\"]\n",
        "\n",
        "        return {\n",
        "            \"request\": instance.get(\"request\", \"N/A\"),\n",
        "            \"response\": instance.get(\"response\", \"N/A\"),\n",
        "            \"reference\": instance.get(\"reference\", \"N/A\"),\n",
        "            \"baseline_model_response\": instance.get(\"baseline_model_response\", \"N/A\"),\n",
        "            \"score\": eval_details.get(\"score\", \"N/A\"),\n",
        "            \"explanation\": eval_details.get(\"explanation\", \"N/A\"),\n",
        "        }\n",
        "\n",
        "\n",
        "# Initialize the viewer\n",
        "viewer = EvaluationViewer()\n",
        "\n",
        "# Create Gradio interface\n",
        "with gr.Blocks(title=\"Evaluation Results Viewer\") as app:\n",
        "    gr.Markdown(\"# 📊 Evaluation Results Viewer\")\n",
        "    gr.Markdown(\n",
        "        \"Load data from your GCS bucket to visualize evaluation metrics and results.\"\n",
        "    )\n",
        "\n",
        "    with gr.Row():\n",
        "        bucket_input = gr.Textbox(\n",
        "            label=\"GCS Bucket Name\",\n",
        "            placeholder=\"Enter your GCS bucket name (e.g., my-evaluation-bucket)\",\n",
        "            scale=3,\n",
        "        )\n",
        "        connect_btn = gr.Button(\"Load from Bucket\", variant=\"primary\", scale=1)\n",
        "\n",
        "    status_output = gr.Textbox(label=\"Connection Status\", interactive=False)\n",
        "\n",
        "    with gr.Tabs():\n",
        "        with gr.Tab(\"📊 Overview\"):\n",
        "            gr.Markdown(\"### Evaluation Summary\")\n",
        "            with gr.Row():\n",
        "                with gr.Column(scale=1):\n",
        "                    overview_total_runs = gr.Number(\n",
        "                        label=\"Total Evaluation Runs\", interactive=False\n",
        "                    )\n",
        "                    overview_total_samples = gr.Number(\n",
        "                        label=\"Total Samples Evaluated\", interactive=False\n",
        "                    )\n",
        "                with gr.Column(scale=1):\n",
        "                    overview_global_avg = gr.Number(\n",
        "                        label=\"Global Average Score\", interactive=False, precision=3\n",
        "                    )\n",
        "                    overview_global_std = gr.Number(\n",
        "                        label=\"Global Std Deviation\", interactive=False, precision=3\n",
        "                    )\n",
        "\n",
        "            with gr.Row():\n",
        "                with gr.Column(scale=1):\n",
        "                    gr.Markdown(\"#### Latest Run\")\n",
        "                    overview_latest_checkpoint = gr.Textbox(\n",
        "                        label=\"Checkpoint\", interactive=False\n",
        "                    )\n",
        "                    overview_latest_run = gr.Textbox(\n",
        "                        label=\"Timestamp\", interactive=False\n",
        "                    )\n",
        "                    overview_latest_avg = gr.Number(\n",
        "                        label=\"Average Score\", interactive=False, precision=3\n",
        "                    )\n",
        "                    overview_latest_std = gr.Number(\n",
        "                        label=\"Std Deviation\", interactive=False, precision=3\n",
        "                    )\n",
        "                with gr.Column(scale=1):\n",
        "                    gr.Markdown(\"#### Best Run\")\n",
        "                    overview_best_checkpoint = gr.Textbox(\n",
        "                        label=\"Checkpoint\", interactive=False\n",
        "                    )\n",
        "                    overview_best_run = gr.Textbox(label=\"Timestamp\", interactive=False)\n",
        "                    overview_best_avg = gr.Number(\n",
        "                        label=\"Average Score\", interactive=False, precision=3\n",
        "                    )\n",
        "                    overview_best_std = gr.Number(\n",
        "                        label=\"Std Deviation\", interactive=False, precision=3\n",
        "                    )\n",
        "                with gr.Column(scale=1):\n",
        "                    gr.Markdown(\"#### Worst Run\")\n",
        "                    overview_worst_checkpoint = gr.Textbox(\n",
        "                        label=\"Checkpoint\", interactive=False\n",
        "                    )\n",
        "                    overview_worst_run = gr.Textbox(\n",
        "                        label=\"Timestamp\", interactive=False\n",
        "                    )\n",
        "                    overview_worst_avg = gr.Number(\n",
        "                        label=\"Average Score\", interactive=False, precision=3\n",
        "                    )\n",
        "                    overview_worst_std = gr.Number(\n",
        "                        label=\"Std Deviation\", interactive=False, precision=3\n",
        "                    )\n",
        "\n",
        "            gr.Markdown(\"#### Score Range\")\n",
        "            overview_range = gr.Textbox(\n",
        "                label=\"Min - Max Average Scores\", interactive=False\n",
        "            )\n",
        "\n",
        "        with gr.Tab(\"📈 Aggregated Metrics\"):\n",
        "            gr.Markdown(\"### Overall Evaluation Metrics\")\n",
        "            metrics_plot = gr.Plot(label=\"Metrics Over Time\")\n",
        "\n",
        "        with gr.Tab(\"📋 Evaluation Results\"):\n",
        "            gr.Markdown(\"### Browse Individual Evaluation Results\")\n",
        "\n",
        "            with gr.Row():\n",
        "                eval_run_dropdown = gr.Dropdown(\n",
        "                    label=\"Select Evaluation Run\", choices=[], interactive=True\n",
        "                )\n",
        "                refresh_runs_btn = gr.Button(\"🔄 Refresh\", scale=1)\n",
        "\n",
        "            with gr.Row():\n",
        "                page_size = gr.Slider(\n",
        "                    minimum=5, maximum=50, value=10, step=5, label=\"Results per page\"\n",
        "                )\n",
        "                page_number = gr.Number(value=1, label=\"Page\", minimum=1, precision=0)\n",
        "\n",
        "            results_table = gr.DataFrame(\n",
        "                label=\"Evaluation Results\", interactive=False, wrap=True\n",
        "            )\n",
        "\n",
        "        with gr.Tab(\"🔍 Result Details\"):\n",
        "            gr.Markdown(\"### Detailed View of Single Result\")\n",
        "\n",
        "            with gr.Row():\n",
        "                detail_eval_run = gr.Dropdown(\n",
        "                    label=\"Select Evaluation Run\", choices=[], interactive=True\n",
        "                )\n",
        "                result_index = gr.Number(\n",
        "                    value=0, label=\"Result Index (0-based)\", minimum=0, precision=0\n",
        "                )\n",
        "                load_detail_btn = gr.Button(\"Load Details\", variant=\"primary\")\n",
        "\n",
        "            with gr.Column():\n",
        "                gr.Markdown(\"#### Request\")\n",
        "                detail_request = gr.Textbox(label=\"\", lines=5, interactive=False)\n",
        "\n",
        "                gr.Markdown(\"#### Response\")\n",
        "                detail_response = gr.Textbox(label=\"\", lines=3, interactive=False)\n",
        "\n",
        "                gr.Markdown(\"#### Reference\")\n",
        "                detail_reference = gr.Textbox(label=\"\", lines=3, interactive=False)\n",
        "\n",
        "                gr.Markdown(\"#### Baseline Model Response\")\n",
        "                detail_baseline = gr.Textbox(label=\"\", lines=3, interactive=False)\n",
        "\n",
        "                with gr.Row():\n",
        "                    detail_score = gr.Number(label=\"Score\", interactive=False)\n",
        "                    detail_explanation = gr.Textbox(\n",
        "                        label=\"Explanation\", lines=2, interactive=False\n",
        "                    )\n",
        "\n",
        "    # Define interactions\n",
        "    def connect_and_load(bucket_name):\n",
        "        status = viewer.connect_to_gcs(bucket_name)\n",
        "\n",
        "        if \"Successfully\" in status:\n",
        "            # Get overview stats\n",
        "            stats = viewer.get_overview_stats()\n",
        "\n",
        "            # Update plots\n",
        "            metrics_fig = viewer.get_aggregated_metrics_plot()\n",
        "\n",
        "            # Update dropdown choices\n",
        "            eval_runs = list(viewer.evaluation_data.keys())\n",
        "\n",
        "            # Format overview outputs\n",
        "            overview_outputs = [\n",
        "                stats.get(\"total_runs\", 0),\n",
        "                stats.get(\"total_samples\", 0),\n",
        "                stats.get(\"global_avg\", 0),\n",
        "                stats.get(\"global_std\", 0),\n",
        "                stats.get(\"latest_checkpoint\", \"N/A\"),\n",
        "                stats.get(\"latest_run\", \"N/A\")[:19]\n",
        "                if stats.get(\"latest_run\")\n",
        "                else \"N/A\",\n",
        "                stats.get(\"latest_avg\", 0),\n",
        "                stats.get(\"latest_std\", 0),\n",
        "                stats.get(\"best_checkpoint\", \"N/A\"),\n",
        "                stats.get(\"best_run\", \"N/A\")[:19] if stats.get(\"best_run\") else \"N/A\",\n",
        "                stats.get(\"best_avg\", 0),\n",
        "                stats.get(\"best_std\", 0),\n",
        "                stats.get(\"worst_checkpoint\", \"N/A\"),\n",
        "                stats.get(\"worst_run\", \"N/A\")[:19] if stats.get(\"worst_run\") else \"N/A\",\n",
        "                stats.get(\"worst_avg\", 0),\n",
        "                stats.get(\"worst_std\", 0),\n",
        "                f\"{stats.get('avg_range', (0, 0))[0]:.3f} - {stats.get('avg_range', (0, 0))[1]:.3f}\"\n",
        "                if stats.get(\"avg_range\")\n",
        "                else \"N/A\",\n",
        "            ]\n",
        "\n",
        "            return (\n",
        "                status,\n",
        "                *overview_outputs,\n",
        "                metrics_fig,\n",
        "                gr.update(choices=eval_runs, value=eval_runs[0] if eval_runs else None),\n",
        "                gr.update(choices=eval_runs, value=eval_runs[0] if eval_runs else None),\n",
        "            )\n",
        "        return (\n",
        "            status,\n",
        "            0,\n",
        "            0,\n",
        "            0,\n",
        "            0,\n",
        "            \"N/A\",\n",
        "            \"N/A\",\n",
        "            0,\n",
        "            0,\n",
        "            \"N/A\",\n",
        "            \"N/A\",\n",
        "            0,\n",
        "            0,\n",
        "            \"N/A\",\n",
        "            \"N/A\",\n",
        "            0,\n",
        "            0,\n",
        "            \"N/A\",\n",
        "            go.Figure(),\n",
        "            gr.update(choices=[]),\n",
        "            gr.update(choices=[]),\n",
        "        )\n",
        "\n",
        "    def update_results_table(eval_run, page_num, page_size):\n",
        "        if not eval_run:\n",
        "            return pd.DataFrame()\n",
        "        start_idx = (page_num - 1) * page_size\n",
        "        return viewer.get_evaluation_results_table(eval_run, start_idx, page_size)\n",
        "\n",
        "    def load_result_details(eval_run, index):\n",
        "        details = viewer.get_single_result_detail(eval_run, int(index))\n",
        "\n",
        "        if \"error\" in details:\n",
        "            return \"\", \"\", \"\", \"\", None, details[\"error\"]\n",
        "\n",
        "        return (\n",
        "            details[\"request\"],\n",
        "            details[\"response\"],\n",
        "            details[\"reference\"],\n",
        "            details[\"baseline_model_response\"],\n",
        "            details[\"score\"],\n",
        "            details[\"explanation\"],\n",
        "        )\n",
        "\n",
        "    def refresh_runs():\n",
        "        eval_runs = list(viewer.evaluation_data.keys())\n",
        "        return (\n",
        "            gr.update(choices=eval_runs, value=eval_runs[0] if eval_runs else None),\n",
        "            gr.update(choices=eval_runs, value=eval_runs[0] if eval_runs else None),\n",
        "        )\n",
        "\n",
        "    # Connect button events\n",
        "    connect_btn.click(\n",
        "        fn=connect_and_load,\n",
        "        inputs=[bucket_input],\n",
        "        outputs=[\n",
        "            status_output,\n",
        "            overview_total_runs,\n",
        "            overview_total_samples,\n",
        "            overview_global_avg,\n",
        "            overview_global_std,\n",
        "            overview_latest_checkpoint,\n",
        "            overview_latest_run,\n",
        "            overview_latest_avg,\n",
        "            overview_latest_std,\n",
        "            overview_best_checkpoint,\n",
        "            overview_best_run,\n",
        "            overview_best_avg,\n",
        "            overview_best_std,\n",
        "            overview_worst_checkpoint,\n",
        "            overview_worst_run,\n",
        "            overview_worst_avg,\n",
        "            overview_worst_std,\n",
        "            overview_range,\n",
        "            metrics_plot,\n",
        "            eval_run_dropdown,\n",
        "            detail_eval_run,\n",
        "        ],\n",
        "    )\n",
        "\n",
        "    # Update table when dropdown or pagination changes\n",
        "    eval_run_dropdown.change(\n",
        "        fn=update_results_table,\n",
        "        inputs=[eval_run_dropdown, page_number, page_size],\n",
        "        outputs=[results_table],\n",
        "    )\n",
        "\n",
        "    page_number.change(\n",
        "        fn=update_results_table,\n",
        "        inputs=[eval_run_dropdown, page_number, page_size],\n",
        "        outputs=[results_table],\n",
        "    )\n",
        "\n",
        "    page_size.change(\n",
        "        fn=update_results_table,\n",
        "        inputs=[eval_run_dropdown, page_number, page_size],\n",
        "        outputs=[results_table],\n",
        "    )\n",
        "\n",
        "    # Refresh runs button\n",
        "    refresh_runs_btn.click(\n",
        "        fn=refresh_runs, outputs=[eval_run_dropdown, detail_eval_run]\n",
        "    )\n",
        "\n",
        "    # Load detail button\n",
        "    load_detail_btn.click(\n",
        "        fn=load_result_details,\n",
        "        inputs=[detail_eval_run, result_index],\n",
        "        outputs=[\n",
        "            detail_request,\n",
        "            detail_response,\n",
        "            detail_reference,\n",
        "            detail_baseline,\n",
        "            detail_score,\n",
        "            detail_explanation,\n",
        "        ],\n",
        "    )"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "0Y0z-H5xxMUg"
      },
      "outputs": [],
      "source": [
        "app.launch(share=True, height=800)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "M2ACJCgNxgXV"
      },
      "outputs": [],
      "source": [
        "app.close()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "mXnkGPjGjYWy"
      },
      "source": [
        "## Cleaning up\n",
        "\n",
        "To avoid incurring unexpected charges, it's important to clean up the resources created in this notebook."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "_BotIB6NjQlv"
      },
      "outputs": [],
      "source": [
        "delete_experiments = True\n",
        "delete_endpoint = True\n",
        "delete_bucket = True\n",
        "\n",
        "# Deleting experiment\n",
        "if delete_experiments:\n",
        "    experiment = aiplatform.Experiment.list()[0]\n",
        "    experiment.delete()\n",
        "\n",
        "# Deleting the endpoint itself removes the resource configuration.\n",
        "if delete_endpoint:\n",
        "    endpoint = aiplatform.Endpoint.list()[0]\n",
        "    endpoint.delete(force=True)\n",
        "\n",
        "# To fully clean up, you should also delete the model artifacts and dataset from your GCS bucket.\n",
        "# You can do this via the command line or the Google Cloud Console.\n",
        "if delete_bucket:\n",
        "    !gsutil -m rm -r {BUCKET_URI}"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "name": "sft_gemini_automatic_evaluation.ipynb",
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
